
Meta's war effort involves not only actual battles, but also the fight over open source AI.

Hello and welcome to Eye on AI! In this newsletter… Intel's Gaudi disappointment… Prime Video gets AI… OpenAI and Anthropic hiring news… Big Sleep pays off… and nuclear setbacks.

Meta wants the U.S. government to use its AI – even the military.

Yesterday, the company said it had assembled a smorgasbord of partners to make that happen, including consulting firms such as Accenture and Deloitte, cloud providers such as Microsoft and Oracle, and defense contractors such as Lockheed Martin and Palantir.

Meta's president of global affairs, Nick Clegg, wrote in a blog post that Oracle is tuning Meta's Llama AI model to "synthesize aircraft maintenance documents so technicians can more quickly and accurately diagnose problems," while Lockheed Martin uses it for code generation and data analysis. Scale AI, a defense contractor that counts Meta among its investors, "customizes Llama to support specific national security team missions, such as planning operations and identifying adversary vulnerabilities."

"As an American company that owes much of its success to the entrepreneurial spirit and democratic values the United States espouses, Meta wants to play its part in ensuring the safety, security and economic prosperity of America – and its closest allies too," wrote Clegg, a former deputy prime minister of the U.K.

But Clegg's post wasn't just about positioning Meta's AI as the patriot's choice. Perhaps more than anything, it was an attempt to frame Meta's version of open source AI as the right and desirable one.

Meta has always described Llama as "open source," in the sense that it gives out not just the model but also its weights – the parameters that make it easier to modify – along with various other safety tools and resources.
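
For context, "giving out the weights" means anyone who accepts Meta's license can download the checkpoint and run or adapt the model locally. Below is a minimal, hypothetical sketch of what that looks like, assuming the Hugging Face transformers library; the model ID is illustrative, and the real repository is gated behind Meta's license terms.

```python
# Minimal sketch (assumptions: Hugging Face transformers installed, and
# Meta's license accepted for the gated checkpoint; the model ID here
# is an illustrative choice, not a recommendation).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.1-8B"  # hypothetical choice of Llama release

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# With the weights on local disk, the model can be inspected, fine-tuned,
# or deployed offline -- the flexibility Clegg's post is selling.
inputs = tokenizer("Summarize this aircraft maintenance log:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```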

Many in the traditional open source software community dispute Meta's "open source" framing, mainly because the company does not disclose the training data it uses to create its Llama models, and because it places restrictions on Llama's use. Most relevantly, in the context of Monday's announcement, Llama's license states that it must not be used for military purposes.

The Open Source Initiative, which coined the term "open source" and continues to act as its steward, recently released an open source AI definition that, for these reasons, clearly does not apply to Llama. The same goes for the Linux Foundation, whose similarly fresh definition is not quite the same as the OSI's, but still explicitly demands information about training data, along with the ability for anyone to reuse and improve the model.

That is likely why Clegg's post (which mentions "open source" 13 times) frames Llama's deployment for U.S. national security as something that "will not only support the prosperity and security of the United States, but will also help establish American open source standards in the global race for AI leadership." According to Clegg, a "global open source standard for AI models" is coming – think Android, but for AI – and it will "form the foundation for AI development around the world and become embedded in technology, infrastructure and manufacturing, and global finance and e-commerce."

If the U.S. drops the ball, Clegg suggests, China's open source approach to AI will become the global standard.

The timing of this lobbying extravaganza is a little awkward, though, as it comes just days after Reuters reported that Chinese researchers with military ties had used a year-old version of Llama as the basis for ChatBIT, a tool for processing intelligence and aiding operational decision-making. That is roughly what Meta is now letting U.S. military contractors do with Llama – only without Meta's permission.

There are plenty of reasons to be skeptical about how much impact the Sinicization of Llama will really have. Given the rapid pace of AI development, the 13-billion-parameter version of Llama in question is far from cutting-edge. Reuters reports that ChatBIT "was found to outperform some other AI models that were roughly 90% as capable as OpenAI's powerful ChatGPT-4," but it's not clear what "capable" means here. It's not even clear whether ChatBIT is actually in use.

"In the global competition on AI, the alleged role of a single and outdated version of an American open source model is irrelevant when we know China is already investing more than $1 trillion to surpass the U.S. technologically, and Chinese tech companies are releasing their own open AI models as fast – or faster – than companies in the U.S.," Meta said in a statement responding to the Reuters article.

Not everyone is convinced that the Llama-ChatBIT connection is irrelevant. The U.S. House Select Committee on the Chinese Communist Party made clear on X that it had taken note of the story, and House Foreign Affairs Committee Chairman Rep. Michael McCaul (R-TX) tweeted that the CCP "exploiting American AI applications like Meta's Llama for military use" demonstrated the need for export controls (in the form of the ENFORCE Act) to "prevent American AI from falling into the hands of China."

Meta's Monday announcement was hardly a reaction to that episode – that would have been a heck of a lot of partnerships to assemble in a couple of days – but it was clearly motivated, at least in part, by the reaction that followed the Reuters story.

There are live battles going on here, not only over the definition of "open source AI" but also over the concept's survival amid the geopolitical struggle between the U.S. and China. And the two battles are connected. As the Linux Foundation explained in a 2021 whitepaper, open source encryption software is exempt from U.S. export restrictions only if it is made "publicly available without restrictions on its further distribution."

Meta certainly wouldn't want that same logic applied to AI – but if it is, the company may find it much harder to convince the U.S. that a truly open "open source" AI standard is in its national security interest.

More news below.

David Meyer
[email protected]
@superglaze

Request your invitation to the Fortune Global Forum in New York on November 11–12. Speakers include Honeywell CEO Vimal Kapur and Lumen CEO Kate Johnson, who will discuss the impact of AI on work and the workforce, while Qualtrics CEO Zig Serafin and Eric Kutcher, McKinsey senior partner and North America chair, will discuss how companies can build the data pipelines and infrastructure they need to compete in the age of AI.

AI IN THE NEWS

Intel's Gaudi disappointment. Intel CEO Pat Gelsinger admitted last week that the company will miss its $500 million revenue target for its Gaudi AI chips this year. Gelsinger: "The overall adoption of Gaudi has been slower than we expected, as the rate of adoption was impacted by the product transition from Gaudi 2 to Gaudi 3 and the ease of use of the software." Given that Intel was telling Wall Street earlier this year that Gaudi could bring in as much as $2 billion before lowering expectations to that $500 million figure, this does not reflect well on a company that is already struggling.

Prime Video gets AI. Amazon is adding an AI-powered feature called X-Ray Recaps to its Prime Video streaming service. The idea is to help viewers remember what happened in previous seasons of the shows they're watching – or in specific episodes, or even fragments of episodes – with guardrails that supposedly protect against spoilers.

OpenAI and Anthropic hiring news. Caitlin Kalinowski, who previously led Meta's augmented reality glasses project, is joining OpenAI to lead its robotics and consumer hardware efforts, TechCrunch reports. OpenAI has also hired serial entrepreneur Gabor Cselle, a co-founder of the defunct Twitter/X rival Pebble, to work on an as-yet-secret project. Meanwhile, Alex Rodrigues, co-founder and former CEO of self-driving truck developer Embark, is joining Anthropic. Rodrigues posted on X that he will work as an AI alignment researcher alongside Jan Leike and John Schulman, both of whom recently arrived from OpenAI.

FORTUNE ON AI

ChatGPT gets a search engine, the first salvo in a brewing war with Google for dominance of the AI-powered internet — Paolo Confino

Top LLMs have accessibility blind spots, according to startup Evinced — Allie Garfinkle

Amazon's CEO has hinted at how the new AI-powered version of Alexa will compete with chatbots like ChatGPT — Jason Del Rey

Countries looking to gain an edge in AI should pay close attention to India's whole-of-society approach — Arun Subramanian (commentary)

AI CALENDAR

October 28–30: Voice & AI, Arlington, Va.

November 19–22: Microsoft Ignite, Chicago

December 2–6: AWS re:Invent, Las Vegas

December 8–12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, BC

December 9–10: Fortune Brainstorm AI, San Francisco (register here)

EYE ON AI RESEARCH

Big Sleep pays off. Google's Project Zero security analysts, working with DeepMind, have developed an LLM-based agent called Big Sleep, which they say has found its first real-world vulnerability: an exploitable bug in the ubiquitous SQLite database engine.

Fortunately, the vulnerability was only present in a development branch of the open source engine, so users were not affected – the SQLite developers fixed it as soon as Google told them about it. "Finding vulnerabilities in software before it's even released means there's no scope for attackers to compete: the vulnerabilities are fixed before attackers even have a chance to use them," the Google researchers wrote.

They stressed that these are experimental results, and that Big Sleep may not yet be able to outperform a well-targeted fuzzer – an automated testing tool that bombards software with unexpected inputs. However, they suggested that their approach could one day deliver "an asymmetrical advantage for defenders."
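
For readers unfamiliar with fuzzing, here is a minimal, hypothetical sketch of the kind of tool Big Sleep is being compared against, using Atheris, Google's open source coverage-guided fuzzer for Python. The buggy parse_header function is invented for illustration and has nothing to do with the actual SQLite code.

```python
# Illustrative sketch only -- a toy fuzz target, not Google's SQLite setup.
# Atheris feeds the target random inputs and keeps mutating the ones that
# reach new code paths, until something crashes.
import sys

import atheris

@atheris.instrument_func  # lets Atheris trace coverage inside the target
def parse_header(data: bytes) -> int:
    # Deliberately buggy parser: it trusts a length byte taken from the
    # input itself, the same broad class of mistake behind many
    # memory-safety bugs in C code.
    if len(data) < 2:
        return 0
    length = data[0]
    return data[1 + length]  # IndexError once `length` points past the end

def test_one_input(data: bytes) -> None:
    parse_header(data)  # any uncaught exception is reported as a finding

atheris.Setup(sys.argv, test_one_input)
atheris.Fuzz()
```

A fuzzer like this excels at hammering a single target with millions of mutated inputs; the pitch for an LLM agent, as the researchers describe it, is to complement that by reasoning about code more like a human reviewer would.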

BRAIN FOOD

Nuclear setbacks. The Financial Times reports that Meta had to abandon plans to build an AI data center next to a nuclear power plant somewhere in the U.S. (details remain scant) after rare bees were discovered at the site.

There's a big push underway to power AI data centers with nuclear energy, partly for its 24/7 reliability, but also because Big Tech firms need to square the circle of meeting AI's massive electricity demands without abandoning their decarbonization commitments. Setbacks abound, however.

In a plan similar to Meta's, Amazon earlier this year bought a data center next to the Susquehanna nuclear power plant in Pennsylvania. But on Friday, regulators rejected the plant owner's proposal to give Amazon all the power it wants from the plant's reactors – up to 960 megawatts, rather than the 300 MW already allowed – on the grounds that it could push up prices for other customers and potentially affect grid reliability.