Hello and welcome to Eye on AI! In this newsletter… Intel’s Gaudi disappointment… Prime Video gets AI… new OpenAI and Anthropic hires… Big Sleep pays off… and nuclear setbacks.
Meta wants the US government, military included, to use its AI.
The company said yesterday that it has assembled a group of partners for the effort, including consultancies such as Accenture and Deloitte, cloud providers such as Microsoft and Oracle, and defense contractors such as Lockheed Martin and Palantir.
Meta’s global affairs chief Nick Clegg wrote in a blog post that Oracle is tweaking Meta’s Llama AI model to “synthesize aircraft maintenance documents so technicians can diagnose problems faster and more accurately,” while Lockheed Martin is using it to generate code and analyze data. Scale AI, a defense contractor that counts Meta among its investors, is “tuning Llama to support specific missions of national security teams, such as planning operations and identifying adversary vulnerabilities.”
“As an American company, and one that owes its success in large part to the entrepreneurial spirit and democratic values espoused by the United States, Meta wants to do its part to support the safety, security and economic prosperity of America, and its closest allies,” wrote the former British deputy prime minister.
But Clegg’s message wasn’t just about positioning Meta AI as a patriotic option. Perhaps more than anything else, it was an attempt to brand Meta’s version of open source AI as correct and desirable.
Meta has always positioned Llama as “open source” because it releases not just the model but also its weights, the parameters that make it easier for others to modify and fine-tune, along with various other safety tools and resources.
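To make concrete what releasing the weights enables, here is a minimal sketch of loading a Llama checkpoint and attaching lightweight fine-tuning adapters. It assumes the Hugging Face transformers and peft libraries; the model ID and LoRA settings are illustrative placeholders rather than Meta’s recommendations, and the checkpoint itself is gated behind acceptance of Meta’s license.

```python
# Minimal sketch: what access to Llama's published weights enables.
# Assumes the Hugging Face `transformers` and `peft` libraries; the model ID
# and LoRA hyperparameters are illustrative, not Meta's recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL_ID = "meta-llama/Llama-2-7b-hf"  # gated: requires accepting Meta's license

# Download the published weights and tokenizer.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Because the weights are in hand, anyone can attach low-rank adapters (LoRA)
# and fine-tune the model on their own domain, e.g. maintenance manuals.
lora_config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```

What weight access does not include, as critics note, is the training data or an unrestricted license.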
Many in the traditional open source software community dispute Meta’s “open source” framing, mainly because the company doesn’t disclose the training data it uses to create its Llama models, and, most pertinently in the context of Monday’s announcement, because it puts restrictions on Llama’s use. Llama’s license states that it should not be used in military applications.
The Open Source Initiative, which coined the term “open source” and continues to act as its steward, recently released a definition of open source AI that Llama clearly fails to meet, for those reasons. Ditto the Linux Foundation, whose equally fresh definition isn’t identical to the OSI’s, but which still clearly requires information about the training data, plus the ability for anyone to reuse and improve the model.
That’s probably why Clegg’s post (which uses the term “open source” 13 times in its body) suggests that Llama’s US national security deployments will “not only support US prosperity and security, but also help US open source standards prevail in the global race for AI leadership.” Per Clegg, “a global open source standard for AI models” is coming (think Android, but for AI) and will “form the foundation for AI development around the world and become embedded in technology, infrastructure and manufacturing, and in global finance and e-commerce.”
Clegg suggests that if the US drops the ball, China’s approach to open source AI will become that global standard.
However, the timing of this lobbying spectacle is a little awkward, as it comes just days after Reuters reported that researchers linked to the Chinese military had used a year-old version of Llama as the basis for ChatBIT, an intelligence-processing and operational decision-support tool. That is essentially what Meta is now letting US military contractors do with Llama, only without its permission.
There are many reasons to be skeptical about the actual impact of this sinicization of Llama. Given the rapid pace of AI development, the Llama version in question (13B) is far from state-of-the-art. Reuters says ChatBIT “was found to outperform some other AI models that were roughly 90% as capable as OpenAI’s powerful ChatGPT-4,” but it’s not clear what “capable” means there. It’s also not clear whether ChatBIT is actually being used.
“In the global competition on AI, the alleged role of a single and outdated version of an American open source model is irrelevant when we know China is already investing more than $1 trillion to surpass the US technologically, and Chinese tech companies are releasing their own open AI models as fast, or faster, than companies in the US,” Meta said in response to the Reuters article.
Not everyone is convinced that the Llama-ChatBIT connection is irrelevant, though. The US House Select Committee on the Chinese Communist Party made clear on X that it had taken note of the story. And House Foreign Affairs Committee Chairman Rep. Michael McCaul (R-TX) tweeted that the CCP “exploiting US AI applications like Meta’s Llama for military use” demonstrates the need for export controls, such as those in his proposed ENFORCE Act, “to keep American AI out of China’s hands.”
Meta’s announcement on Monday was probably not a direct reaction to this episode (that many partnerships take more than a couple of days to assemble), but it was very likely informed by the kind of backlash that followed the Reuters story.
There are live battles not only over the definition of “open source AI,” but also over the concept’s survival in the face of the US-China geopolitical struggle. And the two fights are connected. As the Linux Foundation noted in a 2021 white paper, open source encryption software can run afoul of US export restrictions “unless it is made publicly available without restrictions on its dissemination.”
Meta would certainly hate to see the same logic applied to AI, but if it is, the company may find it much harder to convince the US that its version of an “open source” AI standard is really in the national security interest.
More news below.
David Meyer
david.meyer@fortune.com
@superglaze
Request your invitation for the Fortune Global Forum in New York, November 11-12. Speakers include Honeywell CEO Vimal Kapur and Lumen CEO Kate Johnson, who will discuss the impact of AI on work and the workforce. Qualtrics CEO Zig Serafin and Eric Kutcher, McKinsey Senior Partner and North America Chair, will discuss how companies can build the data pipelines and infrastructure they need to compete in the AI era.
AI IN THE NEWS
Intel’s Gaudi disappointment. Intel CEO Pat Gelsinger admitted last week that the company will not hit its $500 million revenue target for its Gaudi AI chips this year. Gelsinger: “The overall adoption of Gaudi has been slower than we anticipated, as the adoption rate was affected by the transition of the product from Gaudi 2 to Gaudi 3 and the ease of use of the software.” Intel earlier this year told Wall Street that it was on course for $2 billion in Gaudi deals, before lowering expectations to $500 million, so missing even that reduced target does not reflect well on the struggling company.
Prime Video gets AI. Amazon is adding an AI-powered feature called X-Ray Recaps to its Prime Video streaming service. The idea is to help viewers remember what happened in previous seasons of the shows they’re watching—or in specific episodes, or even parts of episodes—with guardrails supposedly protecting against spoilers.
New OpenAI and Anthropic hires. Caitlin Kalinowski, who previously led Meta’s augmented-reality glasses project, is joining OpenAI to lead its robotics and consumer hardware efforts, TechCrunch reports. OpenAI has also hired serial entrepreneur Gabor Cselle, co-founder of the defunct Twitter/X rival Pebble, to work on a secret project. Meanwhile, Alex Rodrigues, former co-founder and CEO of self-driving truck developer Embark, is joining Anthropic. Rodrigues posted on X that he will work as an AI alignment researcher alongside recent OpenAI refugees Jan Leike and John Schulman.
FORTUNE ON AI
OpenAI has launched a ChatGPT search engine in a brewing war with Google for AI-powered internet supremacy —by Paolo Confino
Major LLMs have accessibility blind spots, says data from startup Evinced —by Allie Garfinkle
Amazon’s CEO dropped a big hint about how a new AI version of Alexa will compete with chatbots like ChatGPT —by Jason Del Rey
Countries looking to gain an edge in AI should pay close attention to India’s whole-of-society approach —by Arun Subramanian (Commentary)
AI CALENDAR
October 28-30: Voice & AI, Arlington, Va.
November 19-22: Microsoft Ignite, Chicago
December 2-6: AWS re:Invent, Las Vegas
December 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
December 9-10: Fortune Brainstorm AI, San Francisco (register here)
EYE ON AI RESEARCH
Big Sleep pays off. A team of Google cybersecurity analysts has been collaborating with DeepMind on an LLM-powered bug-hunting agent called Big Sleep, and they say it has discovered its first real-world vulnerability: an exploitable flaw in the ubiquitous SQLite database engine.
Fortunately, the bug was only present in a developer branch of the open source engine, so users weren’t affected—the SQLite developers fixed it as soon as Google flagged it. “Finding vulnerabilities in software before it’s released means there’s no chance for attackers to compete: vulnerabilities are fixed before attackers have a chance to exploit them,” the Google researchers wrote.
They stressed that the results were experimental, and that Big Sleep probably can’t yet beat a well-targeted fuzzer (an automated software-testing tool). However, they suggested that their approach could one day deliver an “asymmetric advantage for defenders.”
BRAIN FOOD
Nuclear setbacks. The Financial Times reports that Meta had to abandon plans to build an AI data center near a nuclear power plant somewhere in the US — details remain scarce — because rare bees were discovered at the site.
There is a big push right now to power AI data centers with nuclear energy, both because of its 24/7 reliability and because Big Tech needs to square the circle of meeting AI’s enormous power demands without breaking its decarbonization commitments. However, setbacks abound.
In plans that sound similar to Meta’s, Amazon earlier this year bought a data center that sits alongside the Susquehanna nuclear power plant in Pennsylvania. But regulators on Friday rejected the plant owner’s bid to give Amazon all the power it wants from the station’s reactors—up to 960 megawatts, up from the 300 MW already permitted—because doing so could lead to higher prices for other customers and potentially affect grid reliability.