Big Tech is pouring hundreds of billions into AI. Should it also get to decide if the technology is ‘safe’? 

Google, Microsoft, and Meta’s earnings reports last week put a spotlight on the hundreds of billions of dollars Big Tech will pour into AI by the end of 2024. 

In this quarter alone, Google said its capital expenditures were $12 billion, nearly double the amount from a year earlier, driven by massive investment in AI infrastructure including servers and data centers. Meanwhile, Microsoft is reportedly increasing its spending faster than its revenue, but it still doesn’t have enough data center infrastructure to deploy and run its AI models. And Meta’s investors did not react well to the news that the company would spend billions more than expected on AI, spending that CEO Mark Zuckerberg insisted would yield rewards further down the line.

Oh, and let’s not forget Amazon, which just invested billions in AI startup Anthropic and plans to spend $150 billion on AI data centers. Deep-pocketed startups like OpenAI and Anthropic, along with Elon Musk’s companies, are also pouring money into the race (Musk recently posted on X that any company that isn’t spending $10 billion on AI this year won’t be able to compete).

But Big Tech’s outsized AI spending habits put an interesting spin on another piece of AI news from last week. The U.S. Department of Homeland Security announced the Artificial Intelligence Safety and Security Board, which will advise it on protecting critical infrastructure—from power grids and internet service to airports—from potential AI threats. The 22-member board, required by President Joe Biden’s AI Executive Order signed in October 2023, is heavy with CEOs from the same deep-pocketed companies and startups powering today’s AI boom: Google’s Sundar Pichai; Microsoft’s Satya Nadella; Nvidia’s Jensen Huang; OpenAI’s Sam Altman; Anthropic’s Dario Amodei; and the CEOs of Amazon Web Services, AMD, IBM, and Adobe.

There were immediate criticisms of the board’s makeup, which notably does not include any significant open-source AI representation—that is, companies whose AI models are freely available (either fully or partly, depending on the license) so anyone can modify, personalize, and distribute them. Interestingly, those absent include two companies with the deepest pockets: Meta, whose Llama family of models is released partly open (“We were snubbed,” Meta’s chief AI scientist Yann LeCun posted on X), and Musk’s xAI, whose Grok-1 model was released under an open-source license; Musk is also suing OpenAI over its abandonment of open-source models. Open-source advocates such as Hugging Face and Databricks are missing as well.

In an era in which the power to shape AI may ultimately be concentrated in the hands of the wealthiest tech companies, the question is: Who gets to decide whether and what kinds of AI systems are safe and secure? Can (and should) these companies be a part of regulating an industry where they have clear vested interests? 

Some, like AI researcher Timnit Gebru, say no: “Foxes guarding the hen house is an understatement,” she posted on X. But Alejandro Mayorkas, the Secretary of Homeland Security, told the Wall Street Journal that he was unconcerned that the board’s membership included many Big Tech execs. “They understand the mission of this board,” he said. “This is not a mission that is about business development.” 

Of course, a board dedicated to deploying AI within America’s critical infrastructure does need input from the companies that will be deploying it—which obviously includes hyperscalers like Google and Microsoft, as well as AI model leaders like OpenAI. But the debate between those who believe Big Tech wants to snuff out AI competition and those who think AI regulation should limit open-source AI is also not new: It has been hot and heavy ever since OpenAI’s Altman testified before Congress in May 2023, urging AI regulation—which my colleague Jeremy Kahn wisely said would be “definitely good for OpenAI” while others called his lobbying “a masterclass in wooing policy makers.”

In November 2023, the Washington Post reported that a growing group of venture capitalists, CEOs of mid-sized software companies, and open-source proponents are pushing back. They argue the biggest AI players simply want to lock in their advantages with rules and regulations like Biden’s executive order, which lays out a plan for government testing and approval guidelines for AI models. 

And if the U.K. is any example, the group has valid concerns about Big Tech’s willingness to be transparent. Politico reported yesterday that while Altman and Musk agreed last year to share their companies’ AI models with the British government as part of Prime Minister Rishi Sunak’s new AI Safety Institute, they have so far not done so. For example, the report claimed that neither OpenAI nor Meta has given the U.K.’s AI Safety Institute access to do pre-release testing—showing the limits of voluntary commitments.

However, with Congressional AI regulations showing few signs of progress, many leaders consider any move towards tackling the “responsible development and deployment of AI” to be a step in the right direction. On the other hand, open-source AI is not going anywhere—so it seems clear that its leaders and proponents will ultimately have to be part of the plan. 

With that, here’s the AI news.

Sharon Goldman
sharon.goldman@fortune.com

AI IN THE NEWS

As more publishers strike licensing deals with OpenAI, eight more U.S. newspapers sue for copyright infringement. Just one day after the Financial Times announced it had struck a deal for OpenAI to license its content for AI training, Axios reported that eight newspapers have sued OpenAI for copyright infringement, alleging the company scraped their articles for training. This is particularly notable because until now, the New York Times was the only major publisher to take similar legal action, filing a lawsuit against OpenAI in December.

Another Paris-based startup founded by ex-Google DeepMind researchers reportedly raises a mega round. Business Insider reported that Holistic, a Paris-based AI startup focused on building a new multi-agent artificial general intelligence (AGI), just snagged a mega round of around $200 million in new funding. The excitement is certainly due to the fact that one of the founders is former Google DeepMind scientist Karl Tuyls, while another DeepMind alum, Laurent Sifre, is chief LLM officer. Some might call it déjà vu, as fellow French AI startup Mistral, which was also founded by ex-DeepMind AI researchers, has raised several massive rounds over the past year. 

ChatGPT will remember you—if you want it to. Are you frustrated that you always have to remind ChatGPT about what you said? You’re in luck: OpenAI announced that ChatGPT’s Memory feature for paid subscribers, which it announced in February, is now generally available to all ChatGPT Plus users. You can tell ChatGPT to remember certain details of conversations and you can also give it permission to learn from chats.

The U.S. and China have agreed to a dialogue on AI risks and safety concerns. The New York Times reported that the U.S. and China will hold their first high-level talks on AI within the “coming weeks,” according to U.S. Secretary of State Antony Blinken, to discuss AI risks and safety concerns. The announcement comes at a tense time in U.S.-China relations, including a potential U.S. ban on the hugely popular Chinese-owned app TikTok.

Darktrace lights up private equity with a £4.3 billion deal. U.S. private equity firm Thoma Bravo agreed to acquire Darktrace for about $5.3 billion, according to the Financial Times. The U.K.-based company provides AI cybersecurity services designed to protect companies against the threat of cloud attacks. 

FORTUNE ON AI

Elon Musk says any company that isn’t spending $10 billion on AI this year like Tesla won’t be able to compete —by Christiaan Hetzner

Top tech CFO says AI is no ‘blip or hype,’ it’s tech’s historic moment—and his numbers back that up —by Will Daniel

Meta’s investors are worried about the billions it’s spending on AI—but its advertising empire makes it a positive, Deutsche Bank says —by Dylan Sloan

Satya Nadella says Microsoft’s AI payoff hinges on other companies doing ‘the hard work’ of changing their cultures —by Rachyl Jones

Meet Cohere, Canada’s AI ‘underdog’ that could soon be worth $5 billion by doing the opposite of OpenAI’s every move —by Sharon Goldman

AI CALENDAR

May 7-11: International Conference on Learning Representations (ICLR) in Vienna

May 21-23: Microsoft Build in Seattle

June 5: FedScoop’s FedTalks 2024 in Washington, D.C.

June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore

July 15-17: Fortune Brainstorm Tech in Park City, Utah

July 30-31: Fortune Brainstorm AI Singapore

Aug. 12-14: Ai4 2024 in Las Vegas

BRAIN FOOD

Meta’s AI assistant raises the question: How much is too much AI?

Ever since Meta released the latest version of its Meta AI assistant two weeks ago, which integrated AI-generated search into WhatsApp, Instagram, Facebook, and Messenger, many have asked one important question: “How do I get rid of this?” For some of Meta’s users, the assistant, combined with the amount of genAI content flooding the company’s apps, is simply too much AI. An article in Fast Company declared “AI is making Meta’s apps basically unusable.” A long Reddit thread discusses how to disable Meta AI’s search bar. And ZDNet reported that “sorry, you can’t disable Facebook’s Meta AI tool, but here’s what you can do.” There may be an argument for “less is more” when it comes to AI—or perhaps Meta and other companies developing AI tools have to work out what AI features their users really need…and want.

