That’s a key subtext of Elon Musk’s attempt to shut down OpenAI’s for-profit AI business. His attorneys argue that the organization was set up as a charity focused on AI safety, and lost its way in pursuit of lucre. To prove that, they cite old emails and statements from the organization’s founders about the need for a public-spirited counterweight to Google DeepMind.
Today, they called their only expert witness: Stuart Russell, a University of California, Berkeley computer science professor who has studied AI for decades. His job was to offer background on AI and establish that the technology is dangerous enough to worry about.
Russell co-signed an open letter in March 2023 calling for a six-month pause on training AI systems more powerful than GPT-4. In a sign of the contradictions at play here, Musk signed the same letter, even as he was launching xAI, his own for-profit AI lab.
Russell told jurors and Judge Yvonne Gonzalez Rogers that a variety of risks accompany the development of AI, ranging from cybersecurity threats to misalignment and the winner-take-all nature of the race toward artificial general intelligence (AGI). Ultimately, he said, there is a tension between the pursuit of AGI and safety.
Russell’s larger concerns about the existential threats of unconstrained AI didn’t get aired in open court after objections from OpenAI’s attorneys led the judge to limit Russell’s testimony. But Russell has long been a critic of the arms-race dynamic created by frontier labs around the globe competing to reach AGI first, and called for governments to regulate the field more tightly.
OpenAI’s attorneys spent their cross-examination establishing that Russell wasn’t directly evaluating the organization’s corporate structure or its specific safety policies.
But this reporter (as well as the judge and the jurors) will be weighing how much value to put on the relationship between corporate greed and AI safety concerns. Virtually every one of OpenAI's founders has strenuously warned about the risks of AI while also emphasizing its benefits, attempting to build AI as fast as possible, and hatching plans for AI-focused for-profit enterprises they would control.
From the outside, a clear issue here is the growing realization inside OpenAI after its founding that the organization simply needed more compute spend if it was to succeed. That money could only come from for-profit investors. The founding team’s fear of AGI in the hands of a single organization pushed them to seek the capital that ultimately tore the team apart, creating the arms race we know today—and bringing us to this lawsuit.
The same dynamic is already playing out at a national level: Senator Bernie Sanders' push for a law imposing a moratorium on data center construction cites AI fears enunciated by Musk, Sam Altman, Geoffrey Hinton and others. Hodan Omar, who works at the trade organization the Center for Data Innovation, objected to Sanders citing their fears without their hopes, telling TechCrunch that "it is unclear why the public should discount everything tech billionaires say except when their words can be recruited to fill gaps in a precarious argument."
Now, both sides of the case are asking the court to do just that: take part of Altman and Musk’s arguments seriously, but discount the parts that are less useful for their legal argument.