Mira Murati's Bombshell Testimony: Altman Lied About AI Safety, Created Chaos
In a dramatic turn in the ongoing Musk v. Altman trial, former OpenAI Chief Technology Officer Mira Murati testified under oath that CEO Sam Altman deliberately misled her about the safety clearance of a new artificial intelligence model. The video deposition, played in an Oakland, California federal courtroom on May 6, 2026, provided the most damning insider account yet of what Murati described as a pattern of deception and managerial chaos at one of the world's most influential AI companies.
Murati, who served as OpenAI's CTO for years and briefly became CEO after Altman's temporary ousting in November 2023, stated that Altman falsely told her the company's legal department had determined the new model did not require review by OpenAI's deployment safety board. When asked directly whether Altman was telling the truth, her answer was unequivocal: "No."
"My concern was about Sam saying one thing to one person and completely the opposite to another person," Murati said in her recorded testimony, according to Reuters. She added that Altman was "creating chaos" and was deceptive with her and other top executives, pitting them against one another and undermining her authority as technology chief.
Murati explained that after her conversation with Altman, she checked with Jason Kwon, then OpenAI's general counsel and now chief strategy officer. She found a clear "misalignment" between what Altman and Kwon had told her. To err on the side of safety, she ensured the model went through the deployment safety board anyway, disregarding what she considered a misleading directive from the CEO.
Her criticism of Altman, she stressed, was "completely management related." She described having "an incredibly hard job to do in an organization that was very complex" and said she asked Altman to "lead, and lead with clarity, and not undermine my ability to do my job." The testimony painted a picture of a CEO who, according to multiple witnesses, routinely created friction and distrust among his leadership team.
A Pattern of Alleged Deception
Murati's testimony did not come out of nowhere. It echoed earlier accusations from other former OpenAI insiders. Co-founder Ilya Sutskever, in a 52-page memo to OpenAI's board that was read during a deposition, wrote that Altman "exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another." Former board member Helen Toner, in a 2024 podcast, also shared evidence of Altman "lying and being manipulative in different situations."
Murati agreed with those characterizations. She acknowledged that Altman had pitted executives against each other and undermined her role, making it difficult to maintain coherence and trust within the leadership team. Her testimony lends substantial weight to Musk's central allegation that OpenAI deviated from its nonprofit mission through a culture of dishonesty and self-dealing.
The High-Stakes Trial: Musk vs. Altman Over OpenAI's Future
The trial, filed by Elon Musk against OpenAI and its CEO Sam Altman in 2024, is a landmark legal battle that could reshape the landscape of artificial intelligence development. Musk, a co-founder of OpenAI who left the board in 2018, alleges that the company improperly converted from a nonprofit dedicated to safe AGI development into a for-profit entity that prioritizes commercial gain over its original charitable goals.
Musk is seeking $150 billion in damages to be paid by OpenAI and investor Microsoft, with the funds directed to OpenAI's charitable arm. The case is being heard by a jury in federal court in Oakland, and the stakes could not be higher: if Musk prevails, OpenAI could be forced to restructure, potentially halting or slowing its rapid commercialization of powerful AI models like GPT-4 and its successors.
Murati, who has since left OpenAI to co-found her own AI startup, was a pivotal witness. Despite her criticism of Altman, she also testified that she wanted him to remain CEO during the chaotic boardroom drama of November 2023, when the board briefly fired Altman only to reinstate him days later. She said she pressed board members for a fuller justification for his ousting because "OpenAI was at catastrophic risk of falling apart" and she was "concerned about the company completely blowing up."
The Brockman Diaries Fuel Musk's Case
Earlier in the trial, jurors were shown excerpts from the personal diary of Greg Brockman, OpenAI's co-founder and former president. The entries, dating back to 2017, appear to reveal a deliberate strategy to deceive Musk about the company's intentions. One entry reads: "The true answer is that we want [Musk] out… if three months later we’re doing b-corp then it was a lie." Another states: "Can’t see us turning this [into a nonprofit]."
Legal analysts have noted that these diary entries, combined with Murati's testimony, create a narrative of systematic dishonesty at the highest levels of OpenAI. Gary Marcus, a prominent AI researcher and commentator, wrote on his Substack that Brockman "basically vindicated Musk’s critique, beat by beat" by explaining how he deceived Musk about the nonprofit commitment "without a trace of remorse."
However, the trial is far from over. More high-profile witnesses are expected, including Microsoft CEO Satya Nadella, Altman himself, and former board member Shivon Zilis, who is also the mother of four of Musk's children. OpenAI's legal team is expected to focus on Musk's own character flaws and his competitive motives, given that Musk's startup xAI is now a direct rival to OpenAI.
Why This Matters: AI Safety and Governance Under Scrutiny
The allegations against Altman go beyond personal animosity or corporate infighting. They strike at the heart of the most critical issue in AI development: safety. OpenAI has long positioned itself as the responsible steward of artificial general intelligence (AGI), with a mission to ensure that powerful AI systems are developed safely and benefit all of humanity. The company's governance structure, including its deployment safety board, is designed to provide checks and balances before potentially dangerous models are released.
Murati's testimony suggests that Altman was willing to bypass these safety protocols, at least verbally, to accelerate deployment. This raises troubling questions about whether internal safety reviews were being undermined by commercial pressures. If a CEO can tell his CTO that the legal department cleared an AI model for release without board review — a claim Murati found to be false — then the entire safety apparatus of the company may be compromised.
The issue is particularly salient given the rapid release of ChatGPT in late 2022, which caught the world by surprise and sparked a global race to deploy generative AI. Zilis, too, hinted at the turmoil surrounding that launch, suggesting that internal disagreements were intense.
Impact on Trust and Regulation
The trial's revelations are likely to have ripple effects beyond the courtroom. For regulators in the United States, Europe, and elsewhere, the testimony provides ammunition for those arguing that self-regulation by AI companies is insufficient. If internal safety boards can be bypassed by executive fiat, then external oversight may be necessary.
For investors and partners, including Microsoft, the testimony raises governance red flags. If Altman cannot be trusted by his own executives, how can he be trusted by partners and regulators? The trial could accelerate demands for independent oversight of AI development, similar to the oversight boards seen in nuclear energy or pharmaceutical development.
Industry practitioners are also watching closely. Disagreements about whether a model needs formal safety review are common flashpoints in large AI organizations, as products move from research prototypes to commercial deployment. Murati's testimony underscores the importance of clear internal review thresholds, incident-response playbooks, and cross-functional signoff processes that cannot be overridden unilaterally.
Broader Implications: What This Changes
If Musk wins, the consequences could be seismic. OpenAI could be forced to return to its nonprofit roots or pay massive damages, potentially derailing its commercial ambitions and giving Musk's xAI a significant advantage. A loss for Altman would also damage his reputation and that of OpenAI, making partnership deals and talent acquisition more difficult.
But even if Musk loses, the trial has already changed the public narrative. The image of OpenAI as a harmonious, mission-driven nonprofit has been shattered. In its place is a portrait of a company riven by distrust, where the CEO is accused of lying to his own executives and where safety protocols are reportedly treated as optional.
For the broader AI industry, the trial serves as a cautionary tale. The tension between safety and speed, between mission and profit, is not unique to OpenAI. Every major AI company faces similar pressures. The difference is that OpenAI was founded on a promise to prioritize safety above all else. Murati's testimony suggests that promise may have been hollow all along.
As the trial continues, with Altman expected to take the stand in the coming days, more revelations are likely. The jury — though its role is advisory only on liability, not damages — will ultimately help shape the narrative that defines the next era of AI governance.
What to Watch Next
Observers should monitor several key developments: Altman's cross-examination, which will be intensely scrutinized; any further internal documents that might be introduced; and the testimony of Microsoft CEO Satya Nadella, who could shed light on the relationship between the two companies. The outcome of this trial may well determine whether AI development is governed by internal accountability, external regulation, or a chaotic mix of both.
Mira Murati's voice has been heard. The question now is whether it will change the course of AI history.