A major investigative report from The New Yorker portrays OpenAI CEO Sam Altman's ascent as a story of "extraordinary persuasion" and "aggressive dealmaking," punctuated by repeated allegations of deceptive behavior from key figures within and around the company. The article, based on interviews with numerous insiders, directly ties the explosive November 2023 boardroom drama—which saw Altman briefly ousted and then reinstated—to a deeper, years-long conflict over OpenAI's soul.
The Core Allegations: A Pattern of Deception
The investigation argues that Altman's operational style has consistently generated accusations of deception from those working most closely with him. Notably, the report cites co-founder and former chief scientist Ilya Sutskever, former research director Dario Amodei (who left to found competitor Anthropic), former board members, and even executives at key partner Microsoft.
While the sourced summary does not fully enumerate the specific details of the alleged deceptions, the pattern suggests conflicts over strategic direction, safety priorities, and commercial negotiations. The presence of Microsoft executives among the claimants is particularly significant, indicating tensions may have existed even within the company's most crucial partnership.
From Nonprofit Ideals to "Trillion-Dollar Scale" Empire
The report frames the November 2023 crisis not as an isolated event, but as the culmination of OpenAI's fundamental transformation. The company was founded in 2015 as a nonprofit with a mission to ensure artificial general intelligence (AGI) benefits all of humanity, prioritizing safety over commercial speed.
The New Yorker article contends that under Altman's leadership, this ethos has shifted dramatically toward building a "high-stakes empire" chasing several controversial goals:
- Trillion-dollar scale: A focus on achieving and commercializing AGI at a massive, unprecedented scale.
- Gulf funding: Pursuit of significant investment from the Middle East.
- Military contracts: Engagement with defense applications, a departure from earlier safety-centric restrictions.
- Political influence: Building sway in governmental and regulatory circles.
This pivot is presented as the central rift within OpenAI, with the former board members who fired Altman ostensibly acting as the last guardians of the original safety-first, nonprofit ideals. Their defeat—and the subsequent restructuring of the board—solidified the company's current commercial and scale-oriented trajectory.
The Key Players: Sutskever, Amodei, and the Board
The sourcing of allegations to Ilya Sutskever and Dario Amodei is critical. Sutskever was a board member who voted to oust Altman and later expressed regret. Amodei, along with other key safety researchers, left OpenAI in 2021 largely over concerns about safety priorities and speed, founding Anthropic with an explicit focus on developing reliable, interpretable, and steerable AI systems. Their perspectives lend weight to the narrative of a deep philosophical schism.
The report suggests the 2023 board's action was a desperate attempt to correct course, one that ultimately failed against Altman's strong internal and external support, notably from Microsoft and the majority of OpenAI's employees.
Agentic.news Analysis
This investigative piece provides a crucial narrative backbone for events our readers have followed in real-time. It contextualizes the November 2023 leadership crisis not as a simple power struggle, but as the visible eruption of a long-simmering tectonic shift in AI's most influential lab. The allegations from figures like Sutskever and Amodei align with the known timeline: Amodei's 2021 departure to found Anthropic was an early canary in the coal mine for internal safety-commercialization tensions, a split that now defines a key competitive and philosophical axis in AI (Anthropic's Claude vs. OpenAI's GPT).
The mention of Microsoft executives harboring concerns is particularly revealing. It adds nuance to the common perception of a monolithic Microsoft-OpenAI alliance. As we covered during the 2023 drama, Microsoft's immediate support for Altman was a decisive factor in his reinstatement. This report implies that support may have been a strategic calculation to protect its massive investment and infrastructure integration, potentially overriding internal misgivings about Altman's tactics or OpenAI's direction.
Ultimately, the New Yorker story codifies the dominant post-2023 narrative: OpenAI has fully transitioned from a research-centric nonprofit to a commercial powerhouse. The "empire" language underscores the scale of its ambition, which now explicitly includes pursuits like military contracts—an area once considered off-limits. For the AI engineering community, this solidifies OpenAI's position as a fierce competitor in the product and scaling race, while raising persistent questions about whether its original governance structure can meaningfully constrain its commercial pursuits. The safety leadership exodus to Anthropic now reads as a direct consequence of this unresolved conflict.
Frequently Asked Questions
What did the New Yorker investigation actually reveal about Sam Altman?
The investigation, based on insider accounts, portrays Sam Altman's leadership at OpenAI as being characterized by extraordinary persuasive ability and aggressive business dealmaking, but also by a pattern of behavior that led multiple close associates—including co-founder Ilya Sutskever, former research director Dario Amodei, and former board members—to accuse him of being deceptive. It frames his 2023 firing by the board as a direct response to these alleged tactics and the strategic shift they enabled.
How does this connect to the November 2023 OpenAI board drama?
The article directly ties the 2023 drama to the core conflict it describes. It presents the board's attempt to fire Altman as a last-ditch effort by adherents to OpenAI's original safety-first, nonprofit mission to halt the company's rapid shift under Altman toward a commercial "empire" chasing scale, Gulf funding, and military contracts. The board's failure and subsequent restructuring marked the definitive end of the old governance model and the triumph of Altman's vision.
Who are the key insiders cited as making allegations against Altman?
The report cites several major figures: Ilya Sutskever, OpenAI co-founder and former chief scientist who voted to fire Altman; Dario Amodei, former OpenAI research director who left to found competitor Anthropic over safety concerns; unnamed former OpenAI board members; and even unnamed executives at Microsoft, OpenAI's primary partner and investor. This list points to tensions across research, safety, governance, and partnership lines.
What is the "bigger story" about OpenAI's shift mentioned in the article?
The bigger story is OpenAI's fundamental transformation from its founding ethos as a capped-profit company ultimately controlled by a safety-focused nonprofit board, into what the investigation calls a "high-stakes empire." This new path prioritizes achieving trillion-dollar scale, securing foreign investment from the Gulf, pursuing military and government contracts, and building political influence—objectives that critics argue are in tension with its original mission to develop AGI safely and beneficially.