Sam Altman and OpenAI: Distilled Summary
Source: The New Yorker | Authors: Ronan Farrow & Andrew Marantz | Date: April 6, 2026
OpenAI was founded in 2015 as a nonprofit with a legally binding duty to prioritize humanity's safety over commercial success. Its CEO was supposed to be a person of uncommon integrity. Based on over a hundred interviews and previously undisclosed documents — including secret memos compiled by chief scientist Ilya Sutskever and two hundred pages of notes by co-founder Dario Amodei — a different picture of Sam Altman emerges.
The core allegation: Altman systematically misrepresented facts to board members, concealed safety lapses, and made contradictory promises to different factions. Sutskever's memos begin with a list headed by the word "Lying." When the board fired Altman in November 2023, he mobilized investors, employees, and crisis-communications operatives to reverse the decision in under five days. The departing board members demanded an independent investigation, but no written report was ever produced — only oral briefings to board members selected after consultation with Altman.
Safety commitments have been systematically abandoned. The twenty percent of computing power pledged to a "superalignment" team turned out in practice to be one to two percent, often on outdated hardware; the team was later dissolved. Board members discovered that Altman had told them safety panels approved GPT-4 features when they had not. OpenAI's charter, including the radical "merge and assist" clause, has been diluted to meaninglessness. The Future of Life Institute now gives OpenAI an F on existential safety.
Altman is concentrating A.I. infrastructure in Gulf autocracies. Through the Stargate venture and related plans, OpenAI is building a data-center campus in Abu Dhabi seven times larger than Central Park. National-security officials have warned that the U.A.E.'s dependence on Huawei infrastructure and history of leaking U.S. technology to China make this dangerous. Iran has already bombed American data centers in the region.
Political allegiances have shifted with commercial interests. Altman called Trump "an unprecedented threat" in 2016, then donated to his inaugural fund, helped repeal Biden's A.I. safety executive order, and moved to replace Anthropic's Pentagon contracts after Anthropic refused to enable autonomous weapons. At a staff meeting, Altman told concerned employees, "You don't get to weigh in on that."
The pattern predates OpenAI. At his first startup Loopt, senior employees twice asked the board to fire him. At Y Combinator, partners complained about divided loyalties; Paul Graham privately told colleagues Altman "had been lying to us all the time." Multiple senior Microsoft executives describe a relationship that has become fraught, with one saying there is "a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer."
OpenAI is reportedly preparing for an I.P.O. at a potential trillion-dollar valuation. The company founded to prevent an "AGI dictatorship" is now led by someone whose most consistent trait, according to most sources in the investigation, is an unwillingness to be constrained — by boards, charters, safety protocols, or the truth.