Sam Altman and the Question of Trust at OpenAI
SUMMARY
Source: The New Yorker | Authors: Ronan Farrow and Andrew Marantz | Date: April 6, 2026
The Ilya Memos and the Firing
In fall 2023, OpenAI chief scientist Ilya Sutskever compiled roughly seventy pages of Slack messages, H.R. documents, and explanatory text into secret memos sent as disappearing messages to fellow board members. The memos alleged that CEO Sam Altman misrepresented facts to executives and board members and deceived them about internal safety protocols. The first item on a list of Altman's patterns was "Lying." Sutskever told a board member, "I don't think Sam is the guy who should have his finger on the button."
For board members Helen Toner and Tasha McCauley, the memos confirmed concerns they already held. In November 2023, Sutskever invited Altman to a video call and read a statement informing him that he was no longer employed. The board's public statement said only that Altman "was not consistently candid in his communications." Microsoft, which had invested approximately thirteen billion dollars in OpenAI, learned of the firing moments before it happened; CEO Satya Nadella was stunned. Investor Reid Hoffman found no evidence of embezzlement or harassment, only an opaque process.
The Counteroffensive
Altman returned to his San Francisco mansion and established what he called a "government-in-exile." Ron Conway, Brian Chesky, and crisis-communications manager Chris Lehane joined by video and phone. Lehane framed the firing as a coup by rogue effective altruists and urged an aggressive social-media campaign.
Thrive Capital put its planned investment on hold, signaling the deal would proceed only if Altman returned. Microsoft announced it would create a competing initiative for Altman and departing employees. A public letter demanding Altman's reinstatement circulated internally, and a majority of OpenAI employees ultimately threatened to leave with him. Altman demanded the resignations of board members who had moved to fire him.
Within five days, Altman was reinstated. Sutskever, Toner, and McCauley lost their board seats. The departing members demanded an independent investigation of the allegations. The two new board members, Lawrence Summers and Bret Taylor, were selected after close conversations with Altman.
Altman's Background and Pattern
Altman grew up in an affluent suburb of St. Louis, attended Stanford, and dropped out to join Y Combinator's inaugural batch. His first startup, Loopt, reflected his drive but also a tendency, which colleagues noticed, to exaggerate. A senior employee compared the blurring of aspiration and claimed accomplishment to the dynamics behind Theranos. On two occasions, groups of senior employees asked Loopt's board to fire Altman. Loopt was eventually acquired in a deal partly arranged to help Altman save face.
Paul Graham recruited Altman as Y.C. president in 2014. Altman oversaw aggressive expansion but drew complaints from Y.C. partners about divided loyalties and about his personal investing in Y.C. companies. By 2018, partners were approaching Graham to complain. How Altman's departure from Y.C. should be characterized remains contested: Altman maintains he was never fired, but Graham privately told colleagues that "Sam had been lying to us all the time."
OpenAI's Founding and Structure
In 2015, Altman and Elon Musk co-founded OpenAI as a nonprofit whose board had a duty to prioritize the safety of humanity over the company's success. The founders asserted that A.I. could be the most powerful and dangerous invention in human history. Altman proposed a "Manhattan Project for AI" with safety as a first-class requirement.
Unlike a government initiative, OpenAI would be privately funded. The firm accepted charitable grants, and programmers took significant pay cuts. Musk provided some office space, and the pitch to employees was "You're going to save the world."
Musk grew impatient by 2017 and demanded majority control during discussions about converting OpenAI to a for-profit. Sutskever objected, warning that concentrating control would risk an "AGI dictatorship." Musk quit acrimoniously in early 2018; he later founded xAI and sued Altman for fraud.
Safety Commitments and Their Erosion
Dario Amodei, who led OpenAI's safety team, compiled over two hundred pages of notes documenting concerns about the founders' motives. He helped draft OpenAI's charter, including a radical "merge and assist" clause obligating OpenAI to cede resources to any competitor closer to building a safe A.G.I. During the 2019 Microsoft deal, Amodei discovered a provision granting Microsoft the power to block mergers — effectively nullifying the merge-and-assist commitment. When he confronted Altman, Altman denied the provision existed until Amodei read it aloud.
In 2020, Amodei, his sister Daniela, and other colleagues left to found Anthropic. OpenAI announced an official "superalignment team" with a pledge of twenty percent of the company's computing power, but the team's actual allocation was between one and two percent, often on the oldest hardware. The team was dissolved without completing its mission.
Board members discovered that Altman had assured them that safety panels had approved GPT-4 features when no such approvals had been given. He had also failed to disclose that Microsoft had released an early ChatGPT version in India without completing a required safety review. Researcher Jan Leike wrote to the board that OpenAI was "prioritizing the product and revenue above all else."
The Investigation That Wasn't
The departing board members made an independent investigation a condition of their exit, and OpenAI hired the law firm WilmerHale to conduct it. Six people close to the inquiry alleged that it was designed to limit transparency. Investigators initially did not contact key figures, and no written report was produced, only oral briefings to Summers and Taylor. The investigation appeared to hunt for clear criminality rather than address the questions of integrity behind the firing. Altman was cleared and rejoined the board. Some board members indicated a potential "need for another investigation."
Geopolitical Ambitions and the Countries Plan
Altman pursued increasingly sweeping geopolitical plans. In meetings with U.S. intelligence officials in 2017, he claimed that China had launched an "A.G.I. Manhattan Project" and argued that billions in government funding were needed in response. An intelligence official who looked into the claim found no evidence that the Chinese project existed.
OpenAI executives developed what became known internally as the "countries plan," a scheme to play world powers, potentially including China and Russia, against one another in a bidding war for A.I. technology. Policy adviser Page Hedley was aghast. The plan was abandoned after employees threatened to quit, but Altman pursued variations, including a 2018 event at the Hotel Bel-Air where he pitched billionaires on a global cryptocurrency "redeemable for the attention of the AGI."
Altman cultivated relationships with Gulf autocracies. He met Saudi Crown Prince Mohammed bin Salman in 2016 and later joined the advisory board for Neom, a Saudi megaproject, even after the assassination of journalist Jamal Khashoggi. He departed the Neom board under pressure but continued seeking Saudi funding.
The larger plan, ChipCo, envisioned Gulf states funding the construction of massive chip foundries and data centers, some located in the Middle East. National-security officials worried about concentrating advanced A.I. infrastructure in Gulf autocracies with ties to Chinese technology. The Biden Administration withheld approval.
The Trump Era and Stargate
After Trump's inauguration, Altman announced Stargate, a five-hundred-billion-dollar joint venture to build A.I. infrastructure. The Trump Administration rescinded Biden's export restrictions on A.I. technology. Altman and Trump traveled to the Saudi royal court. Stargate announced plans to expand into the U.A.E. with a data-center campus seven times larger than Central Park.
Altman had donated a million dollars to Trump's inaugural fund, despite having earlier called Trump "an unprecedented threat to America." He became one of Trump's favored tycoons, accompanying him to Windsor Castle.
Anthropic, the Pentagon, and Military A.I.
When Defense Secretary Pete Hegseth demanded that Anthropic abandon its prohibitions on autonomous weapons and domestic mass surveillance, Amodei declined, and Hegseth designated Anthropic a "supply-chain risk." OpenAI moved quickly to fill the gap. Although Altman publicly claimed that OpenAI shared Anthropic's ethical boundaries, he had by then been negotiating with the Pentagon for at least two days, and OpenAI announced military adoption of its models the same night.
Many ChatGPT users deleted the app. At least two senior employees departed. At a staff meeting, Altman told employees who raised concerns, "You don't get to weigh in on that."
The Current Landscape
OpenAI is reportedly preparing for an I.P.O. at a potential trillion-dollar valuation. The company has closed many safety-focused teams, and the Future of Life Institute gave it an F on existential safety. When reporters asked to interview researchers working on existential safety, an OpenAI representative replied, "That's not, like, a thing."
Altman's allies dismiss critics as naïve or as "doomers." His detractors — including former board members, former executives, and senior Microsoft personnel — describe a pattern of deception extending across his career. One Microsoft executive said there is "a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer." Mira Murati, OpenAI's former CTO, said, "We need institutions worthy of the power they wield."
Altman attributes his shifting commitments to rapidly changing circumstances. He insists that he continues to prioritize safety, though when pressed for specifics he says, "We still will run safety projects, or at least safety-adjacent projects." He describes himself as having little personal ambition ("I weirdly have very little personal ambition"), but a former employee recalls him saying, "I don't care about money. I care more about power."