“I don’t think Sam is the person who should have his finger on the button.”
An extensive report in The New Yorker has once again brought to the fore a question that OpenAI has never fully managed to resolve: whether Sam Altman is a trustworthy figure to lead a technology that the industry itself has described for years as potentially transformative and dangerous. The piece, written by Ronan Farrow and Andrew Marantz, reconstructs through interviews and internal documents the doubts that led part of OpenAI's leadership to try to remove him in 2023, and it does so around a central idea that is very difficult to ignore: for several of his former allies, the problem was not only one of management, but of personal trust.
The most powerful phrase attributed to Ilya Sutskever sums up the tone of the report quite well: “I don’t think Sam is the person who should have his finger on the button.” According to The New Yorker, that fear crystallized in internal memos accusing Altman of a pattern of deceptions, omissions and contradictory accounts given to executives and board members. The publication maintains that these documents, never revealed until now, reinforced the conviction of several executives that the CEO of OpenAI was not the right person to oversee a technology with that level of potential impact.
The report does not present a single piece of conclusive evidence, but rather something perhaps more uncomfortable: an accumulation of episodes which, according to its authors, outlines a leadership style based on telling each interlocutor what they need to hear, postponing conflicts and circumventing the structures that in theory should limit his power. This is where another of the phrases cited in the piece fits in, attributed to Dario Amodei, now at the head of Anthropic: “OpenAI’s problem is Sam himself.”
Altman’s fall and return no longer look like a closed episode
Until now, Altman’s dismissal and rapid reinstatement in 2023 had settled into the public narrative as a kind of poorly resolved mutiny, a governance crisis amplified by investors, Microsoft and employee pressure. But The New Yorker gives it a much more disturbing reading. It speaks not of a simple clash of egos or a rebellion by the most alarmist faction, but of an internal struggle over a far more basic question: was the head of OpenAI telling the truth when he talked about safety, internal commitments and control processes?
The outcome of that episode is now history. Altman returned, his critics lost influence, and OpenAI continued to grow into one of the most influential companies in the sector. But that is precisely the strength of the report: the more power Altman accumulates, the more relevant that initial doubt becomes. The question is no longer just why they tried to oust him, but whether those who did were seeing something the rest of the industry preferred to overlook because the business was too big to stop.
The problem no longer affects just OpenAI, but the entire industry
The piece also leaves a conclusion that goes beyond the character himself. Although the focus is on Altman, what emerges is an industry that claims to take safety very seriously while rewarding just the opposite: speed, growth, contracts, infrastructure and political power. In that context, the fact that part of the ecosystem does not trust Altman does not stop it from advancing. On the contrary, it seems almost compatible with the current incentive system.
That explains why the report is so uncomfortable. It questions not only Sam Altman, but also those who have accepted that such a figure should concentrate ever more influence. The New Yorker does not claim that every accusation is a proven fact, but it does make clear that doubts about his integrity never completely disappeared. And that, when it comes to the person running one of the world's most powerful AI companies, is hardly a trivial matter.
