Around noon on Nov. 17, Sam Altman, the chief executive of OpenAI, joined a video call and found members of the company's board waiting for him.
Mr. Altman, who had parlayed the success of OpenAI’s ChatGPT chatbot into personal stardom beyond the tech world, had a meeting lined up that day.
Instantly, Mr. Altman knew something was wrong.
Unbeknownst to Mr. Altman, Dr. Sutskever and the three board members had been whispering behind his back for months. They believed Mr. Altman had been dishonest and should no longer lead a company that was driving the A.I. race. On a hush-hush 15-minute video call the previous afternoon, the board members had voted one by one to push Mr. Altman out of OpenAI.
Now they were delivering the news. Shocked that he was being fired from a start-up he had helped found, Mr. Altman widened his eyes and then asked, “How can I help?” The board members urged him to support an interim chief executive. He assured them that he would.
Within hours, Mr. Altman changed his mind and declared war on OpenAI’s board.
His ouster was the culmination of years of simmering tensions at OpenAI that pitted those alarmed by A.I.’s power against others who saw the technology as a once-in-a-lifetime profit and prestige bonanza. As divisions deepened, the organization’s leaders sniped and turned on one another. That led to a boardroom brawl that ultimately showed who has the upper hand in A.I.’s future development: Silicon Valley’s tech elite and deep-pocketed corporate interests.
The drama embroiled tech leaders and investors across Silicon Valley. Some fought back from Mr. Altman’s $27 million mansion in San Francisco.
At the center of the storm was Mr. Altman, a 38-year-old multimillionaire. A vegetarian who raises cattle and a tech leader with little engineering training, he is driven by a hunger for power more than by money, a longtime mentor said. And even as Mr. Altman became A.I.’s public face, charming heads of state with predictions of the technology’s positive effects, he privately angered those who believed he ignored its potential dangers.
OpenAI’s chaos has raised new questions about the people and companies behind the A.I. revolution. If the world’s premier A.I. start-up can so easily plunge into crisis over backbiting behavior and slippery ideas of wrongdoing, can it be trusted to advance a technology that may have untold effects on billions of people?
“OpenAI’s aura of invulnerability has been shaken,” one observer of the company said.
An Incendiary Mix
From the moment it was created in 2015, OpenAI was primed to combust.
The San Francisco lab was founded as a nonprofit with a mission of building safe A.I.
The board was stacked with people who had competing A.I. philosophies. On one side were those who worried about A.I.’s dangers, like Dr. Sutskever.
In 2019, Mr. Altman — who had extensive contacts in Silicon Valley as president of a prominent start-up incubator — took over as OpenAI’s chief executive.
“Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does,” said a longtime mentor of Mr. Altman’s.
Mr. Altman quickly changed OpenAI’s direction by creating a for-profit subsidiary and raising $1 billion from Microsoft, spurring questions about how that squared with the board’s mission of safe A.I.
Earlier this year, departures shrank OpenAI’s board to six people from nine. Three of them — Mr. Altman and Dr. Sutskever among them — were OpenAI insiders; the other three were independent directors.
They were united by a concern that A.I. could become more intelligent than humans.
Tensions Mount
After OpenAI introduced ChatGPT last year, the board became jumpier.
As millions of people used the chatbot to write love letters and brainstorm college essays, Mr. Altman embraced the spotlight. He appeared with heads of state around the world.
Yet as Mr. Altman raised OpenAI’s profile, some board members worried that ChatGPT’s success was antithetical to creating safe A.I., two people familiar with their thinking said.
Their concerns were compounded when they clashed with Mr. Altman in recent months over who should fill the board’s three open seats.
In September, Mr. Altman met investors in the Middle East to discuss an A.I. chip project. The board was concerned that he wasn’t sharing all his plans with it, three people familiar with the matter said.
Dr. Sutskever, 37, who helped pioneer modern A.I., was especially disgruntled. He had become fearful that the technology could wipe out humanity. He also believed that Mr. Altman was bad-mouthing the board to OpenAI executives, two people with knowledge of the situation said. Other employees had also complained to the board about Mr. Altman’s behavior.
In October, Mr. Altman promoted another OpenAI researcher to the same level as Dr. Sutskever, who saw it as a slight. Dr. Sutskever told several board members that he might quit, two people with knowledge of the matter said. The board interpreted the move as an ultimatum to choose between him and Mr. Altman, the people said.
Dr. Sutskever’s lawyer said it was “categorically false” that he had threatened to quit.
Another conflict erupted in October when Ms. Toner published a paper, “Decoding Intentions: Artificial Intelligence and Costly Signals,” at her Georgetown think tank. In it, she and her co-authors praised the safety practices of one of OpenAI’s rivals.
Mr. Altman was displeased.
The paper was merely academic, Ms. Toner said, offering to write an apology to OpenAI’s board. Mr. Altman accepted. He later emailed OpenAI’s executives, telling them that he had reprimanded Ms. Toner.
“I did not feel we’re on the same page on the damage of all this,” he wrote.
Mr. Altman called other board members and said Ms. McCauley wanted Ms. Toner removed from the board, people with knowledge of the conversations said. When board members later asked Ms. McCauley if that was true, she said that was “absolutely false.”
“This significantly differs from Sam’s recollection of these conversations,” an OpenAI spokeswoman said, adding that the company was looking forward to an independent review of what transpired.
Some board members believed that Mr. Altman was trying to pit them against each other. Last month, they decided to act.
Dialing in from