As the world changes, so does the era of artificial intelligence…
Now that the dust has settled a bit, the behind-the-scenes story is starting to come together.
As we all know, OpenAI CEO Sam Altman was suddenly fired by the board of directors, triggering an implosion of epic proportions that threatened the company’s future, investor rights, and partnerships.
This series of events also highlights the importance of trust, governance, and purpose. If nothing else, I'm grateful for that.
It has since been revealed that there were two factions: 1) Chief Scientist Ilya Sutskever and board member Helen Toner, and 2) Sam Altman and Greg Brockman.
At the heart of the problem appears to be a very human conflict. No, I'm serious.
A unique organizational structure to manage OpenAI's for-profit and non-profit arms
OpenAI was founded in 2015 as a non-profit organization to build artificial intelligence that is safe and benefits humanity. That framing was essential to its sacred mission of building superintelligent systems that rival the human brain. But what began as a privately donor-funded venture soon ran into business needs and opportunities.
In 2019, the company created a for-profit subsidiary and went on to raise billions of dollars in funding, including $1 billion from Microsoft. The new subsidiary would be controlled by the non-profit's board of directors, whose mandate is to serve humanity, not OpenAI's investors.
Some board members felt that, with ChatGPT's unprecedented popularity, the balance between serving humanity and growing OpenAI had been lost.
In particular, a rift developed between Altman and Helen Toner, a (former) board member and director of strategy and foundational research grants at Georgetown's Center for Security and Emerging Technology (CSET).
Toner's work with Open Philanthropy, which committed $30 million to OpenAI early on, may have helped her win her board seat.
No longer seeing eye to "AI"
It later became clear that Toner and Altman no longer saw eye to eye.
They reportedly met a few weeks earlier to discuss a paper she co-authored, which appeared to criticize OpenAI while praising the company's main competitor, Anthropic. Anthropic was founded by senior OpenAI scientists and researchers who left after a series of disagreements with Altman. They had asked the board to oust Altman in 2021, but failed and left the company. These past events would play a role in later developments…
Altman complained to Toner that the paper criticized OpenAI’s safety and ethics practices. His view was that her words were dangerous for the company and its investors.
Altman later sent an email saying they “didn’t agree on the damage this was causing.” He emphasized that “any criticism from board members carries a lot of weight.”
And he's right. That's exactly why it's important to have an organizational and board structure that aligns with the company's purpose, mission, and strategy.
So what did Toner say?
Here are excerpts from her paper:
"Anthropic's decision represents an alternative strategy for reducing 'race-to-the-bottom' dynamics on AI safety. Where the GPT-4 system card reflected OpenAI's emphasis on the costly nature of building safe systems, Anthropic's decision to keep its product off the market was a costly signal of restraint. By delaying the release of Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur."
Altman discussed with Chief Scientist Ilya Sutskever whether Toner should be removed from the board. Instead, Sutskever sided with Toner. The events leading up to Anthropic's creation likely contributed to his reasoning.
In the end, it was not Toner who was ousted, but Altman.
It is now well known that Altman's sudden firing appears reckless rather than strategic and methodical. Microsoft CEO Satya Nadella, arguably OpenAI's most valuable partner and investor, was notified just a minute before the announcement.
Hours later, the board met with employees, who stressed that the decision put the company in serious danger.
But the board remained unmoved. Toner reminded employees that the board's mission is to create artificial intelligence that "benefits all of humanity." According to The New York Times, Toner went a step further, saying that if the company were destroyed, "that could be consistent with its mission."
Record scratch. Freeze frame on a shocked Sam Altman. Voiceover: "Yup, that's me. You're probably wondering how I got here."
This was a coup that cut both ways.
Some believe Altman was moving too quickly, not playing by rules that benefit humanity, and not listening to people expressing concerns or contrary ideas.
It wasn't until co-founder Greg Brockman resigned, nearly every employee threatened to quit, and Microsoft offered everyone a position in a new artificial intelligence research division that Sutskever realized the damage done to a company he cared about deeply.
He later tweeted, or is it xeeted now? (That's another conversation we need to have.) "I deeply regret my participation in the board's actions," he admitted. "I never wanted to hurt OpenAI," he continued.
But hurt the company he did, even if it was done at the pleasure of the board.
Before the saga ended, the board named Emmett Shear interim CEO. One of his first tasks was to seek evidence supporting Altman's firing, and he threatened to resign if he didn't receive it. Narrator: "He never received the evidence." Still, he deserves credit for helping lay the groundwork for a reunion. Not bad for a three-day stint.
But there's more. Reuters reported that several researchers wrote to the board of directors warning that a powerful artificial intelligence discovery, code-named Q* (Q-Star), could threaten humanity.
On November 16, Altman publicly stated that OpenAI had recently achieved a major breakthrough, pushing "the frontier of discovery forward." Adding to the rolling thunder, he noted that this was only the fourth such breakthrough in the company's eight-year history.
As we all know, Altman returned to OpenAI as CEO, though without a board seat. Brockman also returned, likewise without a board seat. The board has been restructured, with Bret Taylor serving as chairman, Larry Summers joining, and Adam D'Angelo remaining from the original board. The Verge reports that the board is now looking to expand to nine seats to reset OpenAI's governance.
The damage has been done. But the silver lining is that while the board failed, it did, effectively and expensively, expose the enormous need for AI ethics, safety, and governance.
Now the real work begins.
Trust must be earned back, not just for the company, but for the AI industry and movement as a whole. Existential threats must not yield to unfettered capitalism or short-termism. Humanity needs its benefactors and protectors.
Every new feature and breakthrough requires careful analysis, outside voices, philosophical debate, and a board of directors that can balance innovation with ethics and safety.
There is a lot to sort through and analyze. If anything, governance, trust, and purpose come together to represent the heart of the matter.
It's a time to learn from mistakes and move forward: opening the door to a variety of thoughtful perspectives, balancing progress with humanity, and communicating transparently about what's right and what's wrong.
Never forget: every successful company knows it's nothing "without its people."
Happy Thanksgiving to everyone!
Sources:
The New York Times: Cade Metz, Tripp Mickle, Mike Isaac
Bloomberg: especially Emily Chang, Katie Roof, Ed Ludlow
The Information: Jessica Lessin, Amir Efrati
The Verge: Nilay Patel, Alex Heath
Siqi Chen (@blader)