Right Back Where We Started From

A wild few days at OpenAI exposed fault lines in the world of tech.

Happy Wednesday! The White House’s National Christmas Tree toppled over on the Ellipse yesterday, and the National Park Service is blaming strong winds—the same gusts responsible for lifting a 15-foot-tall George Santos balloon into both the air and our nightmares.

Quick Hits: Today’s Top Stories

  • In the fifth such exchange since a temporary ceasefire was instituted late last week, Hamas freed 12 more hostages on Tuesday—10 Israelis and two Thai nationals—and Israel released 30 Palestinian prisoners in return. Fighting between Hamas and the Israel Defense Forces (IDF) briefly flared in northern Gaza on Tuesday afternoon, however, with the IDF saying Hamas fighters had targeted Israeli troops with explosives and gunfire and that Israeli forces responded but stayed within the bounds of the truce. A Hamas spokesperson claimed it was the IDF that first violated the ceasefire, but the skirmish has not yet threatened plans for an additional exchange today. The 48-hour extension to the ceasefire expires after today, though U.S. and Israeli intelligence officials met in Qatar yesterday to discuss plans for another potential extension. In a tweet posted Tuesday night, President Joe Biden seemed to waver in his support for Israel’s continued offensive into Gaza. “Hamas unleashed a terrorist attack because they fear nothing more than Israelis and Palestinians living side by side in peace,” the president said. “To continue down the path of terror, violence, killing, and war is to give Hamas what they seek. We can’t do that.”
  • Armed attackers in Sierra Leone’s capital of Freetown assaulted a military base and two prisons on Sunday, releasing hundreds of inmates, in what government officials on Tuesday labeled an attempted coup. “The incident was a failed attempted coup,” Chernoh Bah, the country’s Information Minister, said yesterday. “The intention was to illegally subvert and overthrow a democratically elected government.” At least 19 people were killed in the fighting, and 13 military officers and one civilian have thus far been arrested in connection to the plot. Sierra Leone President Julius Maada Bio was re-elected this summer in a tight contest, but the results were disputed by the main opposition candidate and more than a dozen soldiers were arrested in August on charges of “subversion.”
  • Rescuers on Tuesday freed all 41 construction workers trapped in a Himalayan tunnel in India after a landslide blocked the exit on November 12. Teams of rescue workers laid pipes through 200 feet of rubble and debris—much of it excavated by hand after drilling machines broke down—to create an escape passage for the trapped men. Experts investigating what happened reportedly discovered that the tunnel was constructed without an emergency exit, leading the Indian government to order a safety audit of all tunnels currently under construction. 
  • Hunter Biden’s lawyer Abbe Lowell said Tuesday in a letter shared with the House Oversight Committee that the president’s son is willing to testify publicly in a hearing before the committee. “We have seen you use closed-door sessions to manipulate, even distort the facts and misinform the public,” Lowell wrote. “We therefore propose opening the door.” The Oversight Committee subpoenaed Hunter earlier this month to appear for a closed-door deposition, and on Tuesday turned down the Biden legal team’s offer of public testimony. “Hunter Biden is trying to play by his own rules instead of following the rules required of everyone else,” Committee Chair James Comer said in a statement rejecting Hunter’s request for an open hearing, though he noted the younger Biden may be given the “opportunity to testify in a public setting at a future date.”
  • Charlie Munger, the billionaire vice chairman of Berkshire Hathaway, died yesterday at age 99. Munger worked as a close partner to Warren Buffett as the pair built Berkshire into an investing giant—Buffett credited him with developing the company’s investing blueprint. “Berkshire Hathaway could not have been built to its present status without Charlie’s inspiration, wisdom and participation,” Buffett said in a statement.

Altman is Out, Altman is In

OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on November 6, 2023 in San Francisco, California. (Photo by Justin Sullivan/Getty Images)

We’d be lying if we said we weren’t a little tired of comparing developments in artificial intelligence (AI) to movies. But the events of the last few weeks at OpenAI—the AI startup behind the now-ubiquitous generative AI chatbot ChatGPT—have all the makings of a great Aaron Sorkin film, assuming AI doesn’t put him out of a job first.

The five-day showdown between OpenAI CEO Sam Altman and the company’s board of directors was dramatic: Altman was abruptly fired and then reinstated, and all but one of the board members who had engineered his firing were themselves let go. The saga is still shrouded in mystery, but at its core, the battle for OpenAI may illustrate how the divide over the pace of AI development is polarizing Silicon Valley.

Altman, along with nine others (including Elon Musk), founded the company at the heart of the recent AI boom in 2015. OpenAI’s structure is unusual: It began as a nonprofit with the mission of ensuring “that artificial general intelligence benefits all of humanity.” The nonprofit framework was meant to insulate that mission from profit-driven incentives as the group developed its own AI models—but given the enormous capital required to build AI-grade computing power, a pure nonprofit quickly proved unsustainable. In 2019, OpenAI formed a for-profit subsidiary to attract investment and, eventually, revenue. That subsidiary remained accountable to the nonprofit’s board, which stayed on as steward of the mission, and for-profit investors (including Microsoft, which came to own a 49 percent stake in the for-profit arm) had no formal input in company decisions.

OpenAI launched ChatGPT, a groundbreaking chatbot that generates textual responses to user prompts in real time, a year ago this week. The tool is a cross between a search engine—it can answer a simple question in a conversational way—and a robot that can do your homework, and its responses read eerily like human conversation. The generative AI can write or fix computer code, draft an essay or other creative product, translate text from one language to another, or tell you what to make for a brunch for six people with dietary restrictions, among a host of other capabilities. (We asked ChatGPT to tell us what it is and what it does in the style and tone of an edition of TMD, and let’s just say we’re pretty confident it won’t be replacing us any time soon.)
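
For readers curious about the plumbing: Developers can reach the same models that power ChatGPT programmatically through OpenAI’s API, rather than typing into the chat window. What follows is a minimal sketch, assuming the official openai Python package (version 1 or later) and an API key stored in the OPENAI_API_KEY environment variable; the model name and the brunch prompt are purely illustrative.

    # Minimal sketch: asking an OpenAI chat model the brunch question above.
    # Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; the model behind the free tier of ChatGPT
        messages=[
            {
                "role": "user",
                "content": "Suggest a brunch menu for six people, "
                           "two of them vegetarian and one gluten-free.",
            },
        ],
    )

    # The reply arrives as plain text, in the same conversational register
    # the chat window produces.
    print(response.choices[0].message.content)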

The chatbot’s inability to replicate our unparalleled editorial voice isn’t its only limitation. Sometimes ChatGPT “hallucinates”—that is, it provides information that is simply incorrect, even if it sounds convincing (a word of caution for our younger readers trying to pass AI content off as their own schoolwork). The “large language model” (LLM) that generates the responses has to be fed vast quantities of information to keep up; the currently free, publicly available ChatGPT model works only off data from before January 2022.

Other tech companies soon jumped into the game and attempted to develop rival systems (sometimes with … interesting results)—but OpenAI’s product remains shockingly superior, especially given the company’s relatively small size next to massive competitors like Meta or Google parent company Alphabet. “The reason they’ve been able to set the market is because their technology is the best, and we don’t really know why,” Michael Frank, senior fellow in the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS), told TMD.

ChatGPT’s paradigm-shifting success put Altman in the spotlight as generative AI’s poster child: He testified before Congress in May and quickly became D.C.’s go-to AI guru. He spoke in favor of regulations on AI research and development—though some of his critics warned such rules could lead to regulatory capture, cutting competition off at the knees by erecting barriers latecomers to the industry couldn’t overcome and creating an AI oligopoly.

That spotlight, though, never shined brighter than when he was suddenly fired the Friday before Thanksgiving. The boardroom coup, which one high-profile tech investor likened to Steve Jobs’ 1985 ouster from Apple, was carried out by the four members of the six-person board most concerned about the risks inherent in AI development (Altman himself held a seat, as did Greg Brockman, another co-founder). Among the members who voted to dismiss Altman was Ilya Sutskever, OpenAI’s chief scientist, who is widely considered a visionary in the field. Sutskever and Altman had reportedly clashed over the speed at which AI technology was—and should be—developing.

“Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” the panel said in a cryptic statement that seems to allude to OpenAI’s mission statement. “The board no longer has confidence in his ability to continue leading OpenAI.” 

The reasoning behind that loss of confidence remains fuzzy, but the board and Altman had reportedly feuded over a paper one member, Helen Toner, had written in her capacity as a scholar at Georgetown University. The paper seemed critical of OpenAI’s approach to AI safety and complimentary of a competitor, Anthropic.

By Friday night, Altman and Brockman—who had been serving as president of OpenAI before stepping down in solidarity with Altman—were reportedly preparing to pitch another AI startup. Altman and the board negotiated over the weekend to find a way to bring him back, but those talks broke down Sunday and Emmett Shear, the co-founder of the streaming service Twitch, was appointed interim CEO. In perhaps another clue to the board’s beef with Altman, Shear had previously suggested a preference for slowing the pace of AI research in the name of safety precautions. “If we’re at a speed of 10 right now, a pause is reducing to 0,” he tweeted in September. “I think we should aim for a 1-2 instead.”

Microsoft offered another solution to the leadership crisis. The company is deeply entwined with OpenAI—the minority partner has invested some $13 billion, provides the model’s cloud computing services, and uses OpenAI’s building blocks to power its own chatbot. However, Microsoft has neither a seat on the board nor any formal power over its actions. Early last week, the company suggested Brockman and Altman start their own AI shop at Microsoft. 

Altman’s firing went over like a ton of Nokia bricks with OpenAI’s employees, 702 (out of almost 800) of whom signed a letter to the board saying they had lost confidence in the panel and would join their former executives at Microsoft unless Altman was reinstated—an exodus that would have functionally ended the company. Among the signatories to that letter was Sutskever, who’d been the one to tell Altman he was fired during the Friday board meeting. In a tweet Monday, Sutskever said he “deeply regretted” his role in Altman’s ouster. 

By Tuesday, the warring factions had reached a deal: Altman was right back where he started as CEO, and the board members who voted to fire him were themselves dismissed—with the notable exception of Adam D’Angelo, the CEO of Quora. Joining D’Angelo on the reconstituted board were former Treasury Secretary Larry Summers and the new chair, Bret Taylor, the former co-CEO of Salesforce who, as chairman of Twitter’s board, oversaw the company’s $44 billion sale to Musk last year.

Altman celebrated his return. “i love openai, and everything i’ve done over the past few days has been in service of keeping this team and its mission together [sic],” Altman tweeted Tuesday, effortlessly contravening every law of capitalization and punctuation known to man. “with the new board and w [Microsoft CEO Satya Nadella’s] support, i’m looking forward to returning to openai, and building on our strong partnership with [Microsoft]” [sic].

It’s not yet clear what the failed coup and the reconstituted board will mean for OpenAI, but many in the industry have viewed the episode as a proxy war over the speed of AI development and the priority given to safety. There’s something of a spectrum among those who claim to feel moral qualms about the apocalypse-movie-fodder potential of AI, and OpenAI’s mission statement and structure suggest the company is inherently interested in such concerns. Whether that remains true is an open question, since the board members who seemed most sensitive to those concerns were shown the door.

Those who favor slowing the development of AI—either through policy and oversight or the willing participation of developers—are sensitive to the potential for disaster when AI becomes AGI: artificial general intelligence. In Klon Kitchen’s words, AGI is “an AI that can learn, reason, and apply its intelligence universally, not confined to specialized tasks. It’s the ultimate goal: an AI with the versatility and adaptability of a human mind.” This camp—AI doomers, if you will—is broadly concerned about what happens if AI becomes sophisticated enough to make its own decisions, and what happens if those decisions are to the detriment of humanity. They believe the solution is to proceed cautiously and slowly, with serious ethical oversight. 

Others—so-called AI accelerationists—seek to balance safety with speed, and their concern about the risks of AI fuels a motivation to be on the cutting edge of the technology. Most members of this camp don’t favor rushing into AI development carelessly, but do tend to think technological development, particularly around AI, is inevitable and can be beneficial to humanity. “The unifying principle there is, ‘This is happening: There will be AI,’” CSIS’ Michael Frank told TMD. “And so if that’s the case, then you want the people who care about developing safe AI” to get there first. Many effective altruists think they “have a responsibility to be the first one[s] so that [they] can shape the future of AI,” he added.

But those who advocate slowing the pace of research—or even pausing it—are fighting a losing battle, even if they share the same goals as those who favor stewardship by way of acceleration. And never mind those in the industry who are primarily interested in the profits to be made in AI. “This reflects the fundamental reality that you have one group that needs the other group to agree with its perspective, and the [accelerationists don’t] need to win over the other group,” Frank said.

As the industry squabbles over how much to regulate itself, however, governments and policymakers are beginning to wade into the debate. President Joe Biden signed what may be the longest-ever executive order late last month, instructing executive agencies to research how AI might affect their work. A British-hosted summit of world leaders and AI industry titans produced an acknowledgement of the great power and latent danger of AI, but few concrete action items. Biden and Chinese President Xi Jinping agreed in principle this month on the threat of AI, particularly in a military context. And this past Monday, 18 countries (including the U.S. and Britain) signed an international agreement meant to protect AI from rogue actors and push companies to create systems that are “safe by design.” Still, there’s been no serious legislative effort to regulate AI research and development or set up meaningful guardrails for when and how to use AI.

Even as the hand-wringing continues over what the OpenAI saga means for the roiling debate, it’s possible the drama has been overinterpreted. “I think it would be a little harsh to say that this new iteration of OpenAI is abandoning those values [articulated in the mission statement],” Frank said. “I just look at the roster going into this—you have a lot of people who care about this and coming out the other side, the vast majority of them see their future continuing at OpenAI.”

Worth Your Time

  •  The Atlantic published the first excerpt of Tim Alberta’s latest book—The Kingdom, the Power, and the Glory: American Evangelicals in an Age of Extremism—in which Alberta recounted the stark awakening he experienced upon returning to his hometown church for his father’s funeral in 2019. “People from the church—people I’d known my entire life—were greeting me, not primarily with condolences or encouragement or mourning, but with commentary about [Rush] Limbaugh and Trump,” Alberta wrote. “Some of it was playful, guys remarking about how I was the same mischief-maker they’d known since kindergarten. But some of it wasn’t playful. Some of it was angry; some of it was cold and confrontational. One man questioned whether I was truly a Christian. Another asked if I was still on ‘the right side.’ All while Dad was in a box a hundred feet away.” Alberta’s father served as senior pastor at the church for 26 years. “Here, in our house of worship, people were taunting me about politics as I tried to mourn my father,” he wrote. “I was in the company of certain friends that day who would not claim to know Jesus, yet they shrouded me in peace and comfort. Some of these card-carrying evangelical Christians? Not so much. They didn’t see a hurting son; they saw a vulnerable adversary.”

Presented Without Comment 

CNN: Sports Illustrated Deletes Articles Published Under Fake Author Names and AI-Generated Profile Photos

Also Presented Without Comment

C-SPAN: In describing the support the Education Department could offer states earlier this month, Education Secretary Miguel Cardona reached for a familiar quote: “I think it was President Reagan [who] said, ‘We’re from the government, we’re here to help.’”

Also Also Presented Without Comment 

The Jewish News of Northern California: Oakland City Council OKs Cease-Fire Measure After Hours of Vitriol

Toeing the Company Line

  • Unrest in Ireland, 2024 polling numbers, and the latest developments in Trump’s legal cases. Kevin was joined by Adaam, James, Drucker, and Sarah to discuss all that and more on last night’s Dispatch Live (🔒). Members who missed the conversation can catch a rerun—either video or audio-only—by clicking here.
  • Alex pulled double duty on Tuesday, fact checking still more Pizzagate claims and analyzing an assertion by Kevin McCarthy that the U.S. never asked for land after winning a war.
  • In the newsletters: Nick assessed how the 2024 GOP field is kinda, sorta coalescing—unlike in 2016.
  • On the podcasts: Jonah is joined by his fellow American Enterprise Institute scholar Danielle Pletka on The Remnant to discuss the ongoing wars in Ukraine and the Middle East. 
  • On the site: Kevin asserts that both parties are alienating the folks we used to call yuppies, while Jonah argues that Democrats have alienated the working class in their swing toward identity politics.

Let Us Know

When it comes to artificial intelligence, would you label yourself a doomer or an accelerationist? Something in between?

James Scimecca works on editorial partnerships for The Dispatch, and is based in Washington, D.C. Prior to joining the company in 2023, he served as the director of communications at the Empire Center for Public Policy. When James is not promoting the work of his Dispatch colleagues, he can usually be found running along the Potomac River, cooking up a new recipe, or rooting for a beleaguered New York sports team.

Mary Trimble is the editor of The Morning Dispatch and is based in Washington, D.C. Prior to joining the company in 2023, she interned at The Dispatch, in the political archives at the Paris Institute of Political Studies (Sciences Po), and at Voice of America, where she produced content for their French-language service to Africa. When not helping write The Morning Dispatch, she is probably watching classic movies, going on weekend road trips, or enjoying live music with friends.

Grayson Logue is the deputy editor of The Morning Dispatch and is based in Philadelphia, Pennsylvania. Prior to joining the company in 2023, he worked in political risk consulting, helping advise Fortune 50 companies. He was also an assistant editor at Providence Magazine and is a graduate student at the University of Edinburgh, pursuing a Master’s degree in history. When Grayson is not helping write The Morning Dispatch, he is probably working hard to reduce the number of balls he loses on the golf course.
