Tomorrow Never Knows

The Biden administration seeks to rein in rapidly advancing artificial intelligence tech.

Happy Friday! You’ve probably heard it Here, There, and Everywhere by now—but Yesterday, nearly 44 years after it was first recorded, The Beatles released a brand new song titled “Now and Then.” Surviving Beatles Paul McCartney and Ringo Starr Came Together to finish the track—originally a John Lennon demo, buttressed with recordings of George Harrison from the 1990s—With a Little Help from AI.

Quick Hits: Today’s Top Stories

  • The Wall Street Journal reported on Thursday that the Wagner Group, the Russian mercenary organization, plans to send an air-defense system to Hezbollah, the Iran-backed terrorist organization based in Lebanon. The report, based on intelligence cited by U.S. officials, comes ahead of a major speech Hezbollah leader Hassan Nasrallah will deliver this morning—which experts believe could shed light on Hezbollah’s intentions for its future involvement in the Hamas-Israel war.
  • Secretary of State Antony Blinken arrived in Israel today, reportedly to press the Israeli government to agree to regular “humanitarian pauses” in hostilities in Gaza to allow hostages to be released and aid to be distributed in the Strip. Israeli Prime Minister Benjamin Netanyahu had previously ordered a pause in bombing Gaza to allow two American hostages to be released, President Joe Biden revealed on Wednesday. Meanwhile, the House of Representatives voted 226-196 on Thursday—with 12 Democrats voting with all but two Republicans—to approve $14.3 billion in aid to Israel tied to domestic spending cuts—and without funds for Ukraine. The package is dead on arrival in the Democratic-controlled Senate, where many lawmakers favor linking the Israel aid to funding for Ukraine’s war effort—without spending cuts on domestic priorities.
  • Pakistan began deporting undocumented Afghan refugees on Wednesday, sending them back to Taliban-run Afghanistan in accordance with a November 1 deadline, set last month, for undocumented migrants to leave the country. Some of the 1.7 million Afghans living in Pakistan without papers have been there for many years—or were even born there. Others fled to the neighboring country after the Taliban takeover of Afghanistan in 2021. Around 200,000 undocumented Afghans have already voluntarily returned to Afghanistan.
  • The Senate confirmed several high-ranking military officials on Thursday, including Adm. Lisa Franchetti as chief of naval operations, making her the first female member of the Joint Chiefs. The confirmations came in spite of a months-long blockade by Republican Sen. Tommy Tuberville of Alabama, who prevented the Senate from confirming hundreds of military promotions in large batches—as is typically done for non-controversial nominations through a process known as unanimous consent—over concerns about the Pentagon’s abortion policy. The move to confirm the military leaders came in response to pressure from senators of Tuberville’s own party, who argued that his actions undercut military readiness.
  • A jury on Thursday found Sam Bankman-Fried, who founded the FTX cryptocurrency exchange, guilty on seven charges related to wire fraud, money laundering, and conspiracy. The month-long trial revealed evidence that Bankman-Fried stole $10 billion from customers to fund political contributions, risky investments, and real estate ventures, among other expenses. Sentencing is currently scheduled for March, but Bankman-Fried is expected to appeal the verdict.
  • Desmond Mills Jr., one of the five former Memphis police officers accused of fatally beating Tyre Nichols in January, changed his plea from “not guilty” to “guilty” on Thursday on two of the four federal charges against him: using excessive force and failing to intervene in the unlawful assault, and conspiring to cover up his use of unlawful force. Mills has also agreed to plead guilty to related state charges as part of the plea deal, according to prosecutors, and will be called to testify against the other four defendants. State and federal prosecutors said they would recommend Mills serve 15 years in prison, though the sentencing decision will be up to the judge.
  • The FBI on Thursday searched the home of Brianna Suggs—a key ally of and fundraiser for New York City Mayor Eric Adams—as part of a broad investigation into the mayor’s 2021 campaign involving a potential straw-donor scheme to funnel foreign money into campaign accounts. The search reportedly prompted Adams to cancel several meetings in D.C. on Thursday morning and return to the city to deal with the fallout.
  • Republican Sen. Rick Scott of Florida endorsed former President Donald Trump in the 2024 GOP presidential primary Thursday, snubbing Florida Gov. Ron DeSantis. “I am optimistic that we can return America to its rightful position of economic and military strength and the undisputed moral leader of the free world, but only with strong leadership in the White House,” Scott, who is also the former governor of Florida, wrote for Newsweek. “That is why I support my friend President Donald J. Trump to be the 47th president of the United States and encourage every Republican to unite behind his efforts to win back the White House.”

AI, Robot

(via Getty Images)

President Joe Biden signed a sweeping executive order (EO) on artificial intelligence (AI) this week, and it was—at least partly—inspired by Tom Cruise. White House deputy chief of staff Bruce Reed revealed that the latest installment of the movie star’s Mission: Impossible series, which features a sentient AI as its villain, added to Biden’s worries about the futuristic technology. “If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” Reed said after watching the film with the president.

They’d better not show him Godzilla

On Monday, the president signed what is likely the longest EO in history: the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” At more than 100 pages, the order lays out a framework of guardrails for AI research and development, and it comes as governments across the globe race to keep up with a technology that’s progressing at a near-exponential pace. But length is not a guarantor of efficacy, and it’s far from clear whether state actors will be able to effectively mitigate the risks posed by such an advanced technology.

“To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said in remarks at a signing ceremony on Monday. “Companies must tell the government about the large-scale AI systems they’re developing and share rigorous, independent test results to prove they pose no national security or safety risk to the American people.” The EO deploys a “whole-of-government” approach that has become a staple of the administration’s executive actions, providing directives across most federal agencies. 

In broad strokes, the order mandates the development of standards, monitoring, and review processes for how the government uses AI; it sets guidance on the use of AI across economic sectors, as well as requirements for what the biggest AI developers must share with federal agencies about their systems. Specifically, the EO directs the National Institute of Standards and Technology to come up with guidelines for “developing and deploying safe, secure, and trustworthy AI systems.” The Department of Energy is tasked with developing AI model evaluation tools and “testbeds” to assess whether the outputs of AI systems pose “nuclear, nonproliferation, biological, chemical, critical-infrastructure, and energy-security threats or hazards.”

Much of the order deals with getting the executive branch up to speed on AI, directing different agencies—including the Departments of Homeland Security, Defense, and Treasury—to assess and issue reports on the risks and opportunities of AI as it relates to everything from financial markets to critical infrastructure to health care to civil rights to disinformation and deepfake content. The administration also directed agencies to evaluate the opportunities for innovation that AI presents, and to avoid blanket bans on government use of AI systems. The Office of Management and Budget was tasked with staffing up the government with AI researchers and experts. The administration also invoked the authority of the Defense Production Act to direct private companies building the most powerful AI systems to notify the government of relevant actions—including when they are training their models—and to share the results of safety tests.

“The executive agencies have already been engaged in rulemaking on AI over the last several months,” Michael Frank, senior fellow for AI and advanced technologies at the Center for Strategic and International Studies, told TMD. “This [EO], in some ways, is providing the cover for the agencies that haven’t already engaged in that process, and that will be one of the biggest impacts.”

That cover is particularly useful in an area where most of the rules have yet to be written. “The Biden EO is significant because unless and until a fractious Congress comes together, it’s likely to be the only set of AI rules on the books in the United States,” Jessica Brandt, the policy director for the Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative, told TMD. “For that reason, it helps create a bit of certainty amidst an at times chaotic debate.” 

That said, some researchers worry the order could suffocate innovation and cost the U.S. its edge over geopolitical competitors like China. “The EO appears to be empowering agencies to gradually convert voluntary guidelines into a sort of back-door regulatory regime for AI, a process which would be made easier by the lack of congressional action on AI issues,” said Adam Thierer, a senior fellow for technology and innovation at the R Street Institute. “The danger exists that the U.S. could undermine the policy culture that made our nation a hotbed of digital innovation and investment.” The debate over AI regulation is likely to become even more intense as new developments in the young technology come down the pike—less than a year ago, OpenAI released ChatGPT, the generative AI chatbot that helped supercharge public awareness and use of AI systems.

For now, there are two buckets of AI that present the primary causes for concern: general-purpose AI—including so-called “foundation models,” like the large language models underlying ChatGPT—and narrow AI tailored to specific tasks. “Foundation models (FMs) are an AI trained on large datasets that show generalized competence across a wide variety of domains and tasks, such as answering questions, generating images or essays, and writing code,” the RAND Corporation explained in a recent stakeholder report on AI security guardrails. “The generalized competence of FMs is the root of their great potential, both positive and negative. With proper training, FMs could be quickly deployed to enable the creation and use of chemical and biological weapons, exacerbate the synthetic drugs crisis, amplify disinformation campaigns that undermine democratic elections, and disrupt the financial system through stock market manipulation.”

FMs are systems trained on vast amounts of data—in some cases, big chunks of the entire internet—that some of the largest AI developers (OpenAI, Google, and the Amazon-backed Anthropic, among others) are spending hundreds of millions of dollars to improve. These systems do not currently possess human-level intelligence—also known as general intelligence (think of your average sci-fi movie where AI is the bad guy)—but some researchers and lawmakers are concerned that the rapid advancement of these systems could lead to the creation of artificial general intelligence (AGI), with potentially catastrophic consequences, up to and including human extinction. Scientists have speculated for decades about the risks of AGI, but the exponential growth of AI systems’ power in recent years has heightened such concerns. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the Center for AI Safety said in a one-sentence open letter signed by hundreds of AI scientists earlier this year.

For some AI policy experts, a focus on AGI distracts from the more immediate and non-theoretical risks posed by FMs and narrower AI systems’ current and soon-to-be-achieved capabilities. “I understand that the ‘robots taking over’ is more exciting to have as a dinner conversation, but that is not where governance conversation should be,” Ian Bremmer, the president of the Eurasia Group and a member of the United Nations’ new AI advisory body, told TMD. “When you’re just trying to drive on a road and there’s potholes everywhere, you have to fill the potholes. You don’t think about what’s on the interstate in the next county.”

General-purpose AI—still technological leagues short of AGI—can lead to advances that are difficult to predict. The sheer gains in computing power, which appear to be growing by a factor of 10 each year, open up new possibilities for good and for ill. Researchers could use the models to develop new cures for diseases, while terrorists could use the same models to help create bioweapons—and the same could be said of cyber weapons and even designs for nuclear weapons. The proliferation of AI could drastically reduce the barriers to entry and costs for rogue actors looking to worsen all sorts of security risks—never mind the effects AI-generated content could have on the information ecosystem, politics, and elections. More specialized AI also poses bias risks depending on the data fed into the system: An AI system fed skewed datasets to make decisions about credit scores or loans, for example, risks magnifying errors or discrimination baked into the inputs.

According to one AI policy researcher who has worked on technology policy in multiple administrations and requested anonymity to speak candidly, the success of the Biden administration’s approach to AI will depend on how effectively the agencies are actually able to implement the directives. Plus, aside from requirements to share information with the government, the recent EO lacks enforcement mechanisms to prevent companies from making model advancements or releasing them to the public. 

This may relegate the Biden administration’s order to more of a suggestion to the private sector than an enforceable directive. “The White House has a lot more power over federal agencies than it does private companies,” said Brandt. “What power would the government have if testing revealed problematic results?” How much authority the executive branch agencies have to enforce new AI guidelines and standards once developed will likely be sorted out in the courts. 

Across the globe, nations are waking up to the implications of AI and the need for global cooperation to set some rules of the road. The United Kingdom hosted an AI Safety Summit this past week at Bletchley Park—where Allied codebreakers cracked Germany’s Enigma code during World War II using precursors to modern computers. The event produced a declaration on AI safety and risk mitigation signed by more than two dozen countries, notably including both the U.S. and China. “This is the beginning of a global dialogue that needs to take place,” Frank, who attended the conference, told TMD. “So far, it seems to be a constructive one.” G7 countries also agreed last week to a statement of principles and a voluntary code of conduct for AI developers. Some tech experts have even proposed an independent global body—akin to the Intergovernmental Panel on Climate Change—to establish objective international standards and review processes for AI systems.

When it comes to regulating AI, however, most governments are locked in a desperate game of catch-up with the companies developing ever more advanced systems. “The major technology companies are on stage and part of the conversation in equivalent ways to the governments,” Bremmer said. “They’re not only the ones that are determining outcomes, but they’re even the ones that are defining the question set by virtue of the kinds of technologies they’re developing, and how they decide to create their platforms. And that has an enormous amount of power.”

Worth Your Time

  • A new project from anonymous web developers is trying to map each individual murder and abduction Hamas and allied terrorist groups committed in Israel on October 7. The site, “Mapping the Massacres,” provides an interactive, visual representation of the carnage on that day, resulting in a sea of red and black dots—representing people killed and kidnapped, respectively—across southern Israel. The site of the Nova Music Festival, where Hamas terrorists murdered concert-goers, is almost a solid block of red. “The October 7th Geo-visualization Project strives to provide a comprehensive representation of the atrocities committed by Hamas on that day,” reads the “About” info on the site. “This interactive map serves as a reflection and an educational tool, promoting awareness of the gravity of the horrors.” Where available, the site provides pictures and a brief bio of each victim, as well as their age. Some entries, like that of 26-year-old Chen Buchris, include details of what the victims were doing when they were killed. “Chen’s unit was assigned the sector of Nahal Oz to breach and rescue,” the entry reads. “Leading the charge, Chen and his team showcased incredible bravery. His unit members said they were heroes, especially for composure when facing a vast number of terrorists whom they effectively neutralized. Chen and 2 of his comrades were killed in the battles.”

Presented Without Comment

The Hill: Johnson on First Week as Speaker: “This is Like an F5 Hurricane” 

Toeing the Company Line

  • Reminder: The Dispatch is looking for an assistant editor to play a key role on our editing team. An obsessive focus on detail and accuracy is crucial, as is the ability to see the big picture and provide substantive and structural edits. Think you—or someone you know—might be a fit? Apply here.
  • In the newsletters: Michael and Sarah checked in on legal efforts to keep Trump off the ballot in 2024, while Nick marveled (🔒) at House Republicans’ effort to censure Rashida Tlaib and expel George Santos.
  • On the podcasts: Sarah, Jonah, and Steve are joined by Jamie Weinstein to discuss the Republican primary, the rise of antisemitism, and more on The Dispatch Podcast.
  • On the site: Drucker explores whether the GOP field will winnow, Gary Schmitt argues Biden should dump Kamala, and Nicholas Carl explains Iran’s axis of resistance. 

Let Us Know

Do you think the U.S. should develop a new national security branch, à la Space Force, to focus specifically on AI?

James Scimecca works on editorial partnerships for The Dispatch, and is based in Washington, D.C. Prior to joining the company in 2023, he served as the director of communications at the Empire Center for Public Policy. When James is not promoting the work of his Dispatch colleagues, he can usually be found running along the Potomac River, cooking up a new recipe, or rooting for a beleaguered New York sports team.

Mary Trimble is the editor of The Morning Dispatch and is based in Washington, D.C. Prior to joining the company in 2023, she interned at The Dispatch, in the political archives at the Paris Institute of Political Studies (Sciences Po), and at Voice of America, where she produced content for their French-language service to Africa. When not helping write The Morning Dispatch, she is probably watching classic movies, going on weekend road trips, or enjoying live music with friends.

Grayson Logue is the deputy editor of The Morning Dispatch and is based in Philadelphia, Pennsylvania. Prior to joining the company in 2023, he worked in political risk consulting, helping advise Fortune 50 companies. He was also an assistant editor at Providence Magazine and is a graduate student at the University of Edinburgh, pursuing a Master’s degree in history. When Grayson is not helping write The Morning Dispatch, he is probably working hard to reduce the number of balls he loses on the golf course.

