
Concerns Over Unfettered AI Development

Plus: Screenwriters go on strike.

Happy Wednesday! We have two main stories for you today, and neither is about politics. We hope you enjoy the break as much as we did.

Quick Hits: Today’s Top Stories

  • A Defense Department spokesman announced Tuesday the Pentagon will send 1,500 active-duty troops to the southern border next week ahead of an anticipated surge of migrants following the end of a pandemic-era border restriction. The troops, deployed for 90 days, are expected to perform administrative duties usually handled by Customs and Border Protection officers, freeing those officers up to work in the field.
  • The Justice Department announced nearly 300 arrests on Tuesday as part of an international operation targeting so-called “darknet” drug trafficking, especially of fentanyl and opioids. The operation, which lasted more than 18 months and spanned three continents, shut down an online drug marketplace and led to the seizure of more than $50 million in cash and virtual currencies—as well as nearly 2,000 pounds of drugs.
  • Pornhub—one of the most-visited pornography platforms in the United States—blocked users in Utah from accessing its site this week in protest of the state’s new age-verification law. The law, which was signed in March and went into effect yesterday, requires users to verify they are over 18 years old with a “digitized identification card” before they can enter the site. Pornhub claimed that, without consistent enforcement, the measure would only drive traffic to other, less-secure sites.
  • Republican Oklahoma Gov. Kevin Stitt signed a law on Monday making it a felony to perform gender-transition procedures on minors, as well as provide them with puberty-blocking drugs or certain hormones. Oklahoma is the latest Republican-led state to limit such procedures.
  • U.S. Attorney Andrew Luger announced this week a 36-year-old Minnesota man had been arrested and charged with arson for allegedly starting fires at two Minnesota mosques last week that resulted in tens of thousands of dollars of damage. The man—who reportedly suffers from bipolar disorder—was also captured on surveillance video vandalizing Democratic Rep. Ilhan Omar’s district office in January.
  • The Labor Department reported Tuesday job openings fell from 10 million in February to 9.6 million in March—the lowest level since April 2021—indicating the demand for workers may be cooling ahead of the Federal Reserve’s decision on interest rates later today. The quits rate—the percentage of workers who quit their job during the month—ticked down to 2.5 percent, while the number of layoffs and discharges edged up slightly to 1.8 million.
  • Texas law enforcement officials on Tuesday arrested the man who allegedly shot and killed five of his neighbors over the weekend, ending a four-day manhunt for the suspect, who is believed to be in the country illegally. Authorities confirmed Tuesday the alleged shooter’s wife had obtained a protective order against him last year, claiming he beat her.
  • Washington Democratic Attorney General Bob Ferguson launched his campaign for governor on Tuesday, one day after Democratic Gov. Jay Inslee announced he would not run for reelection. Meanwhile, former Maryland Gov. Larry Hogan, a Republican, said Tuesday he would not run for the Senate seat being vacated by retiring Democratic Sen. Ben Cardin, and former Nevada state lawmaker Jim Marchant—a Republican who ran a failed, Trump-backed campaign for secretary of state in 2022—announced Tuesday he will challenge sitting Democratic Sen. Jacky Rosen for her seat in 2024.
  • Canadian singer-songwriter Gordon Lightfoot died on Monday at the age of 84. The musician penned Top 40 hits for other artists in the 1960s and 1970s before topping the charts himself with songs like “If You Could Read My Mind” and “The Wreck of the Edmund Fitzgerald.”

Inevitable AI

The home page of the artificial intelligence OpenAI website, displaying its ChatGPT robot in March 2023. (Photo by MARCO BERTORELLO/AFP via Getty Images)

To demonstrate the flaws of current artificial intelligence tools, we could start with the thousands of people fooled by a fake photo of the pope wearing a slick white puffer jacket. Or the AI-generated news articles full of plagiarism and falsehoods. Or the artists seeing their work scraped from the internet and regurgitated without compensation. Or the Afghan woman whose refugee claim was denied after a machine translation introduced errors into her paperwork.

But why not start with the people who helped develop these tools? “If I hadn’t done it, somebody else would have,” artificial intelligence pioneer Geoffrey Hinton told the New York Times this week, embracing the tech worker tradition of quitting a major company—in Hinton’s case, Google—to loudly warn of the dangers of the technology he helped build. “It is hard to see how you can prevent the bad actors from using it for bad things,” he said of AI technology. “I don’t think they should scale this up more until they have understood whether they can control it.”

Tech types have long had an unfortunate habit of both telling the public not to fret about what they’re up to and describing their work as a new Manhattan Project or the forging of Sauron’s One Ring. But in recent months—as several generative AI tools have hit the market—the chorus of developers warning they might not be able to control what they unleash has grown louder, and a vocal minority has urged the industry to hit pause on AI development. We’re unlikely to see a moratorium—whatever that would look like—but regulators and litigants have begun tackling more immediate problems the technology has wrought, like copyright infringement and misinformation.

Existing AI tools are already producing impressive results. ChatGPT, for example, draws on reams of training data to predict what word should come next in a sentence and can hold convincing conversations, answer complex queries, and improve programmers’ code. Studies suggest other AI models can improve cancer detection. And while many teachers have caught students using AI to cheat, others are incorporating these tools into their classes. English teacher Kelly Gibson told Wired she’s having students generate text using AI and then edit it themselves, fixing false details and clunky style. “I want AI chatbots to become like calculators for writing,” she said.
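For readers curious what “predicting the next word” looks like in practice, here is a minimal sketch in Python. It is a toy bigram counter, not ChatGPT’s actual architecture (large language models use neural networks trained on vastly more text), and the corpus, names, and sample output are our own illustrative inventions.

```python
import random
from collections import Counter, defaultdict

# A toy corpus; real models train on hundreds of billions of words.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (a "bigram" model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()), k=1)[0]

# Generate a short continuation, one predicted word at a time.
text = ["the"]
for _ in range(6):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # e.g., "the cat ate the mat . the"
```

The difference between this sketch and a model like GPT-4 is scale and representation, not the basic objective: given everything so far, guess what plausibly comes next.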

But even more sophisticated AI models are coming, and some tech workers are worried they will flood the airwaves with propaganda, automate jobs—not necessarily a disaster, Scott argued in February—and even pursue their own goals. Think villainous robots rewriting their own code and attacking humans to keep us from pulling the plug. Misinformation and a rearranged job market are more likely than sci-fi movies come to life, but thousands of researchers, pundits, and tech leaders have signed an open letter published in late March warning that AI poses “profound risks to society and humanity” and calling for a six-month development pause—government enforced, if necessary—while AI labs plan safety protocols. Signatories include Apple cofounder Steve Wozniak, Elon Musk, Siri designer Tom Gruber—and even Emad Mostaque, head of AI developer Stability AI. Computer scientist Eliezer Yudkowsky suggested in a Time op-ed governments should enforce a moratorium with airstrikes, if necessary—a fringe position even among AI doomers, but an example of how much of a threat some feel AI poses.

Companies developing AI technologies haven’t exactly embraced such a moratorium, and governments haven’t coordinated broad enforcement efforts. Asked if AI is dangerous, President Joe Biden was noncommittal: “It remains to be seen. Could be.”

Meanwhile, other researchers say the hyperbolic hypotheticals featured in the open letter elide the very real problems already in existence. A team testing GPT-4 before the language model’s launch found it could coach users on buying illegal guns or making dangerous compounds from household materials. Developers installed some safeguards before launch, but the exercise made clear they didn’t think of everything. And given how good humans are at finding unintended uses for fancy new tools, they couldn’t possibly. 

GPT tools have been used to write phishing emails and improve code for cybercriminals, and AI language models frequently “hallucinate,” blithely asserting falsehoods and concocting evidence to support them. It’s not like we had solved the problem of lies on the internet before AI came along, but the technology has certainly made convincing false content easier to produce, and such content has already been featured in political advertising.

Then there are the bizarre turns chats with AI tools can take. In one particularly vivid example from a few months ago, tech journalist Kevin Roose held a two-hour conversation with Microsoft Bing’s chatbot that ended with the bot telling Roose it loved him and demanding he leave his wife. Despite Roose’s efforts to change the subject, the bot insisted “you’re married, but you love me.” The chatbot’s lovesick ramblings didn’t derail Roose’s marriage (we assume), but it’s easy to imagine chatbots amplifying harmful tendencies, such as suicidal thoughts in people who are depressed.

As artificial intelligence is increasingly incorporated into everyday life, some artists, companies, and regulators are starting to take umbrage at how the tools are trained in the first place—details AI developers prefer to keep under wraps. OpenAI—the company behind ChatGPT—says it trained its model “using both publicly available data (such as internet data) and data licensed from third-party providers,” but that it’s hiding other details due to “the competitive landscape and the safety implications of large-scale models like GPT-4.” In general, however, these models rely on vast troves of data often obtained without asking creators’ consent. A Washington Post analysis of one AI training data set found information from news websites trawled without permission and pirated ebooks from a website since shut down by the Department of Justice.

Italian regulators in March banned ChatGPT until its developers could demonstrate they’d implemented changes like blocking underage users and allowing people to submit requests to remove their data. Access in Italy was restored last week, but ChatGPT maker OpenAI may yet have choppy regulatory waters ahead—Canada’s privacy commissioner last month launched an investigation of the developer over a complaint alleging “the collection, use and disclosure of personal information without consent.” A group of artists in the United States has sued Midjourney and Stability AI—services that use artificial intelligence to generate images—alleging they “violated the rights of millions of artists” by using their work to develop their products without permission. Getty Images—which provides stock images and editorial photographs—has also sued Stability AI, pointing out the company’s Stable Diffusion program occasionally produces pictures with visible Getty Images watermarks.

Lawsuits and regulation may alter how AI companies operate, but in the meantime developers are making it up as they go along, trying to predict and prevent harms without compromising their competitive edge. Industry leaders—like Sam Altman, CEO of OpenAI—are both eager for profits and excited for the potential benefits of ever more sophisticated AI, including an artificial general intelligence (AGI) mirroring the wide range of human intellect. “The upside of AGI is so great,” Altman wrote in February. “We do not believe it is possible or desirable for society to stop its development forever.”

Writers on Strike—But Nobody Told Us! 

During the 2007-2008 writers strike, Conan O’Brien—then host of Late Night—killed airtime by seeing how long he could spin his wedding ring on top of his desk. In the middle of the 1988 strike, David Letterman decided to get a shave on TV. Jimmy Kimmel, Stephen Colbert, and Jimmy Fallon aired reruns last night, but we’re eager to see what the now-writerless late-night hosts come up with if this drags on long enough.

Yesterday, screenwriters went on strike for the first time in more than 15 years. As picket lines go up in New York and Los Angeles, the walkout will slowly begin to affect the content engines of studios, television networks, and streaming companies—but how much of the disruption viewers see will depend on how long it takes the two sides to strike a deal.

The Writers Guild of America (WGA) called 11,500 of its members out on strike Tuesday after 97 percent of voting members voted to authorize the move. The strike went into effect after the WGA’s 2020 contract with the Alliance of Motion Picture and Television Producers (AMPTP)—the trade association that negotiates for movie and television production companies like Netflix, Amazon, Apple, and Disney—expired. 

The move came after six weeks of failed negotiations between the guild and the production companies. “The studios’ responses have been wholly insufficient given the existential crisis writers are facing,” WGA said in a statement. “The companies’ behavior has created a gig economy inside a union workforce, and their immovable stance in this negotiation has betrayed a commitment to further devaluing the profession of writing.”

Screenwriting, once viewed as a glamorous and reasonably lucrative job, has suffered in the era of streaming. “If your first job was a streaming job you really had no idea what you were missing out on,” Brittani Nichols, a writer on the Emmy award-winning show Abbott Elementary, told Bloomberg. “When you get to the mid level, you assumed you would make a career out of this and be stable. But then you get there and you aren’t.” 

If public comments are any indication, the two sides remain far apart. The writers are pushing for terms that’d collectively net them an additional $429 million annually, while the AMPTP reportedly countered with a package equivalent to an $86 million annual increase—though all these numbers are hypothetical and contingent on viewership trends. The enormous gap will likely narrow, but it is indicative of the angst with which each side is responding to the technological changes turning the industry on its head.

The shift from broadcast to streaming has meant fewer jobs for screenwriters, often with less pay. Gone are the days when a writer could earn a steady income working on 25 episodes of Friends over the course of a year; shorter seasons and the use of “mini-rooms” have allowed production studios to employ writers for shorter stretches. Meanwhile, the WGA argues writer residuals—essentially royalty payments—have not been updated for the post-syndication and post-DVD world. The guild is asking for a transparent pay-per-view formula for residuals, but streaming companies have famously been reluctant to make their viewership data public.
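The guild hasn’t published the formula it wants, but the basic shape of a viewership-based residual is easy to sketch. Every name and number below is hypothetical (ours, not the WGA’s or the studios’), purely to show what a transparent per-view payout could look like.

```python
# Purely hypothetical rates, invented for illustration; neither the WGA
# nor the AMPTP has proposed these numbers.
BASE_RESIDUAL = 1_000.00    # flat payment per episode
RATE_PER_1K_VIEWS = 0.50    # bonus per 1,000 reported streams

def episode_residual(views: int) -> float:
    """Residual owed for one episode, given the platform's reported streams."""
    return BASE_RESIDUAL + (views / 1_000) * RATE_PER_1K_VIEWS

# An episode streamed 2 million times would pay out:
print(f"${episode_residual(2_000_000):,.2f}")  # $2,000.00
```

The catch, and the reason transparency leads the guild’s ask, is the views input: streamers treat that number as a trade secret, so no formula of this shape can be audited today.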

Writers are also calling for contract protections against the use of new artificial intelligence tools, including prohibitions on writers’ material being used to train AI. “This is an existential fight for the future of the business of writing,” Laura Jacqmin, a TV writer for shows on Epix and Peacock, told The New Yorker. “If we do not dig in now, there will be nothing to fight for in three years.”

Across the negotiating table, the production companies believe some of the WGA demands—like requiring a minimum of six writers even for smaller shows or for successful shows (e.g., The White Lotus) written by just one person—are unreasonable. Plus, they’ve argued now is not the time to increase costs. Networks have seen profits plummet over the past year, and entertainment giants like Disney and Warner Bros. Discovery have cut thousands of employees.

So, what will the strike mean for your favorite TV shows? In the short term, probably not much—other than for late-night shows. The labor strife comes as most series have completed the writing for their current seasons, and streaming companies have been preparing to weather the storm. “We have a large base of upcoming shows and films from around the world that we could probably serve our members better than most,” Ted Sarandos, co-CEO of Netflix, said on an April earnings call when asked about the potential of a strike. Companies have also reportedly been stockpiling scripts in recent months. 

The 2007-2008 strike lasted for 100 days, disrupting both late-night TV and shows still in development. The Office and other sitcoms had to shorten their seasons, and films like the James Bond entry Quantum of Solace were shot with essentially skeleton scripts—Daniel Craig said the strike “f—ed” the movie. But there were some silver linings, as well. Breaking Bad—in its first season at the time—cut two planned episodes during the strike, and Vince Gilligan, the series’ showrunner, later revealed he had planned to kill off either Jesse Pinkman or Hank Schrader in one of them. Should the current strike last into the summer, we may see similar effects—for good and for ill—on the fall TV season.

One segment of show business that could see a boost from the strike is reality television—one of the few formats that doesn’t need writers. In 2007, a floundering reality TV show where people competed for a job under the wing of a certain well-known business executive found a second life filling in the programming gaps left by the work stoppage. And we all know how that ended.

Worth Your Time

  • It’s been three months since twin earthquakes flattened much of southern Turkey. How much of the devastation could have been avoided if President Recep Tayyip Erdoğan’s ruling party had better enforced the law? “Slow violence is not slow in Turkey,” Justus Links writes for N+1 magazine. “Death comes quickly, in large numbers, and without accountability on the part of those in power. The government implemented stricter building codes after the 1999 earthquake—setting higher standards for materials and engineering calculations to ensure that buildings would withstand future earthquakes—but it has systematically failed to enforce them. Builders have been allowed to hire private inspectors who sign off on substandard construction, and Erdoğan himself has issued numerous ‘amnesties,’ legalizing unregistered buildings in exchange for a fine. How many could have been rescued had AFAD been better funded, or had alternative rescue efforts not been blocked by the state? What would the earthquake-stricken landscape look like if the AKP-connected construction industry had had to abide by building codes?” 

Presented Without Comment

Washington Post: Trump, Irritated by Questions About Manhattan Probe, Sought Reporter’s Removal

“Former president Donald Trump got so irritated with an NBC reporter’s questions about a Manhattan criminal investigation that he grabbed the journalist’s phones and demanded that he be removed from an airplane interview, according to audio of the exchange obtained by the Washington Post.

‘I don’t want to talk to you,’ Trump said. ‘You’re not a nice guy.’

When [NBC reporter Vaughn] Hillyard presses on, Trump is heard demanding, ‘Let’s go, get him out of here. Outta here! Outta here!’ and then asking if a phone on the table is Hillyard’s.

‘Whose is this?’ Trump asked.

‘That one’s mine, too,’ Hillyard said, referring to another phone.

In the audio, a soft thud can be heard as Trump tossed the phones to the side.”

Toeing the Company Line

  • In the newsletters: Haley takes a look at (🔒) the state of debt ceiling negotiations ahead of the new June 1 deadline, Sarah predicts (🔒) the extinction of the Manchins and Sinemas of the world, and Nick pens a (🔒) half-hearted defense of CNN’s decision to host a town hall with Donald Trump.  
  • On the podcasts: Screenwriter and satirist Rob Long joins Jonah to discuss the Hollywood writers’ strike.
  • On the site today: Jonah argues that the loudest and most extreme voices in the GOP are chasing away sensible voters, and Kevin explains why ending a payroll-tax cap won’t save Social Security.

Let Us Know

As the TMD team has grown, you may have noticed an uptick in the number of topics we’re able to cover—and in the length of the newsletter. Which of the following comes closest to your view?

  1. TMD is a good length as-is, and I regularly make it through the entire newsletter.
  2. I don’t always make it through the entire newsletter, but I appreciate the additional depth and context TMD provides.
  3. TMD is probably too long, but I’m able to quickly pick out the sections that interest me most.
  4. TMD is too long, and its length detracts from the reader experience.

(And, yes, we realize this is not a scientific survey because, well, everyone who answers will have made it to the end of this newsletter!)

Declan Garvey is the executive editor at The Dispatch and is based in Washington, D.C. Prior to joining the company in 2019, he worked in public affairs at Hamilton Place Strategies and market research at Echelon Insights. When Declan is not assigning and editing pieces, he is probably watching a Cubs game, listening to podcasts on 3x speed, or trying a new recipe with his wife.

Esther Eaton is a former deputy editor of The Morning Dispatch.

Mary Trimble is the editor of The Morning Dispatch and is based in Washington, D.C. Prior to joining the company in 2023, she interned at The Dispatch, in the political archives at the Paris Institute of Political Studies (Sciences Po), and at Voice of America, where she produced content for their French-language service to Africa. When not helping write The Morning Dispatch, she is probably watching classic movies, going on weekend road trips, or enjoying live music with friends.

Grayson Logue is the deputy editor of The Morning Dispatch and is based in Philadelphia, Pennsylvania. Prior to joining the company in 2023, he worked in political risk consulting, helping advise Fortune 50 companies. He was also an assistant editor at Providence Magazine and is a graduate student at the University of Edinburgh, pursuing a Master’s degree in history. When Grayson is not helping write The Morning Dispatch, he is probably working hard to reduce the number of balls he loses on the golf course.
