Are AI Experts Worried About Its Threat to Humanity?

OpenAI CEO Sam Altman testifies at a Senate hearing on May 16, 2023. (Photo by Nathan Posner/Anadolu Agency via Getty Images)

At the end of May, Sam Altman of OpenAI, Sundar Pichai of Google, and other artificial intelligence (AI) leaders visited Washington. Shortly after, virtually every major news outlet ran a headline about experts warning that AI could eventually cause the extinction of humanity.

What should we make of such warnings?

What worries some experts?

A 2022 survey of recently published researchers at two major AI conferences found that 82 percent were concerned about risks from “highly advanced AI,” and 69 percent thought AI safety should be a greater priority. Still, the median respondent estimated just a 15 percent chance that AI will be on balance bad for humanity, including a 5 percent chance of it leading to human extinction. Other surveys, including a 2016 survey of the same population and a 2013 survey of the top 100 most-cited AI researchers, have produced comparable results.
