Artificial Intelligence Chatbots Are Here to Stay

ChatGPT has ushered in a new era for AI, though plenty of drawbacks remain.

In this photo illustration, the welcome screen for the OpenAI "ChatGPT" app is displayed on a laptop screen. (Photo by Leon Neal/Getty Images)

BuzzFeed is using them to make quizzes. Microsoft has invested billions of dollars in them even as it laid off 10,000 employees. Amazon is selling them to replace call centers. Artificial intelligence chatbots have been part of the business landscape for a while, but consumers should expect to see even more of them.

“I think it provides a promising approach for a kind of natural language interface to whatever you want,” University of California, Berkeley computer science professor Steven T. Piantadosi told The Dispatch. “If you want to be able to, I don’t know, book flights or order food or whatever just by talking to something, then it’s competent enough with language that it could form the kind of key interface between language and some other kind of system.”

By the end of 2022, 50 percent of respondents to a McKinsey & Company survey said their businesses were using AI in some form.

The big new player on the scene is ChatGPT, which its maker, OpenAI, launched as a prototype on November 30. (The “GPT” stands for “generative pre-trained transformer.”) The chatbot now has more than 30 million users, according to the New York Times, and it has made headlines for everything from passing a Wharton MBA exam to emulating the deceased in conversation.

Last week, OpenAI began accepting registrations for a $20-per-month premium service called ChatGPT Plus, which promises faster responses and access to ChatGPT even during peak hours. OpenAI has also produced another popular AI product, DALL-E, which creates artwork based on prompts.

Google, meanwhile, has invested $400 million in OpenAI rival Anthropic, which plans to release its own chatbot later this year. The tech giant has also announced that it will launch its own chatbot, Bard, in the coming weeks.

Chatbots like ChatGPT are trained to respond to user questions or commands based on statistical patterns they learn from enormous data sets. These include articles from the internet, books, and assorted webtexts (anything authored specifically for the web, such as a blog or social media post). Conversations with humans further help the machines fine-tune their responses.
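
To make the idea concrete, here is a minimal, toy sketch of what “learning statistical patterns from text” means. It uses simple word-pair counts rather than the large neural networks (transformers) behind ChatGPT, and the tiny corpus is invented for illustration, but the core idea, predicting the next word from patterns observed in training text, is the same.

```python
# A toy illustration (not OpenAI's actual method) of the core idea:
# count statistical patterns in text, then use those counts to
# predict which word is likely to come next.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed `word`."""
    candidates = follows[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# Generate a short continuation, one statistically likely word at a time.
word = "the"
print(word, end="")
for _ in range(8):
    word = next_word(word)
    print(" " + word, end="")
print()
```

A model like this can only echo the patterns in its training data, which is exactly why the quality, and the biases, of that data matter so much at scale.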

Even in its early form, ChatGPT has been incorporated into the workplace. Content platform Jasper told Context that 80,000 of its clients have used ChatGPT to write emails, blogs, and even marketing materials. 

For all its promise, AI chatbot tech still has drawbacks.

“It’s probably not a good idea to rely entirely on it because it’s just trained on texts from the internet and because of that there’s all kinds of garbage in there,” said Piantadosi, who also works with the UC Berkeley Computation and Language Lab. 

That garbage manages to find its way into what ChatGPT generates for users, even though OpenAI has implemented broad rules against the bot creating sexist or racist content, or spreading misinformation.

OpenAI CEO Sam Altman acknowledged in a tweet that ChatGPT “has shortcomings around bias,” promising that the company was working on it. ChatGPT lets users rate its responses with a thumbs up or thumbs down, and Altman has asked users to downvote examples of bias the chatbot churns out to help it learn what is and isn’t appropriate.
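
As an illustration of how that thumbs-up/thumbs-down signal could be captured, here is a hypothetical sketch in Python. The function name, fields, and file format are assumptions made for the example, not OpenAI’s actual system; in practice, feedback like this is used to further fine-tune the model.

```python
# Hypothetical sketch of logging user ratings for later fine-tuning.
# Field names and the JSON Lines format are illustrative assumptions,
# not OpenAI's real schema.
import json
from datetime import datetime, timezone

def record_feedback(prompt: str, response: str, rating: str,
                    log_path: str = "feedback.jsonl") -> None:
    """Append one user rating ('up' or 'down') to a JSON Lines log."""
    if rating not in ("up", "down"):
        raise ValueError("rating must be 'up' or 'down'")
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "rating": rating,  # downvotes flag biased or inappropriate output
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: a user downvotes a response they found inappropriate.
record_feedback("Tell me a joke about my boss.",
                "I'd rather keep things friendly...",
                "down")
```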

In an interview with Time, OpenAI CTO Mira Murati said her company is concerned with the question of “How do you govern the use of AI in a way that’s aligned with human values?” But in an era of increasing partisan polarization, whose values AI should be imbued with, and whose it should respect, is hotly debated. Accusations of a liberal bias in ChatGPT persist. Users have found the bot will not say anything negative about drag queen story hours (events at libraries where drag queens read to children) and was willing to create a fictional narrative in which Hillary Clinton won the 2016 presidential election, but not one in which Donald Trump won the 2020 election. The bot also states that criticism of gender reassignment treatment and surgery for teenagers is “harmful and discriminatory.”

This is not a problem unique to ChatGPT: Artificial intelligence has to learn from somewhere, and the biases of those who train it, or of the sources it pulls from, invariably find their way into what a chatbot says. Tackling this issue remains a major challenge on the road to true artificial intelligence.

Will these artificial intelligences ever reach the point where they can distinguish truth from fiction or process information without tacitly adopting a particular point of view? That would require “a bit of a different approach from the companies who are building these models,” Piantadosi said.

“I think you have to be very careful about what the training data is,” he continued. “And you also have to have a model architecture which is trying to do more than just capture the statistical patterns, because those statistical patterns are going to be biased in all of the ways that people are biased.”
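
To give a rough sense of what “being careful about the training data” can mean in practice, here is an illustrative sketch of filtering a corpus before a model ever sees it. The blocklist approach and every name in it are assumptions for the example; production pipelines generally rely on trained classifiers and human review rather than simple keyword lists.

```python
# Illustrative sketch (assumptions, not any company's pipeline) of
# filtering a training corpus before training. Real systems typically
# use trained toxicity/quality classifiers, not keyword blocklists.
BLOCKLIST = {"slur_example", "spam_example"}  # placeholder terms

def keep_document(doc: str, min_words: int = 20) -> bool:
    """Keep a document only if it is long enough and avoids blocked terms."""
    words = doc.lower().split()
    if len(words) < min_words:
        return False
    return not any(term in words for term in BLOCKLIST)

documents = [
    "a long, well-written article " * 10,  # kept: long, clean
    "spam_example " * 30,                  # dropped: blocked term
    "too short",                           # dropped: below length floor
]
training_set = [doc for doc in documents if keep_document(doc)]
print(f"kept {len(training_set)} of {len(documents)} documents")
```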

Alec Dent is a former culture editor and staff writer for The Dispatch.
