Has a Google AI Become Sentient?
Searching for answers as an engineer claims a computer program is alive.
Hello, everyone. Happy Thursday!
Last weekend, an interesting story was published about a Google engineer who was suspended after publicly claiming that one of the company’s artificial intelligence (AI) systems has become sentient. Even if things get a bit esoteric, it’s worth discussing because these issues are at the core of some of our most important innovations. But before we get into the details, we should level-set the conversation.
You’re going to need to know about “chatbots,” programs that simulate human conversations. A person typically engages a chatbot either by typing or speaking. You’ve likely interacted with a chatbot when calling your bank, your internet provider, or perhaps even when making a reservation at a restaurant. While their use is proliferating, many of these programs are relatively unsophisticated, functioning largely as dressed-up decision trees in which the next step in the “conversation” is decided by a simple choice. (“What can I help you with today? Is there a problem with your service? Do you have a question about your bill? Please briefly describe your issue and I’ll do my best to help.”) But some chatbots are very sophisticated, and the story referenced above is about one such program.
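To make the “dressed-up decision tree” idea concrete, here is a minimal sketch of how such a chatbot might be wired up. The node names and menu text are hypothetical, not taken from any real product; the point is simply that each reply is chosen by following a branch, not by understanding language.

```python
# A toy decision-tree chatbot. Each node has a prompt to show the user
# and a map from recognized choices to the next node. Unrecognized
# input just repeats the current node's prompt.

tree = {
    "start": {
        "prompt": "What can I help you with today? (service / bill)",
        "options": {"service": "service_issue", "bill": "billing_question"},
    },
    "service_issue": {
        "prompt": "Please briefly describe the problem with your service.",
        "options": {},
    },
    "billing_question": {
        "prompt": "Please have your account number ready for billing questions.",
        "options": {},
    },
}

def respond(node, choice):
    """Follow the chosen branch if it exists; otherwise stay put."""
    next_node = tree[node]["options"].get(choice, node)
    return next_node, tree[next_node]["prompt"]

node, prompt = respond("start", "bill")
print(prompt)  # the billing prompt
```

Every possible “conversation” is enumerated in advance by the tree, which is why these bots feel rigid the moment you say something off-script.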
In 2021, Google announced a new tool called LaMDA (Language Model for Dialogue Applications). Here’s Google’s description:
LaMDA’s conversational skills have been years in the making. Like many recent language models, including BERT and GPT-3, it’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next.
Translation: LaMDA is trained on billions of words from across the internet and is very good at understanding how words relate to one another and predicting how they can be arranged to form new, coherent ideas. The company blog explains further:
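LaMDA’s Transformer architecture is vastly more powerful than this, but the core idea of predicting the next word from patterns in text can be illustrated with a toy model: count which word most often follows each word in a tiny corpus, then predict accordingly. The corpus here is made up for illustration.

```python
# A toy next-word predictor (emphatically NOT how LaMDA works internally,
# but the same basic objective: given the words so far, predict the next one).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# For each word, count the words that immediately follow it.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> cat
```

Scale this intuition up to billions of words, replace the simple counts with a neural network that attends to entire passages at once, and you have the rough shape of how a model like LaMDA produces fluent continuations.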