
What is Google LaMDA and why do some people think it’s Sentient?


LaMDA has been in the news after a Google engineer claimed it is sentient because its answers allegedly imply that it understands what it is.

The engineer also suggested that LaMDA communicates that it has fears, just as humans do.

What is LaMDA, and why do some people think it is conscious?

Language model

LaMDA is a language model. In natural language processing, language models analyze the use of language.

Fundamentally, it is a mathematical function (or statistical tool) that describes the possible outcomes associated with predicting the next word in a sequence.

It can also predict the next word, and even what the following sequence of paragraphs might be.
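To make that concrete, here is a toy sketch (in Python, and nothing like LaMDA's actual code) of what "predicting the next word" means: the model maps a context to a probability for each candidate next word, then picks or samples from that distribution. The probability table below is invented purely for illustration.

```python
import random

# Toy "language model": a hand-made table of next-word probabilities.
# A real model learns these probabilities from billions of words.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def predict_next_word(context: tuple[str, str]) -> str:
    """Return the most probable next word for a two-word context."""
    probs = NEXT_WORD_PROBS.get(context, {})
    if not probs:
        return "<unknown>"
    return max(probs, key=probs.get)

def sample_next_word(context: tuple[str, str]) -> str:
    """Sample a next word according to the probability distribution."""
    probs = NEXT_WORD_PROBS.get(context, {})
    if not probs:
        return "<unknown>"
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

print(predict_next_word(("the", "cat")))  # -> "sat"
```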

OpenAI’s GPT-3 language generator is an example of a language model.

With GPT-3, you can enter a topic and instructions to write in the style of a specific author, and it will generate, for example, a short story or an essay.
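For illustration, here is roughly what that looks like in code, assuming the pre-1.0 openai Python client that was current when GPT-3 launched; the prompt, model name, and API key are placeholders, not recommendations.

```python
# A minimal sketch of prompting GPT-3, assuming the pre-1.0 "openai"
# Python client. The engine name and prompt are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3 model available at the time
    prompt="Write a short story about a lighthouse in the style of Ernest Hemingway.",
    max_tokens=200,
)

print(response.choices[0].text)
```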

LaMDA is different from other language models because it is trained on dialogue rather than text.

Whereas GPT-3 focuses on generating text, LaMDA focuses on generating dialogue.

Why this is a big deal

What makes LaMDA a remarkable breakthrough is that it can generate free-form conversation that is not constrained by the parameters of task-based responses.

Conversational language models must understand things like multimodal user intent, reinforcement learning, and recommendations so that conversations can jump between unrelated topics.

Built on Transformer Technology

Like other language models, such as MUM and GPT-3, LaMDA is built on the Transformer neural network architecture for language understanding.

Google writes about Transformers:

“This architecture produces a model that can be trained to read many words (e.g., a sentence or paragraph), pay attention to the relationships between those words, and then predict what word it thinks will come next.”
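The "pay attention to the relationships between those words" part refers to the attention mechanism. The sketch below is a bare-bones NumPy version of scaled dot-product attention; the random vectors stand in for learned word representations, and real Transformers stack many more layers on top of this.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each word (a row of Q) attends to every word (rows of K/V)
    and returns a weighted mix of their value vectors."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # relationships between words
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over words
    return weights @ V                                  # context-aware word vectors

# Four "words", each represented by an 8-dimensional vector (random stand-ins).
rng = np.random.default_rng(0)
words = rng.normal(size=(4, 8))
contextualized = scaled_dot_product_attention(words, words, words)
print(contextualized.shape)  # (4, 8): one context-aware vector per word
```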

Why LaMDA seems to understand dialogue

BERT is a model trained to understand the meaning of ambiguous phrases.

LaMDA is a model trained to understand dialogue context.

This quality of understanding context allows LaMDA to keep up with the flow of the conversation and provide a sense that it is listening and responding exactly to what is being said.

It is trained to understand whether a response makes sense for the given context and whether the response is specific to that context.

Google explains it this way:

“…Unlike most other language models, LaMDA was trained on dialogue. During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language. One of those nuances is sensibleness. Basically: does the response to a given conversational context make sense?

Satisfying responses also tend to be specific, by relating clearly to the context of the conversation.”

LaMDA is algorithm-based

Google made the LaMDA announcement in May 2021.

The official research paper was published later, in February 2022 (LaMDA: Language Models for Dialog Applications, PDF).

The research paper documents how LaMDA was trained to learn how to conduct conversations using three metrics:

  • Quality
  • Safety
  • Groundedness

Quality

The quality metric is itself derived from three sub-metrics:

  1. Sensibleness
  2. Specificity
  3. Interestingness

The research paper states:

“We collect annotated data that describes how sensible, specific, and interesting a response is for a multi-turn context. We then use these annotations to fine-tune a discriminator to re-rank candidate responses.”
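As a rough illustration of what "re-ranking candidate responses" could look like, here is a hedged sketch: in the real system the sensibleness, specificity, and interestingness scores would come from fine-tuned discriminators, and the equal weighting used here is purely an assumption.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    sensibleness: float     # does it make sense in context? (0-1)
    specificity: float      # is it specific to this conversation? (0-1)
    interestingness: float  # is it insightful or engaging? (0-1)

def ssi_score(c: Candidate) -> float:
    """Combine the three quality signals into one ranking score.
    Equal weighting is an assumption for illustration only."""
    return (c.sensibleness + c.specificity + c.interestingness) / 3

def rerank(candidates: list[Candidate]) -> list[Candidate]:
    """Order candidate responses so the highest-quality one comes first."""
    return sorted(candidates, key=ssi_score, reverse=True)

candidates = [
    Candidate("OK.", sensibleness=0.9, specificity=0.1, interestingness=0.1),
    Candidate("Her later sculptures use weathered wood beautifully.",
              sensibleness=0.9, specificity=0.8, interestingness=0.7),
]
best = rerank(candidates)[0]
print(best.text)
```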

Safety

Google researchers used crowd workers from diverse backgrounds to help flag unsafe responses.

This labeled data is used to train LaMDA:

“We then use these labels to fine-tune the discriminator to detect and eliminate unsafe responses.”
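Conceptually, that filtering step might look something like the sketch below; the safety classifier and the threshold are invented stand-ins for the discriminator trained on crowd-worker labels.

```python
def filter_unsafe(candidates, safety_classifier, threshold=0.9):
    """Keep only responses the (hypothetical) safety classifier deems safe.

    safety_classifier(text) is assumed to return the probability that a
    response is safe, learned from crowd-worker labels."""
    return [c for c in candidates if safety_classifier(c) >= threshold]

# Toy stand-in for a trained safety discriminator.
def toy_safety_classifier(text: str) -> float:
    return 0.2 if "insult" in text.lower() else 0.95

print(filter_unsafe(["Hello!", "That is an insult."], toy_safety_classifier))
```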

Groundedness

Groundedness is the training process that teaches LaMDA to research the validity of facts, meaning that its answers can be verified against “known sources.”

This is important because, according to the research paper, neural language models produce statements that can appear correct while actually being incorrect and lacking support from facts in known sources of information.

Crowd workers use tools like search engines (information retrieval systems) to fact-check answers so that artificial intelligence can learn to do the same.

The researchers wrote:

“We find that augmenting model output with the ability to use external tools, such as an information retrieval system, is a promising approach to achieve this.

Therefore, we collect data in a setting where crowdworkers can use external tools to research factual claims, and train the model to mimic their behavior.”

LaMDA is trained with human examples and raters

Part 3 of the research paper describes how LaMDA was trained using a set of documents, dialogues, and utterances numbering in the billions, consisting of 1.56 trillion words.

Section 4.2 documents how humans evaluated LaMDA’s responses. The ratings are feedback that tells LaMDA when it’s doing well and when it isn’t.

Human raters use an information retrieval system (a search engine) to verify the answers, rating whether they are useful, correct, and truthful.

LaMDA training uses a search engine

Section 6.2 describes how LaMDA receives a question and generates an answer. After generating the response, it executes a search query to verify its accuracy and, if the answer is incorrect, revises it.

The research paper illustrates the process of receiving a question, generating a draft response, researching the response, and then updating it with correct facts:

  1. “User: What do you think of Rosalie Gascoigne’s sculptures?
  2. LAMDA-BASE: They’re great, and I love how her work changed through her life. I like her later works more than her earlier ones. Her influence is also super interesting – did you know she was one of the artists who inspired Miró?”

The problem with that answer is that it is factually incorrect. So LaMDA executes a search query and picks facts from the top results.

It then responds with a factually updated response:

“Oh wow, her life journey is so inspiring. Did you know she was a practitioner of Japanese flower arrangement before turning to sculpture?”

Notice the “Oh wow” at the start of the answer; it’s a manner of speaking learned from the way humans speak.

It sounds like a person talking, but it is merely mimicking a speech pattern.
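The generate-research-revise loop illustrated above can be sketched in a few lines; the base_model, search, and revise functions below are hypothetical stand-ins for the components the paper describes, not real APIs.

```python
def grounded_reply(question, base_model, search, revise):
    """Draft a response, fact-check it against retrieved sources,
    and revise it if the facts don't hold up.

    base_model(question)    -> draft text
    search(query)           -> snippets from an information-retrieval system
    revise(draft, snippets) -> corrected text
    All three are hypothetical stand-ins, not real APIs."""
    draft = base_model(question)
    evidence = search(question)  # e.g. top results from a search engine
    return revise(draft, evidence)

# Toy stand-ins so the sketch runs end to end.
base = lambda q: "She was one of the artists who inspired Miro."
retrieve = lambda q: ["Rosalie Gascoigne practised Japanese flower arranging before turning to sculpture."]
fix = lambda draft, evidence: evidence[0] if evidence else draft

print(grounded_reply("What do you think of Rosalie Gascoigne's sculptures?", base, retrieve, fix))
```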

Language models simulate human responses

I asked Jeff Coyle, co-founder of MarketMuse and an AI expert, for his take on the claim that LaMDA is sentient.

Jeff shares:

“State-of-the-art language models will continue to do better at simulating perception.

Talented operators can drive chatbot technology to hold conversations that emulate text that could be sent by a living human.

This creates a confusing situation where something feels very human and the model can “lie” and say things that mimic perception.

It can lie and say, so to speak, “I feel sad,” “I feel happy,” or “I feel pain.”

But it is copying, imitation.”

LaMDA is designed to do one thing: to provide conversational responses that are meaningful and specific to the conversational context. This can make it appear sentient, but as Jeff said, it’s essentially lying.

So while LaMDA provides answers that can feel like a conversation with a sentient being, LaMDA is simply doing what it was trained to do: give responses that are sensible for the context of the conversation and highly specific to that context.

Section 9.6 of the research paper, “Impersonation and Anthropomorphization,” explicitly acknowledges that LaMDA imitates humans.

This level of imitation may lead some people to anthropomorphize LaMDA.

They write:

“Finally, it is important to acknowledge that LaMDA’s learning is based on imitating human performance in conversation, similar to many other dialogue systems… A path towards high-quality, engaging conversation with artificial systems that may eventually be indistinguishable in some aspects from conversation with a human is now quite likely.

Humans may interact with systems without knowing that they are artificial, or anthropomorphize the system by ascribing some form of personality to it.”

The question of sentience

Google aims to build an AI model that can understand text and language, recognize images, and generate conversations, stories, or images.

Google is actively working to develop this AI model, called the Pathways AI Architecture, which it describes on its blog, The Keyword:

“Today’s AI systems are often trained from scratch for each new problem… Rather than extending existing models to learn new tasks, we train each new model from nothing to do one thing and one thing only…

The result is that we end up developing thousands of models for thousands of individual tasks.

Instead, we’d like to train one model that can not only handle many separate tasks, but also draw upon and combine its existing skills to learn new tasks faster and more effectively.

That way, what a model learns by training on one task – say, learning how aerial images can predict the elevation of a landscape – could help it learn another task – say, predicting how flood waters will flow through that terrain.”

Pathways AI is designed to learn previously untrained concepts and tasks, just like humans, regardless of format (visual, audio, text, dialogue, etc.).
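As a rough sketch of the "one model, many tasks" idea (and nothing like Google's actual Pathways code), a shared backbone with task-specific heads shows how representations learned once can be reused across tasks; every name and dimension below is invented for illustration.

```python
import numpy as np

class SharedBackboneModel:
    """One set of shared weights, many task-specific heads.
    A rough illustration of 'one model for many tasks';
    Pathways itself is far more sophisticated."""

    def __init__(self, input_dim: int, hidden_dim: int):
        rng = np.random.default_rng(0)
        self.shared = rng.normal(size=(input_dim, hidden_dim))  # reused by every task
        self.heads: dict[str, np.ndarray] = {}

    def add_task(self, name: str, output_dim: int) -> None:
        rng = np.random.default_rng(abs(hash(name)) % 2**32)
        self.heads[name] = rng.normal(size=(self.shared.shape[1], output_dim))

    def predict(self, name: str, x: np.ndarray) -> np.ndarray:
        features = np.tanh(x @ self.shared)  # skills learned once, shared by all tasks
        return features @ self.heads[name]

model = SharedBackboneModel(input_dim=16, hidden_dim=32)
model.add_task("elevation", output_dim=1)  # e.g. predict terrain elevation
model.add_task("flooding", output_dim=1)   # e.g. predict flood flow
x = np.zeros((1, 16))
print(model.predict("elevation", x).shape, model.predict("flooding", x).shape)
```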

Language models, neural networks, and language model generators typically focus on one thing, such as translating text, generating text, or recognizing content in images.

Systems like BERT can identify meaning in ambiguous sentences.

Similarly, GPT-3 does only one thing: generate text. It can create a story in the style of Stephen King or Ernest Hemingway, or in a combination of the two authors’ styles.

Some models can handle more than one modality, such as processing text and images at the same time (LIMoE). There are also multimodal models, like MUM, that provide answers from different types of information and across languages.

But none of them are at the level of Pathways.

LaMDA simulates human dialogue

The engineer who claims that LaMDA is sentient has said in a tweet that he cannot support these claims, and that his statements about its personhood and emotions are based on his religious beliefs.

In other words: these claims are not supported by any evidence.

The evidence we do have is stated plainly in the research paper, which says that LaMDA’s skill at imitation is so high that people may anthropomorphize it.

The researchers also wrote that bad actors could use the system to impersonate a real person and trick someone into thinking they were talking to a specific person.

“… an adversary may seek to tarnish the reputation of others, exploit their position, or spread misinformation by using this technique to mimic a particular individual’s conversational style.”

As the research paper makes clear: LaMDA was trained to simulate human conversations and nothing more.

Image credit: Shutterstock/SvetaZi