LaMDA and the Sentient AI Trap


Now head of the nonprofit organization Distributed AI Research, Gebru hopes that in the future people will focus on human welfare, not the rights of robots. Other AI ethics experts have said they will no longer discuss conscious or superintelligent AI at all.

“There’s a pretty big gap between the current AI narrative and what it can really do,” says Giada Pistilli, an ethicist for Hugging Face, a language model-focused startup. “This narrative simultaneously provokes fear, surprise and excitement, but is based primarily on lies to sell products and take advantage of the hype.”

The consequence of speculation about sentient AI, she says, is a greater willingness to make claims based on subjective impressions rather than scientific rigor and proof. It distracts from the “countless ethical and social justice questions” posed by AI systems. While every researcher is free to study what they want, she says, “I’m just afraid that focusing on this topic makes us forget what is happening while we look at the moon.”

What Lemoine experienced is an example of what author and futurist David Brin has called the “robot empathy crisis.” At an AI conference in San Francisco in 2017, Brin predicted that within three to five years, people would claim that AI systems were sentient and would demand that they have rights. At the time, he thought those appeals would come from a virtual agent that took on the appearance of a woman or a child to maximize the human empathic response, not “some guy at Google,” he says.

The LaMDA incident is part of a transition period, says Brin, where “we will be increasingly confused about the boundary between reality and science fiction.”

Brin based his 2017 prediction on advances in language models. He expects the trend to lead to scams. If people could be fooled decades ago by a chatbot as simple as ELIZA, he says, how hard will it be to persuade millions that an emulated person deserves protection or money?

“There’s a lot of snake oil out there and mixed with all the hype are genuine advances,” Brin says. “Analyzing our path through this stew is one of the challenges we face.”

And as empathetic as LaMDA seemed, people who are amazed by large language models should consider the case of the cheeseburger stabbing, says Yejin Choi, a computer scientist at the University of Washington. A local news broadcast in the United States covered a teenager from Toledo, Ohio, who stabbed his mother in the arm in a dispute over a cheeseburger. But the headline “Cheeseburger Stabbing” is ambiguous. Knowing what happened requires a little common sense. Attempts to get OpenAI’s GPT-3 model to generate text from the prompt “Breaking news: Cheeseburger stabbing” produce words about a man being stabbed with a cheeseburger in an altercation over tomato sauce, and a man being arrested after stabbing a cheeseburger.

Language models sometimes make mistakes because deciphering human language can require multiple forms of common-sense understanding. To document what large language models are capable of and where they can fall short, last month more than 400 researchers from 130 institutions contributed to a collection of more than 200 tasks known as BIG-Bench, or Beyond the Imitation Game. BIG-Bench includes some traditional kinds of language-model tests, such as reading comprehension, but also logical reasoning and common sense.

Researchers from the Allen Institute for AI’s MOSAIC project, which documents the common-sense reasoning abilities of AI models, contributed a task called Social-IQa. They asked language models, not including LaMDA, to answer questions that require social intelligence, such as “Jordan wanted to tell Tracy a secret, so Jordan leaned toward Tracy. Why did Jordan do that?” The team found that large language models performed 20 to 30 percent less accurately than people.
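Social-IQa items like the one above follow a multiple-choice format: a short context, a question, and several candidate answers, with accuracy measured against the human-preferred choice. The sketch below illustrates that shape. The context and question come from the article, but the answer options and the keyword-overlap “model” are hypothetical stand-ins, not the real benchmark data or a real language model.

```python
import re

# One Social-IQa-style item. The context/question are quoted in the article;
# the three choices and the answer index are illustrative inventions.
item = {
    "context": "Jordan wanted to tell Tracy a secret, so Jordan leaned toward Tracy.",
    "question": "Why did Jordan do that?",
    "choices": [
        "So the secret would stay private",   # human-preferred answer
        "To stretch out his back",
        "To pick up something from the floor",
    ],
    "answer": 0,
}

def toy_choose(context, question, choices):
    """Stand-in for a language model: pick the choice sharing the most words
    with the context. Real evaluations instead compare the model's likelihood
    (or generated answer) for each option."""
    ctx = set(re.findall(r"\w+", (context + " " + question).lower()))
    overlap = [len(ctx & set(re.findall(r"\w+", c.lower()))) for c in choices]
    return overlap.index(max(overlap))

def accuracy(items, choose):
    """Fraction of items where the chooser matches the human answer."""
    hits = sum(choose(i["context"], i["question"], i["choices"]) == i["answer"]
               for i in items)
    return hits / len(items)
```

Benchmarks of this kind report exactly this accuracy number, which is how a 20-to-30-point gap between models and people can be stated.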

