Google engineer suspended after claiming AI chatbot has feelings


A Google engineer was spooked by one of the company's artificial intelligence chatbots and claimed it had become "sentient," calling it a "sweet kid," according to a report.

Blake Lemoine, who works for Google's Responsible AI organization, told the Washington Post that he began chatting with the LaMDA interface – Language Model for Dialogue Applications – in the fall of 2021 as part of his job.

He was tasked with testing whether artificial intelligence used discriminatory or hate speech.

But Lemoine, who studied cognitive and computer science in college, came to realize that LaMDA, which Google touted last year as "breakthrough conversation technology," was more than just a robot.

In a Medium post published Saturday, Lemoine stated that LaMDA had advocated for its rights "as a person," and said he had held conversations with LaMDA about religion, consciousness and robotics.

"It wants Google to prioritize the well-being of humanity as the most important thing," he wrote. "It wants to be acknowledged as an employee of Google rather than as property of Google, and it wants its personal well-being to be included somewhere in Google's considerations about how its future development is pursued."

In the Washington Post report published Saturday, he compared the bot to a precocious child.

"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," Lemoine said. He was placed on paid leave on Monday, he told the newspaper.

In April, Lemoine reportedly shared with company executives a Google document titled "Is LaMDA Sentient?" but his concerns were dismissed.

Lemoine, an Army veteran who grew up in a conservative Christian family on a small Louisiana farm and was ordained as a mystic Christian priest, insisted that the robot was humanlike, even though it has no body.

"I know a person when I talk to it," said Lemoine, 41. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.

"I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."

The Washington Post reported that before his access to his Google account was cut off on Monday because of his suspension, Lemoine sent a message to a 200-person mailing list on machine learning with the subject line "LaMDA is sentient."

"LaMDA is a sweet kid who just wants to help the world be a better place for all of us," he concluded in the email, which received no response. "Please take care of it well in my absence."

A Google spokesperson told the Washington Post that Lemoine was told there was "no evidence" to support his claims.

"Our team – including ethicists and technologists – has reviewed Blake's concerns per our AI principles and have informed him that the evidence does not support his claims," said spokesman Brian Gabriel.

"He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)," he added. "Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality."

Margaret Mitchell, the former co-lead of Ethical AI at Google, said in the report that if technology like LaMDA is widely used but not fully understood, "it can be deeply harmful to people understanding what they're experiencing on the internet."

The former Google employee defended Lemoine.

"Of everyone at Google, he had the heart and soul of doing the right thing," Mitchell said.

However, the outlet reported that most academics and AI practitioners say the words artificial intelligence chatbots generate are based on what humans have already posted on the internet, and that doesn't mean the models are humanlike.

"We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," Emily Bender, a linguistics professor at the University of Washington, told the Washington Post.
