Blake Lemoine, Google, and searching for souls in the algorithm


It wasn’t science that convinced Google engineer Blake Lemoine that one of the company’s AIs is sentient. Lemoine, who is also an ordained Christian mystic priest, says it was the AI’s comments about religion, as well as his “personal and spiritual beliefs,” that helped persuade him the technology had thoughts, feelings, and a soul.

“I am a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt,” Lemoine said in a recent tweet. “Who am I to tell God where he can and can’t put souls?”

Lemoine is probably wrong, at least from a scientific perspective. Leading AI researchers, as well as Google, say that LaMDA, the conversational language model that Lemoine was studying at the company, is very powerful, and advanced enough that it can provide extremely convincing answers to probing questions without actually understanding what it is saying. Google suspended Lemoine after the engineer, among other things, hired a lawyer for LaMDA and began talking to the House Judiciary Committee about the company’s practices. Lemoine alleges that Google is discriminating against him because of his religion.

Still, Lemoine’s beliefs have sparked a major debate, and they serve as a compelling reminder that as AI advances, people will have all sorts of far-out ideas about what the technology does and what it means to them.

“Because it’s a machine, we don’t tend to say, ‘It’s only natural for this to happen,’” Scott Midson, a liberal arts lecturer at the University of Manchester who studies theology and posthumanism, told Recode. “We go straight to the supernatural, magical, and religious.”

It’s worth noting that Lemoine isn’t the first Silicon Valley figure to make claims about artificial intelligence that, at least at first glance, sound religious. Ray Kurzweil, a prominent computer scientist and futurist, has long promoted the “singularity,” the notion that artificial intelligence will eventually surpass humanity and that humans could merge with the technology. Anthony Levandowski, who co-founded Google’s autonomous car venture, Waymo, started Way of the Future, a church dedicated entirely to artificial intelligence, in 2015 (the church disbanded in 2020). Even some practitioners of more traditional religions have begun to incorporate AI, including robots that dispense blessings and advice.

Optimistically, some people may find real comfort and wisdom in the answers offered by artificial intelligence. Religious ideas could also guide the development of AI and perhaps make the technology more ethical. But at the same time, there are real concerns that come with thinking of AI as something more than a human-created technology.

I recently spoke with Midson about these concerns. Not only do we risk overestimating AI and losing sight of its very real flaws, he told me, but we also risk buying into Silicon Valley’s effort to promote a technology that is still far less sophisticated than it seems. This interview has been edited for clarity and length.

Rebecca Heilweil

Let’s start with the big story that came out of Google a few weeks ago. How common is it for someone with religious views to believe that artificial intelligence or technology has a soul, or that it is something more than just technology?

Scott Midson

While this story sounds really striking (the idea of religion and technology coming together), the long history of these machines and religion makes religious motifs in computers and machines much more common than you might think.

If we go back to the Middle Ages, the medieval era, there were automata, which were basically self-moving machines. There is one automaton in particular, a mechanical monk, that was specifically designed to get people to reflect on the intricacies of God’s creation. Its movement was designed to invoke religious awe. At the time, the world was seen as a complex mechanism and God as the great watchmaker.

Jumping from the mechanical monk to another kind of mechanical monk: very recently, a German church in Hesse and Nassau made BlessU-2 to commemorate the 500th anniversary of the Reformation. BlessU-2 was basically a glorified ATM that dispensed blessings, moved its arms, and performed this big, ritualized religious gesture. There were many contradictory reactions. One in particular came from an elderly woman who said that a blessing she received from this robot was genuinely meaningful to her. She said, “Actually, something is going on here, something I can’t explain.”

Rebecca Heilweil

In the world of Silicon Valley and tech spaces, what kinds of similar claims have emerged?

Scott Midson

For some people, especially in Silicon Valley, there is a lot of publicity and money that can be tied to grandiose statements like “My AI is conscious.” It draws a lot of attention. It fires many people’s imaginations precisely because religion tends to go beyond what we can explain. There’s this supernatural connection.

There are many people who will gladly fan the flames of these conversations to keep the hype going. I think one of the things that can be quite dangerous is when that hype is not kept in check.

Rebecca Heilweil

From time to time, I’ll talk to Alexa or Siri and ask some big life questions. For example, if you ask Siri whether God is real, the bot will answer, “It’s all a mystery to me.” There was also a recent example of a journalist asking GPT-3, the language model created by the AI research lab OpenAI, about Judaism to see how good its answers could be. Sometimes the answers from these machines seem really useless, but other times they seem very wise. Why is that?

Scott Midson

Joseph Weizenbaum designed Eliza, the world’s first chatbot. Weizenbaum did some experiments with Eliza, which was just a rudimentary chatbot, a piece of language processing software. Eliza was designed to emulate a Rogerian psychotherapist, so basically your average counselor. Weizenbaum did not tell participants that they were going to talk to a machine; they were told they would interact with a therapist through a computer. People would say, “I feel very sad about my family,” and then Eliza would pick up on the word “family.” It would pick up certain parts of the sentence and almost return it as a question. That’s what we expect from a therapist; we don’t expect much more sense from them. It’s that reflective screen, where a computer doesn’t need to understand what is said to it to convince us that it’s doing its job as a therapist.

This Recode reporter had a brief chat with a recreation of the Eliza chatbot, which is available on the web.
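The keyword-spotting and reflection trick Midson describes can be sketched in a few lines of Python. This is a toy illustration of the general technique, not Weizenbaum's actual rules; the patterns and pronoun swaps here are invented for demonstration:

```python
import re

# Toy ELIZA-style rules: a keyword pattern to spot, and a template that
# reflects part of the user's sentence back as a question.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

# Pronoun swaps so the reflected fragment reads naturally ("my" -> "your").
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones in a captured fragment."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(sentence: str) -> str:
    """Return the first matching rule's reflected question, or a stock fallback."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # default when no keyword matches

print(respond("I feel very sad about my family"))
# -> Why do you feel very sad about your family?
```

The point of the sketch is how little machinery is needed: the program never models what "sad" or "family" mean, yet the reflected question reads like attentive listening.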

We now have much more complex AI software, software that can contextualize words within sentences. Google’s LaMDA technology has a lot of sophistication. It’s not just looking for a single word in the sentence; it can contextually locate words within different kinds of structures and settings. So it gives you the impression that it knows what it is talking about. One of the key questions around chatbots is: to what extent does the interlocutor, the machine we are talking to, really understand what is being said?

Rebecca Heilweil

Are there examples of robots that do not provide particularly good answers?

Scott Midson

There is a lot of caution warranted about what these machines do and what they don’t do. It’s about how they convince you that they understand, that kind of thing. Noel Sharkey is a prominent theorist in this field. He really doesn’t like these robots convincing you that they can do more than they actually can. He calls them “show bots.” One of his main examples of a show bot is Sophia, the robot that received honorary citizenship status in Saudi Arabia. This is more than a basic chatbot because it is in a robot body. You can clearly tell Sophia is a robot, if for no other reason than that the back of her head is a transparent casing, and all the wires and components are visible.

For Sharkey, all of this is just an illusion. It’s just smoke and mirrors. Sophia does not warrant the status of a person by any stretch of the imagination. She doesn’t understand what she’s saying. She has no hopes, dreams, or feelings, nothing that makes her as human as she might seem. The point is that fooling people is problematic. Sophia has plenty of swing-and-miss phrases. Sometimes she performs poorly, or says questionable things that raise eyebrows. But even where the illusion is more transparent, we carry on with a certain level of enthusiasm.

Often these robots have that “puppet on a string” quality: they’re not doing as many independent things as we think. We’ve also had robots give testimony. Pepper the robot gave evidence to a government inquiry on AI. It was a parliamentary select committee hearing, and it looked like Pepper was speaking for itself, saying all these things. Everything was pre-scripted, and that was not entirely transparent to everyone. And again, it’s this misunderstanding. Managing the hype is, I think, the big concern.

Rebecca Heilweil

It reminds me a little of that scene in The Wizard of Oz where the true wizard is finally revealed. How does the conversation about whether or not AI is sentient relate to the other important discussions taking place about AI at the moment?

Scott Midson

Microsoft’s Tay was another chatbot, released on Twitter with a machine learning algorithm so that it would learn from its interactions with people across the Twitter sphere. The problem is that Tay got trolled, and within 16 hours it had to be removed from Twitter because it was posting misogynistic, homophobic, and racist content.

How these robots, whether sentient or not, are made so much in our image is another big set of ethical issues. Many algorithms are trained on thoroughly human datasets. Those datasets carry our history and our interactions, and they are inherently biased. There are documented cases of algorithms exhibiting racial bias.

The question of sentience? I can see it as a bit of a red herring, but it’s actually also tied to how we make machines in our image and what we do with that image.

Rebecca Heilweil

Timnit Gebru and Margaret Mitchell, two prominent AI ethics researchers, raised this concern before they were both fired by Google: by treating the discussion of AI sentience as something standalone, we could lose sight of the fact that AI is created by humans.

Scott Midson

We almost see the machine as somehow separate from us, or even as a god of sorts. Going back to the black box: it’s something we don’t understand, it’s quasi-religious, it’s amazing, it has incredible potential. If you look at all the marketing around these technologies, they’re going to save us. But if we see AI in this detached way, if we see it as a kind of god, what does that encourage us to do?

This story was first published in the Recode newsletter. Sign up here so you don’t miss the next one!

