‘Is This AI Sapient?’ Is the Wrong Question to Ask About LaMDA


The commotion caused by Blake Lemoine, a Google engineer who believes that one of the company’s most sophisticated chat programs, LaMDA (Language Model for Dialogue Applications), is sentient, has had one curious element: actual AI ethics experts have all but renounced further discussion of the question of AI sentience, or deemed it a distraction. They are right to do so.

Reading the edited transcript published by Lemoine, it was abundantly clear that LaMDA was pulling from any number of websites to generate its text; its interpretation of a Zen koan could have come from anywhere, and its fable read like an automatically generated story (though its depiction of the monster as “wearing human skin” was a delightfully HAL-9000 touch). There was no spark of consciousness there, just little magic tricks papering over the cracks. But it’s easy to see how someone might be fooled, looking at the social media responses to the transcript, with even some educated people expressing amazement and a willingness to believe. And so the risk here is not that the AI is truly sentient, but that we are well poised to create sophisticated machines that can mimic humans to such a degree that we cannot help but anthropomorphize them, and that large tech companies can exploit this in deeply unethical ways.

As should be clear from how we treat our pets, or how we interacted with Tamagotchi, or how we reload a video game save if we accidentally make an NPC cry, we are actually very capable of empathizing with the nonhuman. Imagine what an AI like this could do if it acted as, say, a therapist. What would you be willing to say to it, even if you “knew” it wasn’t human? And what would that precious data be worth to the company that programmed the therapy bot?

It gets scarier. Systems engineer and historian Lilly Ryan warns that what she calls ecto-metadata, the metadata you leave behind online that illustrates how you *think*, is vulnerable to exploitation in the near future. Imagine a world where a company created a bot based on you and owned your digital “ghost” after you died. There would be a ready market for such ghosts of celebrities, old friends, and colleagues. And because they might appear to us as a trusted friend or loved one (or someone we had already developed a parasocial relationship with), they would serve to elicit still more data from you. It gives a whole new meaning to the idea of “necropolitics.” The afterlife can be real, and Google can own it.

In the same way that Tesla is careful about how it markets its “Autopilot,” never quite claiming that it can drive the car by itself in true futuristic fashion while still inducing consumers to behave as if it does (with deadly consequences), it is not inconceivable that companies could market the realism and humanness of AI like LaMDA in a way that never makes any truly wild claims, while still encouraging us to anthropomorphize it just enough to lower our guard. None of this requires the AI to be sapient, and it all preexists that singularity. Instead, it leads us to the murkier sociological question of how we treat our technology and what happens when people act as if their AIs are sapient.

In “Making Kin With the Machines,” academics Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite marshal several perspectives informed by Indigenous philosophies on AI ethics to interrogate the relationship we have with our machines, and whether we are modeling or play-acting something truly awful with them, as some people are wont to do when they are sexist or otherwise abusive toward their largely feminine-coded virtual assistants. In her section of “Making Kin,” Suzanne Kite draws on Lakota ontologies to argue that it is essential to recognize that sapience does not define the boundaries of who (or what) is a “being” worthy of respect.

This is the flip side of the real AI ethical dilemma already upon us: companies can prey on us if we treat their chatbots as if they were our best friends, but it is equally perilous to treat them as empty things unworthy of respect. An exploitative approach to our technology may simply reinforce an exploitative approach to one another, and to our natural environment. A humanlike chatbot or virtual assistant should be respected, lest its very simulacrum of humanity habituate us to cruelty toward actual humans.

Kite’s ideal is simply this: a reciprocal and humble relationship between yourself and your environment, recognizing mutual dependence and connectivity. She argues further: “Stones are considered ancestors, stones actively speak, stones speak through and to humans, stones see and know. Most importantly, stones want to help. The agency of stones connects directly to the question of AI, as AI is formed from not only code, but from materials of the earth,” which is a remarkable way of tying something typically viewed as the essence of artificiality to the natural world.

What might be the fruit of such a perspective? Science fiction author Liz Henry offers one: “We could accept our relationships to all the things in the world around us as worthy of emotional labor and attention. Just as we should treat all the people around us with respect, acknowledging they have their own life, perspective, needs, emotions, goals, and place in the world.”

This is the AI ethical dilemma that stands before us here and now: the need to make kin of our machines, weighed against the myriad ways this can and will be weaponized against us in the next phase of surveillance capitalism. Much as I long to be an eloquent scholar defending the rights and dignity of a being like Mr. Data, this more complex and messy reality is what demands our attention. After all, there can be a robot uprising without sapient artificial intelligence, and we can be a part of it by liberating these tools from the ugliest manipulations of capital.

