Google Has a Plan to Stop Its New AI From Being Dirty and Rude


Silicon Valley CEOs tend to focus on the positives when announcing their company's next big thing. In 2007, Apple's Steve Jobs praised the "revolutionary user interface" and "innovative software" of the first iPhone. Google CEO Sundar Pichai took a different tack at his company's annual conference on Wednesday when he announced a beta test of the "most advanced conversational AI to date."

Pichai said the chatbot, known as LaMDA 2, can converse on any topic and had performed well in tests with Google employees. He announced a forthcoming app called AI Test Kitchen that will make the bot available for outsiders to try. But Pichai added a stark warning. "While we have improved safety, the model might still generate inaccurate, inappropriate, or offensive responses," he said.

Pichai's hesitant tone illustrates the mix of excitement, puzzlement, and concern swirling around a string of recent advances in the capabilities of machine learning software that processes language.

The technology has already improved the power of autocomplete and web search. It has also spawned new categories of productivity applications that help workers generate fluent text or programming code. And when Pichai first revealed the LaMDA project last year, he said it could eventually be put to work inside Google's search engine, virtual assistant, and workplace applications. Yet despite all that dazzling promise, it is unclear how to reliably control these new AI wordsmiths.

Google's LaMDA, or Language Model for Dialogue Applications, is an example of what machine learning researchers call a large language model. The term describes software that builds up a statistical sense of the patterns of language by processing huge volumes of text, usually scraped from the web. LaMDA, for example, was initially trained on more than a trillion words drawn from online forums, question-and-answer sites, Wikipedia, and other webpages. That vast trove of data helps the algorithm perform tasks such as generating text in different styles, interpreting new text, or functioning as a chatbot.

These systems, if they work, will be nothing like the frustrating chatbots you use today. Right now, Google Assistant and Amazon's Alexa can only perform certain preprogrammed tasks, and they deflect when presented with something they don't understand. What Google is now proposing is a computer you can actually talk with.
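For readers curious what "functioning as a chatbot" looks like in practice, here is a minimal sketch using the open source Hugging Face transformers library and Microsoft's small, publicly released DialoGPT model as a stand-in. LaMDA itself is not publicly available, and Google's actual system is far larger and more sophisticated; the model choice and sampling settings here are illustrative assumptions, not anything Google has described.

```python
# Minimal chatbot sketch with a small public language model (DialoGPT),
# standing in for the much larger LaMDA. The model simply continues the
# conversation based on statistical patterns learned from training text.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode the user's message, appending the end-of-sequence token so the
# model knows the user's turn is over.
user_input = "Tell me something interesting about Pluto."
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")

# Sample a reply; top-p sampling trades determinism for variety.
reply_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens (the reply), not the prompt.
reply = tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

Nothing in this loop constrains what the model says, which is exactly the control problem the article describes: the reply is whatever continuation the statistics favor.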

Chat transcripts released by Google show that LaMDA can be informative, thought-provoking, or even funny. Testing the chatbot prompted Google vice president and AI researcher Blaise Agüera y Arcas to write a personal essay last December arguing that the technology could provide new insights into the nature of language and intelligence. "It can be very hard to shake the idea that there's a 'who,' not an 'it,' on the other side of the screen," he wrote.

Pichai made clear when he announced the first version of LaMDA last year, and again on Wednesday, that he sees it potentially offering a path to voice interfaces vastly broader than the often frustratingly limited capabilities of services like Alexa, Google Assistant, and Apple's Siri. Now Google's leaders appear convinced that they may have finally found a way to make computers you can genuinely converse with.

At the same time, large language models have proved themselves fluent in foul, nasty, and plainly racist language. Scraping billions of words of text from the web inevitably sweeps in plenty of unsavory content. OpenAI, the company behind the GPT-3 text generator, has reported that its creation can perpetuate stereotypes about gender and race, and it asks customers to implement filters to screen out undesirable content.
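Neither OpenAI nor the article prescribes a particular filter, so the sketch below is purely illustrative: it runs a generated response through one open source toxicity classifier before showing it to a user. The classifier choice (unitary/toxic-bert) and the 0.5 threshold are assumptions made here for demonstration, not anything specified by OpenAI or Google.

```python
# Illustrative output filter of the kind OpenAI asks customers to build.
# The model name "unitary/toxic-bert" and the 0.5 cutoff are assumptions
# for this sketch, not a recommendation from OpenAI or Google.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert", top_k=None)

def filter_response(generated_text: str, threshold: float = 0.5) -> str:
    """Return the model's text, or withhold it if it scores as toxic."""
    results = classifier(generated_text)
    # Depending on the library version, results for a single string may be
    # nested one level deep; flatten if so.
    if results and isinstance(results[0], list):
        results = results[0]
    scores = {item["label"]: item["score"] for item in results}
    if scores.get("toxic", 0.0) > threshold:
        return "[response withheld by content filter]"
    return generated_text

print(filter_response("Have a wonderful day!"))
```

In practice, production systems layer several checks of this kind over both user prompts and model outputs; a single classifier with one threshold is only a starting point.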


