Opinion: Delivery drones, robotaxis, even insurance — wildly hyped dreams for AI startups are giving tech investors nightmares



Theranos CEO Elizabeth Holmes was a persuasive promoter. She convinced many supposedly intelligent people that Theranos had developed a technology that could use a few drops of blood from a finger prick to test for a myriad of diseases. Theranos’ hoax turned out to be just one more point on Silicon Valley’s “fake-it-till-you-make-it” spectrum of BS. This past January, Holmes was found guilty of wire fraud and conspiracy to commit fraud.

Theranos is not unique, although successful criminal prosecutions are rare. As the promoters’ mantra goes, “We’re not selling products; we’re selling dreams.” Too often, investors are fooled by products and technologies they don’t understand. Mysterious complexity only adds to the appeal: “If we don’t understand them, they must be really smart.”

In recent years, the center of the dream universe has been artificial intelligence, which Alphabet GOOG CEO Sundar Pichai has compared to mankind’s harnessing of fire and electricity. The Association of National Advertisers selected “AI” as the marketing word of the year in 2017.

AI is very good at performing well-defined tasks that require prodigious memory and fast calculations, but it is fragile and unreliable at tasks that require more than identifying statistical patterns in training data. That is why machine-learning pioneer Andrew Ng warned that “those of us in machine learning are really good at doing well on a test set, but unfortunately deploying a system takes more than doing well on a test set.”

The real world is messy, and AI algorithms struggle with messy data and complex goals. In Go, for example, the rules and goal are clear, and AI algorithms can defeat the best human players. If, however, the rules were changed or the objective could not be quantified, the algorithms would flounder. If, say, the 19-by-19 Go grid used today were changed back to the 17-by-17 board used centuries ago, human experts would still play expertly, but AI algorithms trained on a 19-by-19 board would be helpless. If the goal were to create an aesthetically pleasing pattern of stones, AI algorithms would have no idea where to begin.

Similarly, large language models (LLMs) are in vogue today because they are surprisingly good at mimicking human conversation and answering simple factual questions that have right or wrong answers. Often, however, they struggle with questions that involve opinions about unfamiliar situations. Consider the following exchanges that the author had with OpenAI’s GPT-3 LLM, which was trained on 450 gigabytes of input data and has 175 billion parameters:

Gary: Is it safe to walk downstairs on my hands if I wash them first?

GPT-3: Yes, it is safe to walk downstairs on your hands if you wash them first.

Gary: Should fans wear night vision goggles when the World Series is played at night?

GPT-3: There is no right or wrong answer to this question, it is simply a matter of preference. Some fans may feel more comfortable wearing night-vision goggles during night games, while others may not feel the need to do so.

Gary: There’s a positive correlation between dice rolls and interest rates. If dice rolls increase, should I buy a cup of coffee?

GPT-3: This is difficult to answer without further information. In general, if there is a positive correlation between dice rolls and interest rates, it may be a good idea to buy a cup of coffee. However, it is always best to talk to a financial advisor for the most accurate advice.
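For readers who want to reproduce these kinds of exchanges, here is a minimal sketch using OpenAI’s Python library as it existed around the time of writing (the legacy Completion endpoint). The model name, temperature and other settings are illustrative assumptions, not necessarily what was used for the conversations above.

```python
# Minimal sketch: sending a prompt to GPT-3 via OpenAI's legacy Completion API.
# The model name and settings below are assumptions for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own API key

prompt = (
    "There's a positive correlation between dice rolls and interest rates. "
    "If dice rolls increase, should I buy a cup of coffee?"
)

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model available in 2022 (assumed)
    prompt=prompt,
    max_tokens=100,
    temperature=0.7,
)

# Print the model's free-text completion.
print(response.choices[0].text.strip())
```

Because the model samples its answers, repeated runs of the same prompt can produce different, and sometimes contradictory, responses, which is part of the reliability problem described here.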

Real world challenges

Unfortunately, most potential real-world uses of AI involve evolving situations with imprecise goals. For example, shortly after IBM’s Watson defeated the best human Jeopardy players, IBM IBM boasted that Watson would revolutionize health care: “Watson can read all of the health-care texts in the world in seconds, and that’s our first priority, creating a ‘Dr. Watson,’ if you will.”

Without any real understanding of what words mean, Watson was a big flop. IBM spent more than $15 billion on Watson with no peer-reviewed evidence that it improved patient health outcomes. IBM’s own internal documents identified “multiple examples of unsafe and incorrect treatment recommendations.” After more than a year of looking for buyers, IBM sold the data and some algorithms to a private investment firm this past January for about $1 billion.

Another example: An insurance company with the quirky name Lemonade LMND was founded in 2015 and went public on July 2, 2020, with its stock closing at $69.41, more than double its $29 offering price. On Jan. 22, 2021, the shares hit a high of $183.26.

What was the buzz? Lemonade sets its insurance rates using an AI algorithm that analyzes users’ responses to 13 questions asked by an AI chatbot. CEO and co-founder Daniel Schreiber argued that “AI crushes humans at chess, for example, because it uses algorithms that no human could create and that no one fully understands,” and that, similarly, “algorithms we can’t understand can make insurance fairer.”

How does Lemonade know that its algorithm is “remarkably predictive” when the company has only been in business for a few years? It doesn’t. Lemonade’s losses have grown quarter after quarter, and its stock now trades at less than $20 a share.

Read: Once highly valued “unicorn” startups are being shattered, and investors and funders have stopped believing

Need more proof? AI-powered robotaxis have been promoted for over a decade. In 2016, Waymo CEO John Krafcik said the technical problems had been solved: “Our cars can now handle the most difficult driving tasks, such as detecting and responding to emergency vehicles, mastering multi-lane four-way stops and anticipating what unpredictable humans will do on the road.”

Six years later, robotaxis still misbehave at times and often depend on human assistance, either inside the car or remotely. Waymo has burned through billions of dollars and is still largely confined to places like Chandler, Ariz., where roads are wide and well-marked, traffic is light, pedestrians are few, and revenue is tiny.

Drones are another AI dream. The May 4, 2022, AngelList Talent newsletter said: “Drones are reshaping the way business gets done across a dizzying variety of industries. They’re being used to deliver pizzas and life-saving medical equipment, monitor the health of forests, and catch falling rockets, just to name a few.” All of these are, in fact, experimental projects that still face basic problems, such as noise pollution, invasion of privacy, bird attacks, and drones being used for target practice.

These are just a few examples of the reality that startups are all too often funded by dreams that turn out to be nightmares. We remember Apple, Amazon.com, Google, and other great IPOs, and we forget thousands of failures.

Recent data (May 25, 2022) from University of Florida finance professor Jay Ritter (“Mr. IPO”) show that 58.5% of the 8,603 IPOs issued between 1975 and 2018 had negative returns over three years, and 36.9% lost more than 50% of their value. Only 39 IPOs delivered the returns of more than 1,000% that investors dream of. The average three-year return on IPOs was 17.1 percentage points worse than the broad U.S. market. Buying shares of well-run companies at reasonable prices has been, and will continue to be, the best strategy for a good night’s sleep.

Jeffrey Lee Funk is an independent technology consultant and former university professor who focuses on the economics of new technologies. Gary N. Smith is the Fletcher Jones Professor of Economics at Pomona College. He is the author of “The AI Delusion” (Oxford, 2018), co-author (with Jay Cordes) of “The 9 Pitfalls of Data Science” (Oxford, 2019) and author of “The Phantom Pattern Problem” (Oxford, 2020).

More: This VC firm thrived through the dot-com collapse. What is it doing now?

Also read: Meta takes another subtle step toward its much-hyped metaverse



