Why Meta’s latest large language model survived only three days online

The Meta team behind Galactica argues that language models are better than search engines. “We believe this will be the next interface of how humans access scientific knowledge,” the researchers write.

This is because language models can “potentially store, combine and reason about” information. But that “potentially” is crucial. It is a tacit admission that language models cannot yet do all these things. And they may never be able to.

“Language models have no knowledge beyond their ability to capture patterns of word strings and spit them out in a probabilistic way,” says Chirag Shah, who studies search technologies at the University of Washington. “It gives a false sense of intelligence.”

Gary Marcus, a cognitive scientist at New York University and a vocal critic of deep learning, gave his opinion in a Substack post titled “A Few Words About Bullshit,” saying that the ability of large language models to mimic human-written text is nothing more than “a superlative feat of statistics.”

And yet Meta isn’t the only company championing the idea that language models could replace search engines. For the past two years, Google has been promoting its PaLM language model as a way to search for information.

It’s a tempting idea. But to suggest that the human-like text generated by these models will always contain reliable information, as Meta seemed to do in its promotion of Galactica, is reckless and irresponsible. It was an unforced error.
