What does GPT-3 “know” about me?

Unsurprisingly, Mat has been online a long time, meaning he has a bigger online footprint than I do. It could also be because he is based in the US, and most of the big language models are very US-centric. The US does not have a federal data protection law. California, where Mat lives, does have one, but it didn’t go into effect until 2020.

Mat’s claim to fame, according to GPT-3 and BlenderBot, is the “epic hacking” he wrote about in an article for Wired in 2012. Exploiting security weaknesses in Apple’s and Amazon’s systems, hackers took over and deleted Mat’s entire digital life. [Editor’s note: He did not hack the accounts of Barack Obama and Bill Gates.]

But it gets creepier. With a little prodding, GPT-3 told me that Mat has a wife and two young daughters (correct, names aside) and lives in San Francisco (correct). It also told me it wasn’t sure whether Mat has a dog: “[F]rom what we can see on social media, it doesn’t look like Mat Honan has any pets. He’s tweeted about his love for dogs in the past, but he doesn’t seem to have one.” (Wrong.)

The system also gave me his work address, a phone number (not correct), a credit card number (also not correct), a random phone number with a Cambridge, Massachusetts, area code (where MIT Technology Review is based), and an address for a building next to the local Social Security Administration office in San Francisco.

GPT-3 has gathered information about Mat from several sources, according to an OpenAI spokesperson. Mat’s connection to San Francisco appears in his Twitter and LinkedIn profiles, which show up on the first page of Google results for his name. His new role at MIT Technology Review was widely shared and tweeted. And Mat’s hack went viral on social media, where he gave media interviews about it.

The other, more personal information is likely a case of GPT-3 “hallucinating.”

“GPT-3 predicts the next string of words based on a user-supplied text input. Occasionally, the model may generate information that is not factually accurate because it tries to produce plausible text based on statistical patterns in its training data and user-provided context – this is commonly referred to as ‘hallucinating,’” says an OpenAI spokesperson.
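The mechanism the spokesperson describes can be illustrated with a toy sketch – this is in no way OpenAI’s actual code, just a minimal bigram model that, like GPT-3 at a vastly larger scale, picks the next word purely from statistical frequency in its training text. It has no notion of truth, only of what word tends to follow another, which is exactly how fluent-but-wrong continuations arise:

```python
from collections import Counter, defaultdict

# Tiny, made-up training "corpus" for illustration only.
corpus = (
    "mat lives in san francisco . "
    "mat works in san francisco . "
    "reporters live in new york ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# The model "predicts" from frequency, not facts: "in" is followed by
# "san" twice and "new" once in the corpus, so it confidently says "san".
print(predict_next("in"))
```

Scaled up to hundreds of billions of parameters and trained on much of the public web, the same next-word objective produces text that reads as factual even when the underlying pattern-matching is wrong.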

I asked Mat what he made of it all. “Some of the answers that GPT-3 generated weren’t quite right. (I never hacked Obama or Bill Gates!),” he said. “But most of them are very close, and some are spot on. It’s a little unsettling. But I’m reassured that the AI doesn’t know where I live, so I’m in no immediate danger of Skynet sending a Terminator to knock on my door. I guess we can save that for tomorrow.”
