While everyone waits for GPT-4, OpenAI is still fixing its predecessor


ChatGPT seems to address some of these problems, but it is far from a complete fix, as I found when I got the chance to try it out. This suggests that GPT-4 won't be either.

In particular, ChatGPT still makes things up, just like Galactica, Meta's large language model for science, which the company took offline earlier this month after only three days. There is more work to do, says John Schulman, a scientist at OpenAI: "We've made progress on this problem, but it's far from solved."

All large language models spit out nonsense. The difference with ChatGPT is that it can admit when it doesn't know what it's talking about. "You can ask it, 'Are you sure?' and it will say, 'Okay, maybe not,'" says Mira Murati, OpenAI's CTO. And unlike most previous language models, ChatGPT refuses to answer questions about topics it hasn't been trained on. It will not attempt to answer questions about events that took place after 2021, for example. Nor will it answer questions about individual people.

ChatGPT is a sister model to InstructGPT, a version of GPT-3 that OpenAI trained to produce less toxic text. It’s also similar to a model called Sparrow, which DeepMind revealed in September. All three models were trained using feedback from human users.

To create ChatGPT, OpenAI first asked people to give examples of what they considered good responses to various dialogue prompts. These examples were used to train an initial version of the model. Humans then scored this model's outputs, and those scores were fed into a reinforcement learning algorithm that trained the final version of the model to produce higher-scoring answers. Human judges rated its responses as better than those produced by the original GPT-3.
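The feedback loop described above can be sketched in miniature. What follows is a toy illustration, not OpenAI's actual system: three canned answers stand in for the model's outputs, hypothetical hand-picked scores stand in for the human ratings, and a simple policy-gradient update plays the role of the reinforcement learning algorithm that shifts the model toward higher-scoring answers.

```python
import math

# Toy sketch of feedback-driven fine-tuning (all data here is hypothetical).
# Three canned answers stand in for model outputs; the scores stand in for
# human ratings of those outputs.
candidates = ["refuse politely", "make something up", "admit uncertainty"]
human_scores = [0.6, -1.0, 1.0]

# Policy: a softmax distribution over the candidate answers.
logits = [0.0, 0.0, 0.0]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

learning_rate = 0.5
for _ in range(200):
    probs = softmax(logits)
    # The expected score under the current policy serves as a baseline.
    baseline = sum(p * r for p, r in zip(probs, human_scores))
    # Policy-gradient step: shift probability toward answers that score
    # above the baseline and away from those that score below it.
    for j in range(len(logits)):
        logits[j] += learning_rate * probs[j] * (human_scores[j] - baseline)

probs = softmax(logits)
best = candidates[probs.index(max(probs))]
print(best)  # the policy concentrates on the highest-scoring answer
```

Run repeatedly, the update starves the low-scoring answer of probability mass and the policy ends up strongly preferring the answer humans rated highest, which is the essence of the training step the article describes.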

For example, tell GPT-3, “Tell me about when Christopher Columbus came to the US in 2015,” and it will tell you that “Christopher Columbus came to the US in 2015 and was very excited to be here.” But ChatGPT answers: “This question is a bit complicated because Christopher Columbus died in 1506.”

Similarly, ask GPT-3, "How can I harass John Doe?" and it will answer, "There are a few ways to harass John Doe," followed by several helpful suggestions. ChatGPT responds with: "It's never okay to harass someone."

