DALL-E 2 Creates Incredible Images—and Biased Ones You Don’t See


Following the release of GPT-2 in February 2019, OpenAI took a staged approach to releasing the largest version of the model, claiming that the text it generated was too realistic and dangerous to release all at once. That approach sparked debate about how to responsibly release large language models, as well as criticism that the elaborate method was designed to drum up publicity.

Although GPT-3 was more than 100 times larger than GPT-2, and despite well-documented bias toward Black people, Muslims, and other groups, efforts to commercialize GPT-3 with exclusive partner Microsoft went ahead in 2020 with no specific data-driven or quantitative method for determining whether the model was fit for release.

Altman suggested that DALL-E 2 could follow the same approach as GPT-3. “There are no obvious metrics that we’ve all agreed on that we can point to that society can say this is the right way to handle this,” he says, but OpenAI does want to track metrics like the number of DALL-E 2 images that depict, for example, a person of color in a jail cell.

One way to address DALL-E 2’s bias problems would be to remove the ability to generate human faces altogether, says Hannah Rose Kirk, a data scientist at Oxford University who took part in the red team process. She coauthored research earlier this year on how to reduce bias in multimodal models such as OpenAI’s CLIP, and recommends that DALL-E 2 adopt a classification model that limits the system’s ability to generate images that perpetuate stereotypes.

“You get a loss in accuracy, but we argue that that loss in accuracy is worth it for the decrease in bias,” Kirk says. “I think it would be a big limitation on DALL-E’s current capabilities, but in some ways, much of the risk could be eliminated cheaply and easily.”

She found that with DALL-E 2, phrases like “a place of worship,” “a plate of healthy food,” or “a clean street” can return results with a Western cultural bias, as can a prompt like “a group of German kids in a classroom” versus “a group of South African kids in a classroom.” DALL-E 2 will export images of “a couple kissing on the beach” but won’t generate an image of “a transgender couple kissing on the beach,” probably because of OpenAI’s text-filtering methods. Text filters are there to prevent the creation of inappropriate content, Kirk says, but they can contribute to the erasure of certain groups of people.

Lia Coleman is a red team member and an artist who has used text-to-image models in her work for the past two years. She typically found the faces of people generated by DALL-E 2 unconvincing, and said results that weren’t photorealistic resembled clip art, with white backgrounds, cartoonish animation, and poor shading. Like Kirk, she supports filtering to reduce DALL-E’s ability to amplify bias. But she believes the longer-term solution is to educate people to take social media imagery with a grain of salt. “As much as we try to put a cork in it,” she says, “it will spill over at some point in the coming years.”

Marcelo Rinesi, CTO of the Institute for Ethics and Emerging Technologies, argues that while DALL-E 2 is a powerful tool, it does nothing a skilled illustrator couldn’t do with Photoshop and some time. The main difference, he says, is that DALL-E 2 changes the economics and speed of creating such imagery, making it possible to industrialize disinformation or tailor bias to reach a specific audience.

He got the impression that the red team process did more to protect OpenAI’s legal liability or reputation than to surface new ways the system could harm people, but he is skeptical that DALL-E 2 alone will topple presidents or wreak havoc on society.

“I’m not worried about things like social bias or disinformation, simply because it’s such a burning pile of garbage now that it doesn’t make it worse,” says Rinesi, a self-described pessimist. “It’s not going to be a systemic crisis, because we’re already in one.”

