Internet art and image archives are already flooded with AI-generated images. Expect even more highly imaginative pictures and photos of dubious origin now that DALL-E, the AI image generator that arguably started the current artificial image craze, is open and available to all.
In a Wednesday blog post, DALL-E developer OpenAI said it already has 1.5 million users creating more than 2 million AI-generated images a day. Using that data and user feedback, the company said it has strengthened its filters to reject images that emulate sexual, violent, or political content. There is no API available for DALL-E yet, but one is apparently in development.
As of the time of reporting, the DALL-E section of the OpenAI website still asks users to join a waitlist, though a sign-up page is now live. In an email statement, OpenAI said that from the start it has taken “an iterative deployment approach to responsibly scale DALL-E, which has helped us discover ways it can be used as a powerful creative tool.”
Users who sign up get 50 free credits to create images during the first month, and then 15 free credits every month after that.
OpenAI’s image generator was first revealed in April, and people quickly put themselves on the waitlist, some twiddling their thumbs for months before their turn came. Though DALL-E (named after the famed artist Salvador Dalí and styled after Disney Pixar’s WALL-E) was the first system to make a real leap in the capabilities of AI image technology, other systems have caught up, at least in terms of popularity. Midjourney hosts hundreds of thousands of users on its Discord-based platform, and StabilityAI, maker of the AI art generator Stable Diffusion, has reportedly been in talks to raise millions on the back of its looser, more controversial system.
OpenAI’s announcement puts the company in a strange position, given both AI art’s burgeoning popularity and the public pushback against it. The Washington Post spoke with several OpenAI product heads while demonstrating how the software could be used to create images of fake protests, which would violate the company’s restrictions on political images. The system limits users’ prompts by triggering content warnings on words like “preteen” and “teenager.” At the same time, even though the system is supposed to restrict prompts based on public figures, the Post noted it still allowed users to generate images of people like Mark Zuckerberg and Elon Musk.
And there’s still the major question of ownership. A tech exec drew widespread consternation for entering an AI-generated piece into a local art competition and winning the top prize. Last week, an artist claimed she had received the first copyright for a work created using AI, but the U.S. Copyright Office has stated it does not accept any work that was not created by human hands, meaning the question remains in limbo.
Of course, none of the most popular image generators has avoided controversy. Stable Diffusion has reportedly been used to generate child porn, though StabilityAI founder Emad Mostaque said the company was working on systems to block such content. The heads of StabilityAI and OpenAI have even gone back and forth over which of their systems is the least controversial.
I love how people make up shit just to make noise. To be clear there is no AI model that does more for ethical use and restricting misuse than @OpenAI #Dalle
— Abran Maldonado (@abran) September 22, 2022
Last week, OpenAI announced it was lifting restrictions that had stopped users from uploading real human faces for the AI model to take a crack at editing. The company said it had created detection technology to stop users from abusing the system to create pornographic or violent content. Users are supposedly barred from uploading photos of people’s faces without the subjects’ consent. The company had previously opened up its systems to researchers looking to create artificial human faces.