AI “Art” Isn’t Art.

Image created by Lensa that shows an artist’s signature in the corner. Credit: @tinymediaempire on Twitter

In recent years (and even months), the public’s notion of what AI can do has stretched drastically. From the writings of ChatGPT to deepfakes of our favorite celebrities, artificial intelligence is now in the hands of anyone who simply searches for it. Over the past couple of months, AI art programs like Lensa, Stable Diffusion, and DALL-E 2 have blown up in popularity. These programs generate images in response to whatever text prompt they’re given. Seems simple enough, so what’s the problem?

With any type of AI, the program has to be “trained”: fed images, text, and pretty much everything it can get its hands on so it can respond to the prompts asked of it. Using datasets such as LAION-5B, these programs are trained on billions of images and files scraped from all corners of the internet. But those datasets include copyrighted artworks, gathered without the artists’ consent or knowledge. When generating an image, a person can request a product in the style of a specific artist simply by including that artist’s name in the prompt. This exact thing happened to fantasy illustrator Greg Rutkowski, whose name appeared in over 93,000 Stable Diffusion prompts. Getty Images and numerous independent artists have filed lawsuits for copyright infringement against companies like Stability AI (the maker of Stable Diffusion), Prisma Labs (the parent company of Lensa), and DeviantArt.

Another image with a “botched” signature. Credit: @laurenipsum on Twitter

These programs also pose a threat of hyper-sexualization and the creation of explicit images. Olivia Snow of Wired found that Lensa turned her childhood photographs into “fully nude photos of an adolescent and sometimes childlike face but a distinctly adult body.” The app’s beautify option also tends to whiten and Anglicize women of color. On top of that, AI art generators aren’t subject to content moderation that detects child sexual exploitation material (CSEM), and these programs aren’t required to report CSEM to the National Center for Missing and Exploited Children. This lack of enforcement of their own policies against graphic and sexual images places women and children at unnecessary risk of harm.

There’s also a moral aspect to the debate over AI art. Creating art is a form of human expression, but it’s also a craft people study for their entire lives. Having a machine copy the artful expression of other artists takes away from the joy and experience that art creates. Asking a program to generate a portrait of yourself or someone else, for example, is inherently devoid of human expression. The image may look stunning and human-made, but it’s not; it’s assembled by copying artistic styles, especially those of digital artists. These mass-produced images cheapen the idea of supporting small artists and their work. Supporters of AI claim that these programs take “influence” from other artists in the same way that humans do. The future of AI (and our future with it) is definitely uncertain. But at the moment, let’s celebrate works by true artists and be mindful of how we use AI.