Every time I see someone comparing the process of Artificial Intelligence (AI) art generation to inspiration, I am infuriated to no end. To quote “John Rosemond: Why you shouldn’t high-five a child,” “Arrrrrrggggggggghhhhhhh! Will you please just stop doing that, please? Every time I see it, I want to scream, and I’m not an emotionally hyperactive person.” While Rosemond had far more niche (and, frankly, wrong) opinions in his article, his words of anger hold up in matters that are far more frustrating.
I will admit that AI art generation is a process that I only sort of understand. Large AI art generators such as Midjourney, DALL-E 2 and Stable Diffusion were trained on billions of images, each paired with a text description. Rather than storing those images and blending the ones that match a prompt, the models learn statistical patterns that link words to visual features; when someone inputs text, the model generates a brand-new image that fits those patterns, in a matter of seconds. Put a little more simply, if you input "dog with sunglasses," an AI draws on the broad similarities it learned from pictures of dogs, pictures of sunglasses and pictures of dogs wearing sunglasses, and then attempts to fill in the fine details as best it can, which can often produce pretty accurate results.
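For readers who want a slightly more concrete picture of the process: diffusion-based generators like Stable Diffusion start from pure noise and repeatedly "denoise" it, nudged at each step by what the model learned about the text prompt. The tiny sketch below is a deliberately toy illustration of that loop, not real model code; `toy_denoise` and its fixed "guidance" target are hypothetical stand-ins for what a trained network would actually predict from a prompt.

```python
import numpy as np

def toy_denoise(target, steps=50, seed=0):
    """Start from random noise and step toward `target`.

    In a real diffusion model, `target` would not exist as a fixed
    array; the network would predict, at every step, which noise to
    remove, conditioned on the text prompt. Here we cheat and simply
    move a fraction of the way toward a fixed pattern each step.
    """
    rng = np.random.default_rng(seed)
    image = rng.normal(size=target.shape)  # begin as pure noise
    for _ in range(steps):
        image = image + 0.1 * (target - image)  # one "denoising" nudge
    return image

# A prompt like "dog with sunglasses" would be encoded to steer the
# steps; here the guidance is just a flat 4x4 pattern of ones.
target = np.ones((4, 4))
result = toy_denoise(target)
# The output converges toward the guided pattern rather than being
# a copy of any single "training" image.
print(np.abs(result - target).max())
```

The point of the toy version is the structure of the loop: nothing from a source image is pasted in, so the final picture is shaped by learned guidance, which is exactly why the "blending" metaphor misleads.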
My knowledge of AI art generation is limited partly because it is an incredibly complicated process, which is why it was perhaps inevitable that people would compare parts of the process to human experiences, such as treating an AI's image sources as pieces of inspiration for its art. In a similar vein, whenever someone touts the large amount of image data that these AIs rely on, inevitably the words "training" or "learning" come up, as if DALL-E 2 is hitting the weight room every morning or flipping through flash cards the night before a big test.
It’s not necessarily an unhelpful metaphor to compare AI art to human experiences like inspiration, but I’ve found that it’s often a misdirection for people to think of AI art in these terms. Tech companies would like you to find some humanity in their products; a big ugly Amazon container then becomes “Rafa” and art generators become artists themselves.
I want to be clear to a pretty obvious extent: AI art programs are not human—not even a little bit. They do not approach art in the same way as humans. The generation process isn’t like hip-hop sampling, and it’s not like collaging. It is a much less poetic and much more unnerving process.
I mentioned earlier that AI art generation can be quite accurate, but there are a lot of things that AI art, in my experience, seemingly always gets wrong. Generative art has trouble rendering the right number of fingers on people, producing any sort of legible text within an image or properly articulating basic human or animal anatomy. As Towards Data Science explores, inputting "circle" into Midjourney creates images that are anything but simple. While these might appear as laughable quirks, they point to the ways that AI art is significantly different from human art. These are somewhat trivial examples, but they hint at much more insidious instances. There are many ways in which human-centric rhetoric, especially in the case of inspiration, lets AI companies off the hook for copyright issues and excuses them from the moral implications of generative art.
When a person makes art inspired by an image and then sells it, they usually aren't liable for copyright infringement, especially if they have at least some grasp of copyright law. When an AI is "inspired," it often makes images that point very clearly to their source material. According to The Verge, images from Stable Diffusion occasionally appear with a wobbly but distinct Getty Images watermark (and Getty Images is in the process of suing over Stable Diffusion). In the case of the AI art app Lensa, generated images often include scrawled marks in the corner, mimicking the real artists' signatures in the source images. Generic inputs, such as my aforementioned "dog with sunglasses" example, will probably not create images that violate copyright law, but a hyper-specific input that invokes a particular artist's style is much more likely to produce images that do. The reality, though, is that given the complications of the generative art process, no one really knows how a court would rule until it happens, per The Verge.
A recent lawsuit filed by a trio of artists against major AI art companies has shed light on this issue as well as the many frustrations artists are having with the medium, per NPR. For the artists in this lawsuit and many more artists, their art is fed into a machine without their consent, and that machine will inevitably take revenue from them. Ignoring future copyright rulings for a second, why would a newspaper pay for an illustrator when they can input text into a generative art model and tell it to make an image in the style of that illustrator for far less money and in a fraction of the time?
Outside of commercial interests, someone could generate offensive images in an artist's style, or art that the artist simply doesn't agree with. People have always had the ability, to some extent, to copy or remix someone else's art, but generative art has made this process so much easier. While an artist can always evolve their style or change it completely (something AI can't as easily exploit), the production of generative images in an artist's style creates a scenario where the artist is constantly competing against their former work and can never feel comfortable settling into a particular style. If people treat the AI art generation process simply as being inspired by its source images, they will completely miss the point about its unprecedented power over the artists who contribute to its existence. If AI art companies let artists opt their work out of training, or compensated artists financially when their work is used in image generation, artists might feel more comfortable with the new medium. But for most companies, neither of these changes is likely to be made, especially without legal rulings.
National legislation that could curb AI's ability to dominate artists and flout copyright law will inevitably lag far behind the technology's advances, partly because of our representatives' failure to understand even basic recent technology. It might be relevant to remember that, between the viral moments of Mark Zuckerberg sipping water, quite a few members of Congress at that 2018 hearing were completely unaware of how a major social media network like Facebook generates money, per Vox.
Despite all of this, the saddest part about AI art for me is that this should feel like the moment to celebrate an incredible piece of creative technology, or at the minimum, a funny internet tool. But instead, because of what seems like incredibly myopic thinking from a plethora of tech startups, we're left with pretty much every possible pitfall of software that depends on large swaths of copyrighted data. It's hard for me not to view AI art generators as a large, shadowy creature of unknown size or shape, existing in a strange space between traditional art tools and anything remotely human.
[DISCLAIMER: I generated these accompanying images either while gathering info for this article or while working on a project for a digital art class in the fall semester. They are in no way intended to substitute the work of the Misc Graphics staff.]