It’s only been a day since ChatGPT’s new AI image generator went live, and social media feeds are already flooded with AI-generated memes in the style of Studio Ghibli, the cult-favorite Japanese animation studio behind blockbuster films such as “My Neighbor Totoro” and “Spirited Away.”
In the last 24 hours, we’ve seen AI-generated images representing Studio Ghibli versions of Elon Musk, “The Lord of the Rings,” and President Donald Trump. OpenAI CEO Sam Altman even seems to have made his new profile picture a Studio Ghibli-style image, presumably made with GPT-4o’s native image generator. Users seem to be uploading existing images and pictures into ChatGPT and asking the chatbot to re-create them in new styles.
Ok, now I’ve finally come to a conclusion about this debate. When a human learns to draw or write in a particular style, there are no copyright issues. However, when a machine does the same, you need to compensate the people who made the training data. Here’s why.
The training data is an essential component of the model. It’s like building a house with bricks you didn’t pay for. Whether you’re building a house, a ship, software, or a machine learning model, you need to pay for the materials required to build it.
I agree with tackling this issue intuitively, because humans, like other animals, have a basic sense of injustice, and it’s setting off all kinds of alarms right now. We have already dealt with this - it’s called fair use. Machine processing of someone else’s art for commercial purposes will never be fair use.
I’d like to add that machine learning is not learning, just like a network firewall is not a wall and doesn’t protect against fire. Lending the same legitimacy to machine learning as to true learning is an equivocation, a fallacy.
It’s even simpler than that: in the first instance, a human learned a thing. In the second instance, a bunch of humans wrote software to ingest art and spit out some Frankenstein of it. Software specifically designed to replace artists, many of whom likely had their art used as input to said software without their consent.
In both cases humans did things. The first is normal, the second is shitty.
Our current AIs are kinda pathetic, and might realistically only replace mediocre artists. However, many people who buy art can’t tell the difference between good art and mediocre art, so the financial impact could be felt by a larger number of people.
It’s a bit like comparing factory-made clothes to properly tailored ones. We still have both, but machines have clearly won this race. Besides, only very few people appreciate tailored clothes enough to actually pay for them. Most don’t, so they wear cheap, lower-quality clothes instead. I think the same will happen to music and paintings too.