Last month, The Technoskeptic crossposted an article by Gary Marcus demonstrating that current AI is not truly generative, but rather a thin veneer slapped on copyrighted work that its makers claim is not copyrighted.
AI companies and investors have been balking at licensing original works because it would be very financially inconvenient, per Chris Middleton's November 2023 Diginomica article. Middleton recounted statements by the AI venture capital firm Andreessen Horowitz.
“…another part of the AI industry said the quiet part out loud this month. In a written note, VC investors Andreessen Horowitz (A16z) went as far as claiming that the billions of dollars plowed into AI companies have been ‘premised on an understanding that, under copyright law, any copying necessary to extract statistical facts is permitted… Under any licensing framework that provided for more than negligible payment to individual rights holders, AI developers would be liable for tens or hundreds of billions of dollars a year in royalty payments.’

Yup, there it is, folks: in black and white. The context for those comments was the US Copyright Office’s own call for evidence, during which OpenAI, Meta, and others admitted they would have to figure out how to pay copyright holders. This would impose an intolerable burden on them, they claimed.”
If you listen to AI companies, the only way AI can be built is if they are given carte blanche to steal copyrighted art, text, and everything else to train their models. Not only are they not compensating past or current creators; the AI they are building is designed to put future creators out of work.
Every bit of financial value from creative work gets funneled into Silicon Valley.
After all, why SHOULD people with the creative impulse be immune from having their unique work shoved into the greedy maw of AI companies’ training data sets, merely because they made it and don’t agree to it being used that way? How Luddite is it NOT to allow multi-billion dollar companies to steal work they COULD pay to license, but don’t want to? (Btw, OpenAI used to make public what data their LLMs were being trained on. Probably just a coincidence that OpenAI stopped doing that a while ago.)
While the legal fight to stop AI companies’ IP looting spree is a worthy one, nobody who has created something and doesn’t want it stolen is thrilled at the idea of entrusting the rights to their own work to lawyers fighting over IP. Not when AI companies and their VC investors like Andreessen Horowitz have an almost bottomless well of money with which to hire lawyers and to bribe or intimidate politicians (say, by funding an opponent in a primary election).
Now, thanks to some very clever folks at the University of Chicago, visual artists have two free tools to fight back: Glaze and Nightshade. Artists can run their work through Glaze, which protects it from mimicry by making changes indistinguishable to the naked eye; what the voracious AI algorithms ingesting unlicensed work see, however, is a very different image.
“…a prompt that asks for an image of a cow flying in space might instead get an image of a handbag floating in space.”
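Roughly speaking, both tools exploit the gap between human vision and a model's internal feature space: a per-pixel change far too small for the eye to notice can still push an image a long way in directions the model is sensitive to. Here is a toy, hypothetical sketch of that principle only; the "image", weights, and "feature" below are invented for illustration, and the real tools optimize perturbations against actual neural feature extractors rather than a fixed weighted sum:

```python
# Toy illustration of imperceptible-but-effective perturbation.
# NOT the Glaze/Nightshade algorithm; all values here are made up.

def toy_feature(pixels, weights):
    """Stand-in for one unit of a feature extractor: a weighted sum."""
    return sum(p * w for p, w in zip(pixels, weights))

# A flat 1-D "image" of 10,000 pixels, all mid-gray (values in 0..255).
pixels = [128.0] * 10_000

# Hypothetical direction the "model" is sensitive to: alternating +/- weights.
weights = [1.0 if i % 2 == 0 else -1.0 for i in range(10_000)]

# Nudge every pixel by at most 1 step out of 255 -- invisible to the eye --
# but aligned with the sensitive direction.
eps = 1.0
cloaked = [p + eps * w for p, w in zip(pixels, weights)]

max_pixel_change = max(abs(a - b) for a, b in zip(pixels, cloaked))
feature_shift = abs(toy_feature(cloaked, weights) - toy_feature(pixels, weights))

print(max_pixel_change)  # 1.0: no pixel moved more than one step
print(feature_shift)     # 10000.0: huge shift in what the "model" sees
```

The point of the toy: the human-visible change is bounded by `eps` per pixel, while the accumulated effect across thousands of pixels, all pushing the same sensitive direction, is enormous. That asymmetry is what lets a cloaked image look unchanged to us while teaching a scraper's model something very different.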
So what does Nightshade offer (other than a pretty cool name, obviously)? The more recent of the two tools functions similarly to Glaze, but rather than an individual defensive measure against mimicry, its University of Chicago creators envision it as a tool for collective offensive action.
“Nightshade is an offensive tool that artists can use as a group to disrupt models that scrape their images without consent (thus protecting all artists against these models). Glaze should be used on every piece of artwork artists post online to protect themselves, while Nightshade is an entirely optional feature that can be used to deter unscrupulous model trainers.”
The more Nightshade is used to subtly poison training data, the less usable the AI models built on scraped IP will be. Like few other things in this world, the justice of Nightshade is perfect: AI companies can easily get usable training data by paying for the work they’re currently stealing. And if they opt not to, and their models happen to ingest massive quantities of poison because model trainers are addicted to theft, who is to blame?
The war by IP creators to protect their work has started. Glaze and Nightshade are a shield and a sword for artists, two free tools that don’t require creators to walk into a lawyer’s office and turn over the cash equivalent of the GDP of a small country to try to retain rights to their work.
If Technoskeptic readers have any visual artists in their lives, those creators might appreciate learning what Glaze and Nightshade are and what they can do. Think of it as a reverse fight club.
“The first rule of Nightshade is: tell every artist about Nightshade.”
For a book cover, I played around with an art generator to produce a simple image: just silhouettes of two people. I had to iterate and tweak my prompt and settings a lot to avoid 3D images, non-silhouetted ones, and results totally different from what I was after. One even hallucinated an extra body part. It took a long time and produced nothing useful. So I gave up and told my daughter to try it, and she rendered the prompt I texted her in less than an hour on her iPad. One more iteration and we were done.
And it pleases me greatly to credit my kid instead of that stupid software.
This is great to know. See a lot of examples of stolen AI images compared to originals at https://www.nytimes.com/interactive/2024/01/25/business/ai-image-generators-openai-microsoft-midjourney-copyright.html (paywalled).
One thing few articles about AI art discuss is how the ability to verbally specify a picture will affect artists themselves, beyond the IP issues. IMO it will lead many people to call themselves artists, flooding the market with images that haven't taxed their makers' creativity or technique. Making art will become the same as wandering through an art gallery, without the exercise.
When the bottom falls out of the market for lazy images, some "artists" may try to sell them as NFTs or in other novel ways, for there are sure to be fans of such crap, some of whom will realize they can do it themselves. Making them will be a lot more fun, but only marginally creative, and the practice will become a time-waster. Sitting around watching an AI render images will suck up even more screen-potato time than searching the Net does now.