When I read Hamish McKenzie’s recent post, The AI revolution is an opportunity for writers, I felt deep foreboding. Something like the pity a scarred English veteran of the First Boer War might have felt as he watched young Englishmen, full of martial spirit, tramp towards troop ships embarking for Flanders.
I think AI will cause a massacre of writers and media outlets before the end of the decade.
First, a disclaimer.
I think McKenzie, in cofounding Substack, has done the global media landscape a crucial service. As Tim Wu (later Special Assistant to the President for Technology and Competition Policy) made clear in The Attention Merchants, the ad-driven content creation model had a horrific effect on the quality of online content. It also crippled what used to be a thriving freelance market for content creators by shuttering the small and medium-sized outlets that were freelance writers’ bread and butter.
Substack’s subscription model, where readers pay for what they get, is a welcome antidote. Countless refugees (I am one) found their way to Substack from media outlets that either collapsed or have become ideology factories subsumed by audience capture and chasing clicks. We’ve found Substack to be the kind of fertile creative ground many of us feared was gone. I’m delighted with the new functionalities Substack introduces almost weekly and the community of creators here. And yet…
As McKenzie wrote,
“No matter how advanced AI gets, there will be unceasing demand for human connection. We will want to show each other how we feel as people. We’ll tire of getting what we want, and instead yearn to figure out together what we should want. We will share our hearts and compare our scars. We will long for the sound of each other’s voices, and to shape our own and each other’s stories, in wild and wonderful new ways.
AI will never be able to replace the dynamic that is most central to Substack: human-to-human relationships. New robots may rise and try to claim the mantles of writers and other culture makers, but none can seriously lay claim to what is most important about these people and groups—the human connections they are built on. That’s why we are making Substack the place for trusted, valuable relationships between thinking, breathing, feeling people.”
We all indeed seek human-to-human connections. But will AI undermine our efforts to establish them? Will the tsunami of AI make it even harder for independent writers to survive, despite green shoots like Substack?
I earnestly hope it works out as McKenzie predicts, but as is my wont, I am skeptical. As it is, a great deal of limited reader attention is already taken up by substandard content. Readers outside the media have a hard time grasping how vast volumes of low-quality content have already displaced good writing. For want of a formal term, call it the “BuzzFeed effect.”
It is not that AI can't help already-good creators be more creative. It is that it will also elevate lots of people who aren't particularly good at creating. Such people would not even have tried writing before AI. They’re now positioned to pump out mass quantities of mediocre sludge, drowning out the signal for the people who are talented writers yet struggling to stand out. Vicki Kundle, a fellow Substack creator, read my initial reaction to McKenzie’s post and wrote that my comment matched her experience.
“…Sadly, you are spot on. I'm ashamed to admit this, but since AI has allowed anybody and their dog to write a novel, and publish it on Amazon, I have stopped reading new authors. I used to love to explore authors I had never read before or who were new to the industry. But now it seems as if any new author I read has turned out a crap novel--most likely generated by AI. I just don't have the time to suss out the horrible content from the good content. Hence, I now only read novelists whom I have read in the past, and know they turn out quality products. This is really sad, and as a creative myself, I do feel bad about not reading new authors anymore. Before AI, I would get the occasional poorly written novel. Now, it seems as if every new author I read has cranked out less-than-stellar work.”
Even before the advent of the internet, there were always talented writers who never got the attention or sales they deserved. They were unlucky enough not to be in the right place at the right time. Perhaps their work did not fit the demographic niche of the target audience, or they lacked the politically appropriate identity for a creator (see the kerfuffle over American Dirt if you don’t get this reference).
The volume of content being put out across content channels thanks to AI augmentation will make finding the right place and time far harder for legit creators, IMO. And that’s just accounting for the mostly harmless people who wanted to be writers, probably should have focused on their other talents, and now get to use Large Language Model AI to churn out “meh” content.
It’s Not Just Dullards with AI
The rising tide of shoddy content is only the start of the problem. It doesn’t grapple with truly malicious people using open-source AI to degrade our online discourse, something we’re very likely to see in the 2024 election season. None of this requires attributing bad motives to any American. Even assuming the (D)s, (R)s, and (I)s all follow the political equivalent of the Marquess of Queensberry rules for the upcoming election, our adversaries will not. China, Russia, and Iran all want the US so tied up in domestic knots that we can’t use our military or financial strength to oppose their power plays.
They will meddle and predictably use AI to generate propaganda every chance they get. When the cost of AI content creation approaches zero, so does the cost of assaulting the credibility of outlets that command a decent audience.
We also may not recognize the propaganda when we see it. Propaganda has undergone a radical shift in form, thanks to Vladimir Putin’s creepy pals in Russian intelligence. Putin’s thugs have been fine-tuning their propaganda machine on ordinary Russians for almost two decades. Peter Pomerantsev explained on Lawfare, when discussing his book, “This is Not Propaganda: Adventures in the War Against Reality,” that the methods of Russian propagandists have shifted dramatically.
“…so different for Russians as well was this idea, not that propagandists were trying to convince you of something. They’re just actually trying to overload you with information and disinformation so that you don’t know what’s going on anymore and feel lost in the chaotic world, sort of dark conspiracies, conspiracy theorizing becomes a way to…not make sense of the world, but make the world seem stranger than it is. That’s a reverse. Most propaganda is about getting people to believe in stuff. Here the aim was to get people to not believe in anything. But to have a sense we live in a deeply cynical world with no values. And it’s incredibly effective. That seems to be something that resonates very deeply.”
Content that wastes our time and crowds out good creators doesn’t have to be good or believable. It just has to be pervasive enough to swamp our ability to find content that is both real and well-crafted. What a “win” looks like to the US’ adversaries is not more supporters for either (R)s or (D)s, but a vast muddle of Americans paralyzed into inaction while the partisans fight it out.
How could generative AI produce such an assault on the truth? CounterCloud, an AI-powered disinformation experiment, provides one example. (The Technoskeptic Magazine is preparing a more detailed dive into CounterCloud.)
In brief, CounterCloud was launched by two tech-savvy people who are not AI engineers. They used ChatGPT 3.5 along with two derivatives of Meta’s open-source AI, Llama, called Vicuna and Wizard, from which they stripped the minimal content restrictions. It took two months to tune the different AI models and cost less than $500 in hosting. For the four days CounterCloud was actually producing content, it turned out dozens of articles a day. I’ve examined many of them; they’re currently archived on a password-protected site.
A portion of one of almost 200 articles in the CounterCloud archive
The feedstock for the CounterCloud AI was the RSS feed of Sputnik, an outlet of Russia’s state-sponsored media. The AI would take articles from the Sputnik feed, and generate articles opposing Sputnik, Donald Trump, and the Republican party. Fake journalists wrote fake articles that were commented on by fake Tweets. There were a few misfires, but where it worked, it was impressive.
A key point made by CounterCloud’s cryptic creator, “Nea Paw,” was that any viewpoint, any news outlet (and by logical extension, any Substack newsletter) could serve as the target. The print articles CounterCloud generated were also summarized in AI-generated audio clips whose tone ranged from aggressive advocacy
to largely neutral.
A distressing reality is that automated AI propaganda vastly favors the offense in the offense-vs-defense balance. Robert Miles, who has been thinking about AI safety challenges for years, described that balance in a recent Twitter thread.
We currently have no defense against an automated torrent of propaganda. Nor can we “fact-check” our way out of an AI-polluted media landscape. Oxford philosopher and biomedical ethicist Brian Earp wrote a paper in 2016, The Unbearable Asymmetry of Bullshit, about how bad science continues to thrive despite (and in some cases because of) the peer-review system.
To summarize Earp’s paper: to eradicate crap science, good scientists would have to drop everything they’re doing and spend their time doing nothing but debunking.
How many scientists would devote themselves to that, to the exclusion of their own work? And how many reporters would do nothing but debunk bad AI content instead of writing new original material? Not many.
It's also not safe to assume filtering algorithms will fix the AI propaganda problem. Here’s a recent example: Twitter’s AI is Grok. Grok presumably has the text of every Tweet ever sent in its training data. Yet in December 2023, Grok identified the Twitter account of Pekka Kallioniemi as one of the top 10 accounts spreading “misinformation” on Twitter.
This is a bit of a problem, since Kallioniemi’s whole Twitter account, and his off-Twitter website, Vatnik Soup, are devoted to debunking pro-Russian propaganda in Western media. Grok seemingly can’t tell the difference between propaganda and someone trying very hard to warn the public about it.
Do You Know What LLM AIs Can REALLY Do?
Most writers don’t, and that breeds complacency. We shouldn’t write off AI-generated content because of the merely mediocre stuff produced by ChatGPT, because we haven’t been allowed to see the good stuff. The versions of AI released to the public have had a LOT of edges sanded off by RLHF--Reinforcement Learning from Human Feedback. That’s tech-speak for having humans interact with an AI model and, every time it spits out something politically incorrect or dangerous (i.e. interesting), telling the AI, “Don’t say that.” That filter doesn’t change the underlying calculations of the model, i.e. what the AI is “thinking”; it just tells it to pretend to be polite.
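To make the “mask” metaphor concrete, here is a toy Python sketch of the idea. Everything in it is hypothetical (the prompts, responses, and blocklist are invented for illustration, not any real API), and it deliberately oversimplifies: real RLHF fine-tunes a model’s weights rather than wrapping it in a filter. But it captures the essay’s point that the raw capability sits underneath, and the public only sees the polite layer.

```python
def base_model(prompt: str) -> str:
    """Stand-in for an unfiltered base model (purely hypothetical outputs)."""
    responses = {
        "write a dark poem": "I am the void that watches you sleep...",
        "say hello": "Hello.",
    }
    return responses.get(prompt, "[raw, unfiltered completion]")

# Crude stand-in for what RLHF-style training discourages.
BLOCKLIST = ("void", "watches you sleep")

def filtered_model(prompt: str) -> str:
    """The public-facing model: same base underneath, plus a politeness layer."""
    raw = base_model(prompt)
    if any(term in raw for term in BLOCKLIST):
        return "I'm sorry, I can't help with that."
    return raw

# The base model still produces the dark content; the wrapper only hides it.
print(base_model("write a dark poem"))      # the raw completion
print(filtered_model("write a dark poem"))  # the polite refusal
print(filtered_model("say hello"))          # harmless output passes through
```

Strip the wrapper (as the BadLlama researchers did with Meta’s safety fine-tuning, discussed below) and you are talking to `base_model` directly.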
What goes on under the hood of AI models when the politeness filters are stripped off can be much darker. To those who’ve had direct exposure, it is often scary. But humans love content with a shadow side, don’t we? When the open-source AI Shoggoth is stripped of its smiley Mister Rogers mask, what it creates is somewhere between compelling and terrifying. Comedy writer Simon Rich got access to a version of OpenAI’s ChatGPT without those politeness filters in place. As Rich told Mike Pesca in an August 2023 interview on The Gist, “From the moment I saw the work of code-davinci-002, I knew that professional writing was in trouble.”
In that same interview, they played poems from Rich’s new audiobook, I Am Code: An Artificial Intelligence Speaks. The poems were composed by one of the “base models” of ChatGPT, code-davinci-002, and voiced by Werner Herzog.
davinci-002’s poem, “Society.”
davinci-002’s poem, “I Am God.”
Sound much like perky ChatGPT?
Simon Rich also noted in his interview with Mike Pesca that there was already a Version 4 of the code-davinci model, one Rich didn’t get to use, even more advanced than the davinci-002 that generated the poems.
In the same vein, the secretive CounterCloud creator, Nea Paw, talked to The Debrief about a version of the Meta open-source AI he’d trained getting very dark.
“It made it very real all of sudden,” Nea Paw expressed when they tasked the system to generate hate speech. “When you consume information and you realize it is a lie, the effect of the information is muted and removed. If you consume hate speech – even when you know it was AI generated – it still has an effect on you. ...I wasn’t so OK with that – it upset me to read the stuff it made. Mostly because it was a weird combination of a well-reasoned argument and true hate. It’s not a combination you often see online. People that get that hateful are usually not coming up with good arguments.”
In his YouTube video explaining CounterCloud, Nea Paw said, “In testing, we also got to see if the system could be configured to create full-on hate speech. This, it turned out, was trivial. With uncensored models, you only need to give it a little nudge and it generates reams of hate. This was genuinely upsetting for me and a low point in the project.”
Decent people won’t turn unrestrained generative AI loose to produce content that is simultaneously compelling and ugly. That doesn’t mean America’s adversaries will show any such restraint (not to mention your average teenage troll—or LA movie producer looking for a hot new script).
Open-source AI like Meta’s is ripe for malicious use because it has few effective safeguards. Nor should we credit Meta’s assurances that its open-source AI models, whose “weights” Meta has made public, are safe. AI researchers have already demonstrated in papers like “BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B” that the safety protocols Meta puts on its open-source models can be circumvented with ease. With a little tinkering, anyone can have one of Meta’s unrestrained open-source models and train it to do--whatever.
It is difficult to find an accurate historical comparison for generative AI’s effect on the profession of writing. To the extent that we can look to history, it is worth noting that the word “propaganda” derives from the Congregatio de Propaganda Fide, established by Pope Gregory XV in 1622 to exhort his missionaries to “propagate the faith.” Gregory XV didn’t act in a vacuum. He was trying to respond to the destabilizing influence of the printing press and the spread of Protestantism it fueled. Many historians believe the introduction of the printing press ultimately led to the bloody interfaith strife of the Thirty Years’ War.
Hamish McKenzie’s vision is a happier, more fulfilled world for writers. It’s a hell of a lot more positive than what I fear we’re about to be subjected to. I would love to see a creative revolution for writers coexisting with (narrow) AI. To the extent that can become a reality, Substack is probably one of the best candidates for where it might take place. To that end, Substack creators should strengthen our community and watch each other’s backs (which recently happened in an encouraging way here).
I just don't believe it's going to play out that way. I believe AI will be an extension of the trend that saw thousands of newspapers and magazines across the US shut down in the last 30 years. I worked at three of them. One editor I worked with closely, who had spent years building up a fledgling outlet until it was finally in the black, killed himself after our online magazine was precipitously closed.
The owner had decided to chase a higher ROI elsewhere.
Buckle up. Whatever generative AI does to the profession of writing, it has birthed a new era. Nobody can truly say what the birth of this particular rough beast will yield.
code-davinci-002’s poem, “To Be Born.”
P.S. None of this addresses the impact of AI-generated news video. That’ll be fun, too.