[One in a series of occasional posts on technology trends, posted on Thursdays.]
Online disinformation is something I have a lot of experience with, unfortunately.
Here’s the short version: when we, a team of government researchers, started looking at what we might be able to glean from social media back in the mid-2000s, disinformation wasn’t even on our radar. (I know! Shocker!) Then one day, one of the outside researchers walked into my office and said he thought he was seeing something new: bad actors on Twitter working in concert to spread lies. And he thought they were Russian.
This researcher was the expert on Russian activity on the internet. He had about ten years of tracking Russian internet activity under his belt by this time, so, yes, I believed him. Turns out he was right.
All this is well known—now. At the time, it was a revelation. Using a variety of analytic techniques, we worked to verify that his suspicion was indeed the truth. I’m going to spare you the details, but one of the important things to remember is that the bad actors’ activity is fluid. Very fluid. They learn from their successes but mainly from their failures, and they evolve and adapt. The good guys have to track, scramble, and regroup. It takes a lot of effort and time to prove your suspicions, and by then the bad guys have moved on. That makes it hard to convince the powers that be to take those suspicions seriously.
And by the time they did, it was too late. The bad actors and their capabilities in the online space have metastasized like cancer, impossible to excise.
I bring this up because we’re at a similar inflection point with generative AI.
As soon as ChatGPT was introduced, people pointed out that it would be a useful tool for disinformation. Without AI, a human has to come up with the ideas to propagate. Individuals on a team tasked with creating disinformation might be given, let’s say, a target of 50 false ideas a day. Those ideas would be fed into the team’s bot army, bounced around social media, “liked” and “reposted” by bots and a few unwitting humans to see which false statements got the most traction. Those statements would then be amplified, repeated over and over, augmented with memes and fake news stories planted at fake news sites, until they started to be reflected outside disinformation circles.
Imagine how fast AI can generate false ideas. It will never get a mental block, never tire. It doesn’t have to be original. It will just iterate, endlessly.
Not only that, but genAI will make it easier to figure out which stupid lie will resonate most with the general public, and what kind of false persona needs to be attached to it to tug at our heartstrings and make us more likely to swallow the falsehood.
The Atlantic ran a great story about this recently. (That’s a gift link; you can read it without hitting the paywall.) A group of Swiss researchers conducted an experiment on a subreddit, r/changemyview. They used genAI to generate ideas about societal issues that were open to debate (pit bulls, the housing crisis, DEI programs: anything controversial). They even generated backstories to go along with the ideas they posted, claiming to be trauma counselors or victims of rape in order to be more persuasive.
When the researchers analyzed the results (and I’m not going to go into the methodology here, for reasons discussed below), they found that they were able to change “a surprising” number of minds on the subreddit. The personalized posts received higher scores on Reddit’s voting system than those of nearly all human commenters.
Here’s where things start to go off the rails. The researchers, associated with the University of Zurich and who, one assumes, had their plan cleared by their advisors, did not tell Redditors in advance that this was an experiment. In other words, no one participating in these discussions knew. At the end of the experiment, the researchers told the subreddit’s moderators what they had done, and the sh*t hit the fan.
I won’t go into all the back and forth here (please read the article for the details). Please know that there are guardrails for research involving human subjects, and I’m sure the University of Zurich carefully considered the project under whatever rules apply in Switzerland. The most important matter, to my mind, is that the researchers felt the experiment would not work if the participants knew the content would be AI-generated. “Deception was integral to the study,” the researchers said, and they’re right, because this is how we encounter disinformation in real life. It passes itself off as someone’s authentic opinion. It passes itself off as fact. Like all lies, it is designed to manipulate. (Could they have devised a method that was both ethical and as effective? Probably. Who knows. It’s water under the bridge at this point.)
One sad result of this botched project, however, is that the researchers will not be publishing their findings, which means other researchers will not be able to examine their data or methods, work that would have informed many other projects. I hope this debacle doesn’t slow or prevent future research, but it can’t help but be a cautionary tale.
Which leaves me to repeat what I believe is the important bottom line: we need protection from abusive uses of genAI. We need oversight, policy, regulation. And it has to come from government or a quasi-governmental body, because industry will not police itself and companies sure as hell won’t coordinate with each other. Each company is fighting to become the dominant force in genAI, which translates to making the most money for its shareholders, and doesn’t give a damn whether it’s creating a hellscape of propaganda in the process. We reward these companies with big IPOs. We buy or use their products and don’t think about the ways they’re exploiting us. Politicians support big tech because big tech donates to them, and because big tech aligns nicely with politicians who are, shall we say, unencumbered by ethics.
We need to stand up for ourselves and use our power before we lose it.