The list of reasons to be suspicious of generative A.I. is long, and it feels as if it gets longer every day. Companies use it as an excuse to lodge extra spyware into our devices; it gets a ludicrous amount of funding despite being, in essence, a really bad secretary; people keep saying it'll get better, but it hasn't and shows no signs of doing so; it's measurably contributing to climate change. I could go on. But today, I want to focus on a specific topic, and that's the use of generative A.I. to spread dis- and misinformation.
Before we get into it, a brief note on terminology: disinformation refers to false information that the party sharing it knows to be false and spreads with malicious intent. Misinformation, on the other hand, is shared without that knowledge. For example, RFK Jr. falsely claiming that vaccines cause autism is disinformation, because he knows the claim is false and is spreading it to dissuade people from seeking immunizations for their children, which will lead to sickness and death. However, if a relative hears RFK Jr. make this claim, sincerely believes him because of his position, and repeats it to others, that relative is spreading misinformation. They believe it to be true, and are sharing it as they would any other piece of pressing knowledge. This distinction matters for generative A.I. because it describes two separate but related problems, with one demonstrating the ease of the other.
In terms of generative A.I., we'll be describing misinformation as data acquired from honestly shared false information in the vein of common misconceptions. Generative A.I. combs the entirety of the web and spits out whatever has cropped up often enough to seem like the commonly agreed-upon answer. It doesn't think; it regurgitates. This makes it a potent tool for spreading common misconceptions and other seemingly benign but ultimately harmful pieces of misinformation. It does this for the same reason you probably do: it doesn't know any better. In its case, it literally can't. If enough sources seem to say that goldfish have short memories, then it must be true, even though research shows that goldfish can actually remember events from several months past.
While this may seem like a minor gripe, it is actually rather urgent given the use of generative A.I. in search summaries and by students. We've been taught to take the internet as gospel. If we don't know something, we look it up. And for some, that has evolved into: if they don't know something, they ask generative A.I. Research has shown that once someone learns a piece of false information, it becomes harder for them to accept the correction. That becomes harder still when the false information came from a source they've been taught to trust their entire lives. Even if generative A.I. errs only a tiny percentage of the time (say, 0.001%, or one out of every hundred thousand searches) consider that search engines such as Google receive billions of queries every single day. At that rate, billions of daily queries would still produce tens of thousands of errors, every single day. Are you seeing the problem yet?
The sheer scale of generative A.I. is what makes it such a misinformation disaster. But that scale also makes it a dangerous weapon for spreading disinformation, which, in terms of generative A.I., we're describing as information spread with the express intent of poisoning a generative A.I.'s dataset and influencing its responses. This is a known and pressing issue: networks of user-barren websites, such as the Russian-operated Pravda network, publish millions of articles each year with the sole purpose of planting propaganda in the mouths of generative A.I., to be regurgitated to and spread by unwitting users. It enables the spread of disinformation on an enormous, incalculable, and virtually untraceable scale.
This scares me. Frankly, if it doesn't scare you, we must be such vastly different people that I can't even understand why you started reading this article in the first place. Information is the basis of democracy. With the slow death of newspapers, the internet is many citizens' sole source of knowledge. And as the internet is increasingly overrun with A.I.-generated nonsense, the single fount of news and learning for many people now runs with poison. The basis of democracy itself is being filled with poison.
We all need to be afraid, and we all need to take a huge step back from generative A.I. Our future may depend on it.