AI-generated images have become a new form of propaganda this election season

An image that was likely created with artificial intelligence tools that purported to show a young survivor of Hurricane Helene. The image got millions of views online, even after its provenance was questioned.

After images of the devastation left by Hurricane Helene started to spread online, so too did an image of a crying child holding a puppy on a boat. Some of the posts on X (formerly Twitter) showing the image received millions of views.

It prompted emotional responses from many users, including Republicans eager to criticize the Biden administration’s disaster response. But others quickly pointed out telltale signs that the image was likely made with generative artificial intelligence tools, such as malformed limbs and the blurriness common to some AI image generators.

This election cycle, such AI-generated synthetic images have proliferated on social media platforms, often after politically charged news events. People who closely watch online platforms and the election say these images are a way to spread partisan narratives, with the underlying facts often beside the point.

After X users added a community note flagging that the image of the child in the boat was likely AI-generated, some who shared the image, like Sen. Mike Lee (R-Utah), deleted their posts about it, according to Rolling Stone.

But even after the image’s synthetic provenance was revealed, others doubled down. "I don’t know where this photo came from and honestly, it doesn’t matter," wrote Amy Kremer, a Republican National Committee member representing Georgia, on X.

"It's a form of political propaganda, a way to signal interest and support for a candidate, almost like in a fandom kind of style," said Renée DiResta, a professor at the McCourt School of Public Policy at Georgetown University, who recently wrote a book about online influencers. "The political campaigns then can pick it up, can retweet it, can boost it and are then seen as being sort of in on the conversation, maybe in on the joke themselves."

Other images likely created by AI that depicted animals on roofs barely above flood water spread after Hurricanes Helene and Milton. After former President Trump and his running mate JD Vance amplified baseless claims about Haitian immigrants in Springfield, Ohio, eating pets and wild animals, AI-generated images of Trump cuddling cats and ducks flooded X and other social media platforms popular with Republicans.

Generative AI is one more tool for supporters to interact with their campaigns online, said DiResta. "It's cheap, it's easy, it's entertaining, so why wouldn't you?"

Truth versus facts in images

In the same post defending her decision to keep the synthetic image up, Kremer also wrote that "it is emblematic of the trauma and pain people are living through."

The separation between facts and the idea of a deeper truth has echoes in Western philosophy, said Matthew Barnidge, a professor who researches online news deserts and political communication at the University of Alabama. "When you go back and dig through the works of Kant and Kierkegaard and Hegel, [there’s] this notion that there is some type of deeper truth, which often gets associated with something along the lines of freedom or the sublime, or some concepts like that."

To be clear, research suggests that when individual fact checks pile up against politicians, they can change how voters feel about them. One study showed that fact checks changed how Australians felt about their politicians. But another study showed that fact checks of Trump did not change Americans’ views of him, even as they changed their beliefs about individual facts.

Fact-checking images can be trickier than fact-checking text, said Emily Vraga, a health communication researcher at the University of Minnesota. "There are a lot of studies showing that people have a very hard time knowing what is real versus not when it comes to online imagery," Vraga said. "This was true even before ChatGPT."

Arresting visual images can evoke emotions before people have time to process what they are seeing. A team of researchers looked at Pinterest posts portraying vaccine needles, including a misleading image that showed an extra-large needle and brightly colored fluid.

"The needle is much smaller - that's not like a super convincing correction." said Vraga, "It's part of the larger narrative that vaccines are unnatural and dangerous."

Hyper-realistic, often uncanny AI-generated images may live in a gray space between fact and fiction for viewers. While a photorealistic image of pop star Taylor Swift endorsing Trump was clearly not Swift on closer inspection, the passing resemblance had an impact on people who saw it, said New York University art historian Ara Merjian. "It wouldn't have been a scandal if someone had drawn Taylor Swift in a comic endorsing Trump."

The pop star cited the AI-generated images and "misinformation" as one reason for her endorsement of Vice President Kamala Harris for president.

Driven by money, AI slop overwhelms online spaces

The AI images have also partly filled the space that legacy news media left behind as the news industry has shrunk and tech platforms have deprioritized news. "Who's moving into that space? Propagandists," said Barnidge. "Propaganda often presents itself not as news, but kind of seeps in in other ways through lifestyle content."

Politically inspired images are just a fraction of the AI-generated images on social media platforms. Researchers have spotted AI-generated cakes, kitchens and shrimp-like Jesuses rising out of the sea. Some led to websites vying for traffic; others tried to pry personal information and money from viewers.

An investigation by 404 Media found that people in developing countries are teaching others to make trending posts using AI-generated images so Facebook will pay them for creating popular content. Payouts can be higher than typical local monthly income. Many of the images created by these content farms evoked strong, sometimes patriotic emotions. Some images looked realistic, others were more artistic.

An AI-generated image purporting to show Vice President Kamala Harris wearing communist garb. The image was shared by X's owner, Elon Musk, with millions of his social media followers in August.

Dangers to the election

One of the more striking AI-generated images related to politics was boosted by X’s owner Elon Musk. It portrayed someone resembling Harris wearing a red uniform with a hammer and sickle on her hat.

Eddie Perez, a former Twitter employee who focuses on confidence in elections at the nonpartisan nonprofit OSET Institute, said the image is meant to portray Harris as un-American and to send "the message that there is no legitimate way that Kamala Harris and her party could actually win a presidential election."

Images like these are fanning political polarization, which Perez said could undermine people’s trust in election results. For months, Republicans have suggested Democrats are likely to steal the election from Trump through various kinds of subterfuge.

"There are many, many different ways and modalities that that strategy is being implemented. Generative A.I. is only one of many different tools in the toolkit, so to speak. Do I think that it is singularly bad or worse than many of the others? No. Does that mean that it's benign? No," said Perez.

Copyright 2024 NPR

Huo Jingnan (she/her) is an assistant producer on NPR's investigations team.