The Monitor is a weekly column devoted to everything happening in the WIRED world of culture, from movies to memes, TV to Twitter.
Future generations will acknowledge the vibe shift. It happened last weekend, when all of a sudden social media feeds filled with images of Pope Francis, typically a pious and plain dude, looking like a boss in a sleek white puffer coat. It was instantly a meme, a LOL in a sea of bad news. It was also not real. Someone created the image using the artificial intelligence tool Midjourney. But it fooled a lot of people—so many that news outlets began calling it “one of the first instances of wide-scale misinformation stemming from artificial intelligence.”
Just typing that sentence feels haunting. Like the first time you see someone in a red cloak in The Handmaid’s Tale. Not that this portends dystopia. It was, after all, just one image of the pope looking fly. But what if it had been an image purporting to show a battlefield in the war in Ukraine? Or President Biden taking some kind of secret meeting? The possibilities of AI generating that kind of misinformation are daunting.
Getting scores of people to fall for a disastrous deepfake of Volodymyr Zelensky takes a little more work than misleading them with a goofy picture of a pontiff, of course. As Charlie Warzel pointed out in The Atlantic this week, everyone uses “different heuristics to try to suss out truth,” and it’s easier to believe that Pope Francis would wear a puffer than to believe that, say, those AI images of former president Donald Trump getting arrested are real. So it’s not hard to see why so many people just saw the pope images, giggled, and kept scrolling without questioning their authenticity.
But this does set a troubling precedent. The creator of the Pope Coat image wasn’t attempting to mislead anyone. In fact, he told BuzzFeed News he was just tripping on shrooms and trying to think of funny images. But what if it had been part of a misinformation campaign? A lot of AI-generated content is already so clean that it’s hard for human eyes and ears to detect its origins.
Viewers would probably never have known that Anthony Bourdain’s voice was faked in the documentary Roadrunner if director Morgan Neville hadn’t told The New Yorker. Deepfakes are already being used as political tools. As it is, skeptics can consult trusted news sources if they suspect an image is fake, but trust in the news media is already hovering near record lows. If anyone can now generate an image of anything, and faith in the sources that might debunk that image is that diminished, who is going to believe their lying eyes?
A few days after the AI-generated images of Pope Francis went viral, the pontiff was taken to a hospital in Rome for a respiratory infection. He’s been improving ever since, but as that (real) news spread, it got a bit lost amidst stories of the fake image. The pope was trending for two very different reasons, and at first glance it was hard to determine which was the one of substance.
The era of social media has transformed the Very Online into pretty good sleuths. Skepticism reigns. But so do conspiracy theories. Beyond the post-truth era lies a time when convincing images, text, and even video can be generated from thin air. One of the internet’s great promises was that anyone could broadcast information to a much larger audience than before. For years, the liars have been easier to spot: bad URLs, shitty Photoshop jobs, typos. All these things gave the scoundrels away. AI can smooth their mistakes. I’m not Chicken Little, but maybe I just haven’t yet been fooled by an image of the falling sky.