Dizzying Deepfakes and Personalized Propaganda: Welcome to the AI Election


I’ve been spending way too much time staring at AI-generated content lately. I’ve watched countless AI-generated videos of things like Will Smith and Donald Trump eating spaghetti together. I’ve seen AI-generated photos of people giving TED Talks who aren’t actually real. I’ve read and listened to the AI-generated stories going viral on TikTok. All of this content is produced by increasingly sophisticated algorithms, sometimes by the hour. And for me, consuming it is part professional curiosity, part morbid fascination, and part a routine I’ve adopted so I can make sense of the future and how we’ll live in it.

I got a terrifying glimpse of that future recently when news broke that Trump had accused Kamala Harris of using AI to fake the crowd at her Michigan rally at Detroit Metro Airport. My first instinct when I looked at the photo of the massive crowd on my computer wasn’t disbelief but genuine uncertainty. Leaning forward to get closer to the screen, I genuinely wondered whether the image had been generated by AI.

It didn’t take me long to realize it wasn’t fake. Fact-checkers and news outlets confirmed the 15,000-strong crowd at Detroit Metro Airport. But my initial skepticism revealed a troubling side effect of our new, AI-saturated world: When you start living in AI land, your brain starts to question everything—whether you like it or not. You start to have a creeping suspicion that even the simplest images are actually fake.

The 2024 election cycle is a turning point in the use of AI for political manipulation. We’ve seen a troubling parade of digital deception not unlike the post-truth era we experienced in 2016, only this time more confusing and often more terrifying. In January, for example, AI-generated robocalls used a deepfake of President Joe Biden’s voice to target New Hampshire voters, falsely urging them not to participate in the Democratic primary: a chilling demonstration of AI’s potential to create convincing but completely fabricated audio. Then there’s the steady stream of AI content across social media platforms, including images of Taylor Swift appearing to endorse Trump (she doesn’t support him), manipulated videos of political rallies showing inflated crowds, and memes depicting political figures in fictional scenarios (like Trump holding a gun while wearing an orange sweater, or Biden doing the same in a wheelchair).

Digitally generated content is popping up everywhere, with AI-generated audio clips going viral on TikTok claiming that Biden is threatening to attack Texas, images of Harris at a Communist-style rally, and posts showing celebrities endorsing candidates they did not support. At one point, the White House intervened, confirming that a recording of Biden threatening to attack Texas was fake. Meanwhile, the New York Times was forced to issue a statement saying that it had not “posted an article legitimizing the false claim that Vice President Kamala Harris is a member of the Communist Party.”

Even efforts to harness AI for ostensibly legitimate campaign purposes have raised ethical concerns, as evidenced by Dean Phillips’s failed presidential campaign, for which a supporting super PAC created an interactive AI-powered bot using OpenAI technology designed to engage voters. The bot’s creation eventually led OpenAI to suspend the creator’s account, citing its policy against using its tools for political campaigns and highlighting the complex ethical landscape surrounding AI’s role in political discourse.

As we process the implications of the first real “AI election,” it’s clear that we’re entering uncharted territory. The line between fact and fiction is blurring at an alarming rate. But this election cycle isn’t a worst-case scenario; it’s a harbinger of things to come. By the time we get to the 2026 midterms, AI will be so advanced that in the right (or wrong) hands, it will be capable of generating hyper-realistic video content that can be used to craft personalized political narratives tailored to each voter’s psychological profile—tapping into both your greatest fears and your deepest desires.

Indeed, the next wave of AI advances is poised to reshape future elections in ways that seem as bizarre today as AI-generated video did a decade ago. AI agents, autonomous programs capable of making decisions and interacting with humans in increasingly sophisticated ways, are expected to be the next iteration of this technology. And while they will start out fairly innocuous (think: an AI assistant that manages your calendar and email, an AI travel agent that books trips for you and your family, or an AI therapist available 24 hours a day to help with mental health issues), these agents will clearly, and fairly quickly, be used in negative ways, especially during election cycles.

For example, they could be used to target us based on our biological markers. Sorry, I forgot to mention that AI will soon have a lot more information about us at the biological level, including our health and behavior. Why, you may ask? Because you’ll give it to them through the apps and programs you engage with, or already are engaging with. When you ask an AI about the medications you’re taking, or to suggest a recipe for dinner, or about a medical condition, the AI now knows all of that information about you. The more information these AIs have, the more precisely they will understand voter preferences. This means that political campaigns can tailor their messaging not only to your voting history but also to your physical responses, measured by changes in your heart rate or skin conductance: through a camera, as MIT does in its research labs, through your phone or TV when you consume media, or just through what you type into your computer. (Don’t forget: every time you type a prompt to an AI, it learns something about you.)

Not scared yet? Wait until you see how deepfake technology gets even more sophisticated. We could soon face the reality of AI-generated videos indistinguishable from real footage, allowing the creation of synthetic political content that can sway even the most jaded voters. And if you think AI will be able to detect other AI, just look at what happened with text over the last year: when ChatGPT first launched in November 2022, AI detection technology could distinguish between AI-generated and human-written material with 95% accuracy. As AI models have become more sophisticated, the accuracy of these detection tools has dropped to 39.5%. That number could soon approach zero.

The nightmare scenario for the next election is one where all of these technologies blend together like the T-1000 in Terminator 2 after it turns to liquid metal. We’re facing an election landscape where AI agents, armed with our biodata and psychological profiles, create hyper-personalized deepfake content in real time that’s specifically targeted at you. These shape-shifting digital Dementors could adapt their messages on the fly, morphing from a trusted news anchor into your favorite celebrity, and tailoring their speech to your underlying desires and fears. They’ll know when you’re most persuadable based on the queries you type into your favorite AI (or, later, speak to it), based on your sleep patterns and, of course, your browsing history. They could even predict your voting behavior before you’ve made a decision yourself.

We, average human beings, would have no chance of distinguishing fact from fiction. We would live in a state of constant uncertainty, where every piece of political information we encounter could be a carefully crafted illusion designed to manipulate our beliefs and behavior. And this is not some distant dystopian future; this is the world we’re hurtling toward, and tens of millions of us are stepping on the gas. If you think you can outsmart AI’s clever ways, hear me out, because that’s how I felt until I found myself questioning Kamala Harris’s rally photo: the fleeting inability to trust your own eyes is terrifying. Soon, it won’t be a momentary aberration; it will be our permanent reality. And I can assure you, from personal experience, that living in a world where you can’t trust your own perceptions is just as unsettling as it sounds.

News7f
