We are entering a time when AI is taking over everything. Unfortunately, that can also mean someone cloning your voice, face, and identity with AI deepfakes.
Technology is only as beneficial or damaging as the people who use it. Hackers and online attacks are nothing new, but now some people are using AI-generated media to hijack reality itself. If this continues, it will be nearly impossible to know what’s real and what isn’t, pushing aside real, human-based talent in favor of AI deepfakes and causing trouble for unsuspecting people.
Why should you be concerned about AI deepfakes?
It can be fun to put people in situations they’ve never been in, just for a laugh or a little entertainment. But things have gotten out of hand, and some people have used AI to harm others. Deepfakes have gone from experimental curiosities to weapons of deception, making it difficult to know what’s real and what isn’t.
Some of the characteristics that make deepfakes so alarming:
- They look and sound real
- They target real-world institutions and events
- They are easy to make and hard to detect
- They can ruin lives and reputations
- They undermine democratic institutions
Examples of AI Deepfakes
Explosion near the Pentagon in 2023
Panic swept social media in May 2023 when an image appeared to show an explosion near the Pentagon. The image looked real and was shared by many respected accounts on social media. It showed a large plume of smoke rising beside a government building, suggesting America was once again under attack. The image was not real; it was AI-generated. Even so, its brief circulation caused a short-lived dip in the stock market, which the creator could have exploited.
President Zelenskyy asking Ukrainian troops to surrender
It would have been a shocking turn of events for Ukraine’s president to surrender, but a video circulated in March 2022 showing him calling on his troops to lay down their weapons and give up to Russian forces. The footage was entirely fake: AI-generated video built from Zelenskyy’s face and voice. It was a weaponized deepfake that could have caused serious problems on a real battlefield.
Joe Biden robocall to voters
Before the January 2024 New Hampshire presidential primary, many voters received robocalls that sounded just like President Joe Biden, telling them to “stay home and save your vote for the November election.” The voice was convincing, but Biden never made the call; it was an AI-cloned imitation. This was a form of election sabotage, and voters who believed the call was real might have missed their chance to vote in the primary.
A fabricated elevator video of Rashmika Mandanna
Another widely shared deepfake was a video of the popular Indian actress Rashmika Mandanna entering an elevator in revealing clothing. Mandanna was never there: her face had been convincingly superimposed onto another woman’s body, and the fabrication was good enough to cause real, if temporary, problems for the actress. The case shows how a manipulated video can be weaponized to damage someone’s reputation.
A deepfake warning featuring President Barack Obama
A 2018 video of President Obama circulated online in which he appeared to call the then-current president “a total and complete dipshi*t” while warning viewers not to believe everything they see online. Everything looked real, but the video was a public service campaign, produced by BuzzFeed with director Jordan Peele voicing Obama, to help the public understand how easily AI can manipulate reality. That was seven years ago. Can you imagine how much more advanced AI has become since then? It was a controlled demonstration, yet it still went viral and sparked commentary, which is exactly what makes it terrifying.
A deepfake heist
In 2023, scammers targeted a multinational firm in Hong Kong and stole $35 million by impersonating the company’s CEO with an AI-generated voice recording. It worked: in only a few minutes, millions of dollars were siphoned away. The money was transferred out and promptly vanished, effectively untraceable. It is proof of what hackers and scammers can do with AI deepfakes, striking before anyone can detect that something is wrong.
AI deepfakes are dangerous, and they mean we should question everything we see online. Before believing something that seems strange or unusual, check it against a few independent, reputable sources to verify the information.
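For readers comfortable with a little code, here is one small, decidedly non-definitive check you can run on a suspicious image: inspecting its EXIF metadata. This is a minimal sketch using the Pillow library; the filename suspicious.jpg is a placeholder, and a missing EXIF block proves nothing on its own, since social platforms routinely strip metadata from legitimate photos too.

```python
# A rough heuristic, not a deepfake detector: many AI-generated images
# ship with no camera EXIF data, while photos taken straight from a phone
# or camera usually record the device make, model, and capture time.
# Caveat: social platforms often strip EXIF, so absence proves nothing.
from PIL import Image               # pip install Pillow
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print(f"{path}: no EXIF metadata found (stripped, or possibly synthetic)")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
        print(f"{path}: {name} = {value}")

summarize_exif("suspicious.jpg")  # placeholder filename
```

Treat the output as one weak signal among many; cross-checking with reputable news sources remains the best defense.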