A $25 Million Digital Pandora's Box: Deepfakes

We’re Not Ready for What’s Coming


During the Christmas season, I was around family, and one of the topics we discussed was deepfakes and the AI-powered threats and scams on the horizon. This has been a long time coming, and it is a threat we have mostly been unwilling to recognize, because we have willingly handed over most of our personal information to companies that promised they could protect it better than we could. That promise is at best a half-truth, and relying on it has made us less tech-savvy over the decades. Now we are about to reap the fruits of that digital illiteracy.

I have personal experience with training an AI model on a specific voice: a vendor I use bundled the model into my subscription. All the hours of my client's recordings are now part of the data used to train that model. In reality, regardless of whether a company asks for permission, the moment that audio goes public, ANY company can access and use it. This is how AI has been trained: on the immeasurable amount of publicly available data we have been giving away.

By no means am I opposed to technology; on the contrary, I have been very much in favor of changes and still am. But now, I am more conscious that we have opened the digital Pandora's box, and most of the civilian population has been digitally lazy for the last few decades.

When will deepfakes (that is, digital media manipulated to impersonate someone or fabricate a situation) be so advanced that they fool you? Are we close to receiving phone or video calls from people who will exploit our vulnerabilities to scam us?

Worse still, could we be falsely accused over an ideological position that goes against current trends and have our lives ruined?

The answer is: we are already there. And it will only get worse unless we bring our digital privacy and footprint under control.

Less than a month ago, at the time of this writing, Apple settled a lawsuit over Siri eavesdropping for $95 million. The company did not admit wrongdoing, but in such a murky digital landscape, this is the smoke of a real fire. We know of enough cyberattacks, and of good and bad actors eavesdropping and dropping in on average citizens, not even high-profile targets, to know that the noise is there for a reason.

Last year, a Hong Kong-based employee of the UK engineering group Arup was fooled by a deepfaked video conference, featuring what appeared to be the company's CFO and several coworkers, into wiring $25 million to various bank accounts. Only after speaking with the real higher-ups did he realize that nothing he had seen on his screen was real and that he had been scammed in real time.

British engineering giant Arup revealed as $25 million deepfake scam victim | CNN Business
A British multinational design and engineering company behind world-famous buildings such as the Sydney Opera House has confirmed that it was the target of a deepfake scam that led to one of its Hong Kong employees paying out $25 million to fraudsters.

The technologies that allow deepfakes are quickly reaching a point where ill-intentioned users can access them with very little technical knowledge.

We are already in the water; we need to swim. Plenty of good will come out of this new GenAI era, but we are very much not ready to properly protect our digital identities. We need to start taking this more seriously, and not just favor technologies that protect our privacy but actively help others do the same.

There is one group at the greatest disadvantage: our parents and grandparents. Unless they were part of the generation of technologists who built the internet era, they likely never became tech-savvy enough to face these challenges when they come their way. Educate yourself so you can help educate others, and take the small steps you can to start protecting your digital footprint and your permanent record.

Until the next one,
J

Some reads and a video:

Apple to Pay $95 Million to Settle Siri Lawsuit
The allegations about Siri contradict Apple’s long-running commitment to protecting the privacy of its customers.
Lawmakers target AI-generated “deepfake pornography”
A 15-year-old girl who fell victim to artificial intelligence-created deepfake pornography is pushing Congress to pass a bill that would require social media companies and websites to remove non-consensual, pornographic images created with AI. Jim Axelrod reports.
Deepfake news - Today’s latest updates
New Hampshire investigating fake Biden robocall meant to discourage voters ahead of primary
New Hampshire officials are investigating reports of an apparent robocall that used AI to mimic President Biden’s voice before the primary election.
GitHub’s Deepfake Porn Crackdown Still Isn’t Working
Over a dozen programs used by creators of nonconsensual explicit images have evaded detection on the developer platform, WIRED has found.