A $25 Million Digital Pandora's Box: Deepfakes
We’re Not Ready for What’s Coming
During the Christmas season, I was around family, and one of the topics we discussed was deepfakes and the AI threats and scams on the horizon. This has been a long time coming, and we have mostly been unwilling to recognize it as a threat because we willingly gave away most of our personal information to companies that promised they could protect that data better than we could. That promise was only partially true, and relying on it has made us less tech-savvy over the decades. Now we are about to reap what this digital illiteracy has sown.
I have personal experience training an AI model on a specific voice, because a vendor I use bundled the model into my subscription. All the hours of my client's recordings are now part of the data they use to train their model. In reality, regardless of whether a company asks for permission, the moment audio goes public, ANY company can access and use it. This is how AI has been trained: on the immeasurable amount of publicly available data we have been giving away.
By no means am I opposed to technology; on the contrary, I have been very much in favor of these changes and still am. But I am now more conscious that we have opened the digital Pandora's box while most of the civilian population has been digitally lazy for the last few decades.
When will deepfakes (that is, digital media manipulated to impersonate someone or fabricate a situation) be so advanced that they fool you? Are we close to getting phone calls or video calls from scammers who exploit our vulnerabilities?
Worse still, could we be falsely accused over an ideological position that goes against current trends and have our lives ruined?
The answer is: we are already there. And it will only get worse if we do not start bringing our digital privacy and footprint under control.
Less than a month before this writing, Apple settled a claim that Siri eavesdropped on users for $95 million. The company did not admit any wrongdoing, but in such a murky digital landscape, this is the smoke of a real fire. We know of enough cyberattacks, and of agents both good and bad listening in on the audio and video of average citizens, not just high-profile targets, to know that the noise is there for a reason.
Last year, an employee in the Hong Kong office of the UK engineering group Arup was fooled into wiring $25 million to several bank accounts after a video conference with what appeared to be his CFO and some coworkers. Only after talking with the rest of the higher-ups did he realize that none of what he had seen on his screen was real and that he had been scammed in real time.
The technologies that enable deepfakes are quickly reaching the point where ill-intentioned users need very little technical knowledge to wield them.
We are already in the water; we need to swim. Plenty of good will come out of this new GenAI era, but we are very much not ready to properly protect our digital identities. We need to start taking this more seriously, and not just favor technologies that protect our privacy but actively help others protect theirs.
One group is at the greatest disadvantage: our parents and grandparents. Unless they were part of the generation of technologists who ushered in the internet era, they likely never became tech-savvy enough to face these challenges when they come their way. Educate yourself so you can help educate others, and take the small steps you can to start protecting your digital footprint and your permanent record. One such step is sketched below.
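If you want a concrete starting point, one small step is simply finding out where your data has already leaked. Here is a minimal sketch in Python that queries the public Have I Been Pwned breach API (https://haveibeenpwned.com/API/v3) to list the known breaches an email address appears in. The API key and the email address are placeholders you would supply yourself; treat this as an illustration, not a polished tool.

```python
# Minimal sketch: check whether an email address appears in known data
# breaches via the Have I Been Pwned v3 API. Requires an API key from
# haveibeenpwned.com; HIBP_API_KEY below is a placeholder.
import json
import urllib.error
import urllib.parse
import urllib.request

HIBP_API_KEY = "your-api-key-here"  # placeholder: get your own key

def breaches_for(email: str) -> list[str]:
    """Return the names of known breaches that include this email address."""
    url = ("https://haveibeenpwned.com/api/v3/breachedaccount/"
           + urllib.parse.quote(email))
    req = urllib.request.Request(url, headers={
        "hibp-api-key": HIBP_API_KEY,
        "user-agent": "digital-footprint-check",  # HIBP requires a user agent
    })
    try:
        with urllib.request.urlopen(req) as resp:
            # Default (truncated) response is a JSON list of {"Name": ...}
            return [breach["Name"] for breach in json.load(resp)]
    except urllib.error.HTTPError as err:
        if err.code == 404:  # 404 means the address is in no known breach
            return []
        raise

if __name__ == "__main__":
    found = breaches_for("you@example.com")  # placeholder address
    if found:
        print(f"Found in {len(found)} known breach(es): {', '.join(found)}")
    else:
        print("No known breaches for this address.")
```

Knowing which breaches already include your accounts tells you which passwords to rotate first, and it is a gentle, concrete way to walk a parent or grandparent through the idea that their data is already out there.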
Until the next one,
J