Blog

Why Businesses are Unprepared for the Next Wave of AI Scams

Date published:

Jan 6, 2025

Jon Marler

Manager, Cybersecurity Evangelist

  • In December 2023, President Joe Biden apparently told millions of Americans via MSNBC the story of getting lost in a grocery store and following a glowing magical pistachio to the exit.
  • A few months into the 2024 election campaign, American voters were left confused and uneasy by an ad in which Kamala Harris discussed the "deep-state" threat posed by President Biden and declared that she had been selected because she is a woman of color.

Huh? The big issue with both videos: they're fake. Yet they were realistic enough to be shared, reposted, liked, commented on, and believed by millions of viewers, including influential people with large communication platforms like Elon Musk.

And they're glaring examples of a threat that has recently emerged against businesses and consumers worldwide: deepfake audio fraud. Fueled by advancements in artificial intelligence (AI), this new wave of fraud can mimic voices with such accuracy that even close associates of the victims struggle to distinguish real from fake.

Despite the growing threat, many organizations remain ill-equipped to combat this sophisticated cybercrime. This blog explores why businesses are unprepared, what makes deepfake audio so dangerous, and how companies can begin to protect themselves.

The Rise of Deepfake Audio Fraud

Deepfake technology, once limited to creating videos of celebrities or political figures, has entered the audio world. Using just a few seconds of a person's voice, bad actors can clone it with incredible precision and use the fake to manipulate people or systems. For instance, fraudsters have successfully impersonated CEOs and senior executives, instructing employees to transfer large sums of money to fraudulent accounts, as was the case with a Hong Kong-based firm in January 2024.

Between 2022 and 2024, incidents of deepfake fraud skyrocketed, with some regions reporting increases as high as 1,740%. Major sectors like finance, crypto, and fintech have seen particularly large impacts, with deepfake fraud responsible for significant financial losses. This growing threat is largely attributed to the democratization of AI, with tools to create convincing deepfake audio becoming more accessible, cheaper, and easier to use.

Why Are Businesses Unprepared?

Despite the surge in cases, many businesses underestimate the danger of deepfake audio fraud. According to a study by Business.com, over 80% of companies still lack protocols for handling deepfake attacks. Worse yet, more than 50% of business leaders admit that their employees have not received training on recognizing or responding to these attacks.

Several factors contribute to this unpreparedness:

  1. Lack of Awareness: A significant portion of executives are not familiar with the capabilities of deepfake technology. Many still see deepfakes as a novelty rather than a serious fraud risk.
  2. Rapid Technological Advancement: Deepfake audio fraud is evolving faster than many businesses can respond. Traditional cybersecurity measures designed to combat phishing emails or malware are often ill-equipped to detect synthetic voices or manipulated audio.
  3. Trust in Existing Systems: Many companies still rely on voice verification systems that are now vulnerable to deepfake technology. Historically, voice was considered a secure authentication method. But with today’s AI, even trusted systems are being exploited.

The Real-World Impact of Deepfake Fraud

The impact of deepfake audio fraud extends far beyond hypothetical scenarios. High-profile examples include cases where deepfake voices were used to impersonate executives, resulting in millions of dollars in financial losses. In 2019, a British energy company lost $243,000 after a senior executive was tricked into transferring funds based on a fraudulent voice message that mimicked his boss.

And as you saw at the beginning of this blog, deepfake fraud has made its way into elections and political disinformation. Pistachio stories aside: in 2024, voters in New Hampshire received robocalls using a cloned voice of President Biden, discouraging them from voting in the primary election. Such incidents highlight the broad spectrum of risks this technology poses, affecting both financial security and democratic processes.

What Businesses Can Do to Protect Themselves

With the threat of deepfake audio fraud growing, businesses must take proactive steps to safeguard themselves:

  1. Invest in Deepfake Detection Technology: Companies are starting to explore tools that can detect synthetic voices in real time. Startups such as Pindrop and DeepMedia are developing innovative solutions designed to identify audio deepfakes before any damage is done.
  2. Employee Training and Awareness: Training staff to recognize deepfake audio scams is essential. Since these attacks often prey on human emotions—such as urgency or fear—teaching employees to verify unusual requests, especially financial ones, can prevent costly mistakes.
  3. Multi-Factor Authentication (MFA): Relying solely on voice verification is no longer safe. Businesses should incorporate multi-factor authentication methods, such as SMS codes or biometric verification, to ensure higher levels of security.
  4. Strengthen Internal Procedures: Companies must establish strict protocols for verifying financial transactions. Encouraging a “trust but verify” mindset—wherein employees double-check urgent requests, particularly those involving money transfers—can mitigate the risk of falling victim to these scams.

Conclusion: Strengthening Defenses Against the Rising Threat of AI Deepfakes

As deepfake audio technology becomes more advanced and accessible, the risk to businesses will only increase. The ability of AI to generate convincingly realistic voice clones poses a serious threat to financial security and corporate trust. While many organizations are still catching up to the scale of this threat, proactive steps like investing in detection technologies and enhancing employee training can offer vital protection.

If you’re interested in learning more about AI and fraud prevention, our recent webinar, “Good vs. Bad Actors in AI-Deepfake Voice Technology for Fun and Fraud,” provides insights into how you and your business can defend against deepfake threats.
