Once, call center agents could trust they knew who was on the other end of the line. Multi-factor authentication (MFA), security questions, and verbal passwords served as reliable safeguards against fraud. Unfortunately, those days are behind us. Deepfake technology—once reserved for mimicking celebrities—has now become a potent weapon against businesses, with call centers among the most vulnerable targets.
Deepfake fraud attacks reportedly skyrocketed by 3,000% in a single year. Unlike phishing emails, which often reveal themselves through spelling mistakes or suspicious links, audio and video deepfakes are disturbingly convincing. A recent survey found that 86% of call centers are concerned about the risk of deepfakes, yet 66% doubt their ability to detect such threats effectively.
Fraudsters use deepfake technology in multiple ways to exploit call centers:
Navigating IVR Systems
Attackers employ voice deepfakes to bypass IVR authentication systems. Armed with answers to security questions and even one-time passwords, bots can gain access to sensitive account details, such as bank balances, and identify lucrative targets.
Account Takeover (ATO)
By cloning customers’ voices, scammers trick agents into updating account details like email addresses or phone numbers. This allows them to intercept OTPs, order new checks, or request replacement cards. According to a TransUnion report, nearly two-thirds of financial industry respondents attribute most ATOs to call center breaches.
Sophisticated Social Engineering
Deepfakes amplify the effectiveness of social engineering. For instance, fraudsters may use synthetic voices to navigate IVR systems and gather basic account information. They then call agents directly, using this information to manipulate them into granting unauthorized access. Even call centers with video verification aren’t safe, as live-streamed deepfake videos can deceive agents into believing they are interacting with legitimate customers.
Advancements in generative AI have made it alarmingly easy to clone someone’s voice or likeness. With just 15 seconds of recorded audio—readily available on social media—fraudsters can create realistic voice clones in seconds using free online tools. Yet, many call centers lack tools to detect these forgeries and often underestimate the threat.
Fraudsters don’t just impersonate customers; they also pose as agents. For example, after a cyberattack on CDK Global, scammers called customers pretending to be support agents to gain system access. This evolution of traditional vishing scams highlights the growing sophistication of deepfake-enabled fraud.
To combat this escalating threat, businesses must adopt a multi-layered approach:
Education and Awareness
Agents need training to recognize synthetic voices and social engineering tactics, such as creating false urgency. However, education alone isn’t enough.
Robust Processes
Stronger caller verification protocols are essential. Processes must integrate advanced tools to detect deepfakes and prevent fraudsters from reaching agents in the first place.
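As a rough illustration of what "detecting deepfakes before fraudsters reach agents" could look like in practice, here is a minimal sketch of a pre-agent screening step that scores incoming audio for synthetic-voice likelihood and diverts risky calls to step-up verification. The detect_synthetic function is a hypothetical placeholder for a vendor or in-house anti-spoofing model, and the feature choice and threshold are illustrative assumptions, not a specific product's API.

```python
# Minimal sketch of pre-agent call screening (not a production detector).
# detect_synthetic() is a hypothetical placeholder for a real
# anti-spoofing model; the MFCC features and threshold are assumptions.
import numpy as np
import librosa

RISK_THRESHOLD = 0.7  # tune against your own false-positive budget


def detect_synthetic(features: np.ndarray) -> float:
    """Return the probability that the audio is machine-generated.

    Placeholder: plug in a vendor API or an in-house classifier here.
    """
    raise NotImplementedError("wire up a real synthetic-speech model")


def screen_call(audio_path: str) -> str:
    # Load the caller's audio and extract MFCCs, a common input
    # representation for voice anti-spoofing classifiers.
    signal, sr = librosa.load(audio_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)

    score = detect_synthetic(mfcc)
    if score >= RISK_THRESHOLD:
        # Suspected deepfake: divert to step-up verification instead
        # of letting the call reach a live agent.
        return "step_up_verification"
    return "route_to_agent"
```

The key design point is placement: the check runs before routing, so a suspected clone never gets the chance to social-engineer a live agent.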
Advanced Technology
Contact centers must go beyond traditional MFA and voice biometrics. A comprehensive solution should incorporate:
- Real-time detection of synthetic audio and video, so cloned voices are flagged before an agent acts on them
- Caller verification that screens and risk-scores calls before they ever reach an agent
- Continuous monitoring for social engineering cues across IVR and agent interactions
This “surround sound” approach stops fraudsters before they can exploit weaknesses in the system.
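To make the layered idea concrete, the sketch below fuses several independent signals into a single routing decision, so a fraudster must defeat every layer at once rather than any one check. All field names, weights, and thresholds are illustrative assumptions, not any specific vendor's API.

```python
# Illustrative "surround sound" risk fusion: no single check decides
# alone, so a fraudster must defeat several layers at once.
from dataclasses import dataclass


@dataclass
class CallSignals:
    synthetic_voice_score: float  # 0-1 from a deepfake detector
    ani_reputation_score: float   # 0-1 risk of the calling number
    behavior_score: float         # 0-1, e.g. false-urgency cues

# Assumed weights; in practice these come from tuning on labeled calls.
WEIGHTS = {"voice": 0.5, "ani": 0.3, "behavior": 0.2}


def assess(call: CallSignals) -> str:
    risk = (WEIGHTS["voice"] * call.synthetic_voice_score
            + WEIGHTS["ani"] * call.ani_reputation_score
            + WEIGHTS["behavior"] * call.behavior_score)
    if risk >= 0.8:
        return "block_and_review"
    if risk >= 0.5:
        return "step_up_verification"  # e.g. out-of-band callback
    return "proceed"


# A convincing voice clone from a bad-reputation number is blocked,
# even though its behavior alone looks only mildly suspicious.
print(assess(CallSignals(0.9, 0.9, 0.5)))  # -> block_and_review
```

The point is architectural: even a near-perfect voice clone still has to beat the number-reputation and behavioral layers before it reaches an agent.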
As call centers remain critical to customer service, the need for robust cybersecurity has never been greater. Companies that fail to act risk exposing their customers—and their reputations—to catastrophic losses. By prioritizing the adoption of advanced tools and technologies, businesses can stay one step ahead in the battle against deepfake-enabled fraud.