Published Sep 10, 2023
Revolutionizing Multi-Factor Authentication for an AI-Powered World
In the tech world, there’s an age-old axiom we’ve come to embrace: change is the only constant.
In the realm of online security, this change is coming at us faster and harder than ever before. The very tools we’ve relied upon to ensure our privacy and security — think multi-factor authentication, voice and facial recognition — are becoming the tools that sophisticated attackers can exploit.
One such attack on Retool, as recounted by Snir Kodesh, the company's Head of Engineering, highlights that MFA isn't the silver bullet we once thought it was.
A hacker contacted Retool employees and urged them to visit a fake internal identity portal. Most employees were cautious, but one fell for the scam. The attacker then called that employee, using deepfake technology to impersonate a member of the IT team and mimic their voice, obtained the MFA code, and went on to infiltrate Retool's VPN and internal admin systems.
Snir’s account serves as a stark reminder that social engineering doesn’t discriminate; anyone can become a target.
Traditional authentication in an untraditional world
Not so long ago, MFA was hailed as the hero of online security. By requiring multiple verification methods, we believed we had found a way to thwart even the most determined of cyber attackers. Your password gets compromised? Not to worry; they won’t have your fingerprint. Lose your phone? They still won’t crack your facial ID.
But today’s digital landscape is different. The tech that promised us safety is itself under attack.
New data on cyberattack trends from Check Point Research shows a 38% increase in global attacks in 2022.
The education and research sector was the most attacked industry, averaging 2,314 attacks per organization per week, followed by the government/military sector, which averaged 1,661 attacks per organization per week.
The issue is that existing security measures were never designed to account for deepfake technology's ability to convincingly manipulate faces and voices. As that technology advances and becomes more accessible, the problem is poised to worsen, not improve.
Evolving MFA for tomorrow’s threats
Traditional MFA is no longer enough. We must conceive of MFA as a dynamic entity, one that evolves in tandem with the threats it seeks to counter. It’s not just about multiple factors anymore, but also about the quality and sophistication of those factors.
How to address the current threats (a rough sketch of how these pieces might fit together follows the list):
Layered biometrics: Combine multiple biometrics, such as face, voice, and fingerprint. Ensure they work in tandem, cross-checking one another.
Behavioral analysis: Introduce behavioral biometrics. How a person types, swipes, or even holds a device can be unique to them.
Continuous authentication: Gone are the days of one-time logins. Continuously monitor and authenticate users throughout their session.
Deepfake detection: Implement cutting-edge deepfake detection tools as an inherent part of the authentication process.
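To make the ideas above concrete, here is a minimal, illustrative sketch of how layered biometrics, a behavioral signal, a deepfake-detection gate, and continuous re-checks could be combined into a single session decision. This is not DeepMedia's or any vendor's implementation; every name, weight, and threshold below is a hypothetical placeholder.

```python
# Illustrative sketch only: hypothetical signals, weights, and thresholds.
from dataclasses import dataclass


@dataclass
class AuthSignals:
    face_match: float         # 0.0-1.0 similarity from a face-recognition model
    voice_match: float        # 0.0-1.0 similarity from a speaker-verification model
    fingerprint_match: float  # 0.0-1.0 from the device's fingerprint sensor
    behavior_score: float     # 0.0-1.0 typing/swipe pattern consistency
    deepfake_score: float     # 0.0-1.0 likelihood the face/voice sample is synthetic


def authenticate(signals: AuthSignals,
                 deepfake_threshold: float = 0.5,
                 risk_threshold: float = 0.75) -> bool:
    """Return True if the session should be allowed to continue."""
    # Hard gate: if the sample looks synthetic, fail immediately,
    # no matter how well the biometrics "match".
    if signals.deepfake_score >= deepfake_threshold:
        return False

    # Layered biometrics plus behavior, blended so no single factor
    # is trusted on its own.
    weights = {"face": 0.3, "voice": 0.3, "fingerprint": 0.2, "behavior": 0.2}
    combined = (
        weights["face"] * signals.face_match
        + weights["voice"] * signals.voice_match
        + weights["fingerprint"] * signals.fingerprint_match
        + weights["behavior"] * signals.behavior_score
    )
    return combined >= risk_threshold


# Continuous authentication: re-run the check on fresh signals throughout
# the session instead of only once at login.
if __name__ == "__main__":
    login_signals = AuthSignals(0.92, 0.88, 0.95, 0.80, 0.10)
    mid_session_signals = AuthSignals(0.91, 0.35, 0.95, 0.20, 0.65)  # voice now looks synthetic

    print("login allowed:", authenticate(login_signals))                 # True
    print("session still trusted:", authenticate(mid_session_signals))   # False
```

The key design point the sketch tries to capture is that deepfake detection acts as a veto rather than just another weighted factor: a convincing synthetic voice can score highly on speaker verification, so it must be able to fail the check outright.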
Putting in the work
The challenge before us is monumental, but it’s also an exciting juncture in the ever-entwining dance between technology and security.
For the last six years, our team at DeepMedia has been at the forefront of both AI generation and detection, culminating in a groundbreaking solution to combat cyber threats. It’s our belief that you can’t be good at detection unless you’re first great at generation.
Every threat paves the way for innovation. We’re partnered with the Pentagon, large tech companies, and major security providers to ensure people’s safety.
Our DeepID is the only platform on the market that can detect manipulated voices and faces with 99% accuracy, outperforming competitors thanks to a robust dataset spanning more than 20 countries and 15 languages.
As we navigate this evolving landscape, we must be agile, informed, and ever-vigilant. The tech community has risen to challenges before, and I have no doubt we’ll rise to this one too.