Deepfakes, phoneys and fraud: Why GenAI is changing the game for security teams

By Ben Thompson|15th January 2024

As a journalist and presenter, I talk about the intersection of people and technology all the time.

The impact that emerging tech has on the way we work – on our teams, on our processes, on the way we structure our organisations and business models – is an endlessly fascinating topic. Given the rapid rate of technology change, it’s an environment that shifts and changes constantly. As the old saying goes: things have never moved this fast, and they will never be this slow again.

But unbridled adoption of new tools and ways of working also brings fresh risks and new challenges. And nowhere is this more apparent than in the fast-moving area of generative AI (GenAI).

The impact of GenAI on cybersecurity

A couple of weeks ago I had my mind officially blown.

While hosting GDS Group’s Security Summit, I was moderating a panel discussion on emerging digital technologies in the cybersecurity space. We were looking at how developments in generative AI are lowering barriers to entry for cybercriminals and making it easier to create effective social engineering scams and phishing campaigns.

It was a lively conversation. We had representatives from the Royal Bank of Canada, the United States Space Force, and security firm ExtraHop. Or so we thought, until it was revealed that one of our guests was a fake.

And not just any fake. A deepfake.

Sure, he looked like our expected guest. He moved and sounded like him. But he was an AI-generated avatar, placed there by our speaker to highlight the increasing difficulty organizations and individuals alike face in distinguishing between fact and fiction.

Deepening the deepfake debate

The reveal was a genuine jaw-drop moment – for both myself and my fellow panellists – not least because it perfectly realized many of the concerns we’d been discussing at the summit. How attacks were becoming more sophisticated. How technology was making it easier for bad actors to penetrate our defences. And how the speed at which GenAI was being adopted posed huge challenges to our ability to respond as security professionals.

And as Denny Prvu, Global Director for Architecture, Innovation Labs, Immersive, Quantum and Generative AI over at Royal Bank of Canada – and the man behind the deepfake stunt – pointed out, understanding that this is our new reality (pun most definitely intended) is critical.

“I think it’s key that we don’t fear incoming advances in technology,” he told an audience made up of many of North America’s leading security execs. “It’s not so long ago that Wi-Fi was new to all of us, and now we’re going from mobile devices to VR and AR environments, to digital twins, to generative AI and more. Technological progress is natural and irreversible.”

Instead, he said, we need to better understand the human dimension when it comes to mitigating the impact – and then marry that insight with the application of emerging technologies. “We need to look at the ways people interact with systems. Use common sense. Look at patterns. There are so many great behavioural technologies out there, and if we do our due diligence, we’ll be able to chain them all together and get the best out of them.

“It’s exciting to explore these new fields. We just need to apply some common sense.”

A clear and present danger

Prvu’s fellow panellists agreed that this is something we need to focus on as a matter of urgency. “I think this is a real threat, and one that goes beyond just impersonation,” said Thomas Clavel, Senior Director of Product at cybersecurity firm ExtraHop. “Once you start using avatars and other AI technologies that are capable of imitation, you can penetrate networks much more easily – and from there you can do a lot of damage.”

Brian Hostetler, Director of Cyber Operations at the US Space Force, agreed and called for more collaboration around how the sector is evolving. “There are really no governmental or industry regulations around AI as yet,” he said. “We need to ensure the use of such technologies is both ethical and legal, and as we start maturing, we need to implement guardrails to govern what that looks like moving forwards.”

As ever, finding that balance between technology, governance and the humans in the loop will be key. “At the end of the day, you need to be able to identify behaviour, and find that edge between what’s human and what’s not human, between what’s human behaviour and what’s suspicious behaviour,” said Clavel. “It’s an arms race. You need to deploy AI technologies, apply intelligence, and only that way can you combat the threat.”

Spot the difference

And while there was genuine concern around the speed at which deepfake technology was evolving – and the proliferation of tools available on the dark web for criminals to access – there are some surprisingly analogue responses emerging to help combat the threat.

“Even with the best training and the best people, it’s very hard to spot these fakes,” said Clavel. “But while training is not going to make you foolproof, it is essential in reducing the margin of error. Beyond that, you also need to have guardrails in place around what is acceptable behaviour versus what is not acceptable, and what behaviours might require a higher level of security. Governance and education remain critical.”

And there were some great ideas shared amongst the wider audience, too. One came from Adam Powell, Executive Director of the Election Security Initiative at USC. He and his team are working on tightening cybersecurity ahead of the 2024 election, and he offered some practical advice gleaned from working with high-level politicians and their security teams.

“We advise that the first line of defense when receiving a voice call from someone you suspect could be a deepfake is to say, ‘Thank you very much, let me call you right back’. And for video calls, we suggest asking them to turn their heads, because the software currently isn’t very good at rendering ears! That will change in time, of course, so the key is to remain vigilant.”

Staying ahead of the game

From operations to customer and employee experience, from product innovation to complex supplier and partner ecosystems, how we interact with new and emerging technologies is shaping the way we do business.

And of course, the speed at which things are evolving unlocks huge opportunities: new products and services, faster time-to-market, more effective use of time and resources. But making the most of those opportunities means keeping security top of mind. And as the pace of change continues to increase, that gets harder to do.

According to the World Economic Forum, 66% of cybersecurity professionals experienced deepfake attacks within their respective organizations in 2022. And researchers predict that as much as 90% of online content may be synthetically generated by 2026.

Ensuring your business is ready to meet that threat is one of the greatest challenges we face in the next few years. Because if security professionals can’t stay ahead of the deepfakes, frauds and phoneys, what hope is there for the rest of us?

Join us at the next GDS Security Summit to collaborate with some of North America’s leading security specialists and find out what the future holds. We can’t wait to see you there!
