#Infosec2025: Combating Deepfake Threats in the Age of AI Agents

May 8, 2025

After years of generative AI adoption, the buzz has waned, and attackers and defenders alike are working hard to integrate AI-powered tools into real-world use cases.

Because it lowers the barrier to entry for script kiddies and enables new capabilities for highly skilled black hat hackers, AI is on every defender’s mind.

AI-generated cyber-attacks were cited as the primary threat to organizations in the Infosecurity Europe Cybersecurity Trends Report 2025. AI is also driving increased investment, with 71% of those who expect to raise their cybersecurity budgets citing AI as the leading reason, the report found.

Meanwhile, genAI companies are poised to launch the next generation of their AI-powered assistants: AI agents that will perform tasks on our behalf.

In light of these new AI advancements, it is crucial to discuss how organizations can mount a defense against AI attacks.

Against this backdrop, one of the first keynote sessions at the upcoming Infosecurity Europe 2025 conference will bring together AI experts to provide insights into how defenders can fight against AI threats.

The session, titled “Calling BS on AI - Strategies to defeat Deepfake and other AI attacks,” will focus on deepfakes and AI-powered social engineering campaigns, two of the most prominent AI threats today.

Andrea Isoni, Chief AI Officer at AI Technologies, will be accompanied by Heather Lowrie, Co-Founder of Resilionix; Zeki Turedi, Field CTO for Europe at CrowdStrike; and Graham Cluley, host of the 'Smashing Security' and 'The AI Fix’ podcasts.

Too Late for Text & Image Deepfake Detectors

Speaking to Infosecurity, Isoni said he believes that a distinction should be made between defending against AI-powered texts and images, on the one hand, and AI-generated video and audio on the other.

“Sadly, detecting fake content in images and texts is very hard and is going to fail often, primarily because AI text and image generation technologies are already too good and still improving,” Isoni said.

He believes that detection technologies for synthetic texts and images should be part of “baseline security,” alongside the use of passwords, encryption technologies, enabling multifactor authentication (MFA) and training the workforce.

He also argued that these technologies must embed AI.

“To fight AI at scale, we do need AI. For any situation where a massive volume of information is at play, AI agents or other AI-powered software solutions will be needed,” he said.

“Yes, there are some software solutions aimed at detecting synthetic content poisoning without necessarily using AI, like watermarking, but they have shown mixed results so far,” he added.

Isoni is more optimistic about the efficacy of deepfake detectors in combating AI-generated video and audio. This is for two main reasons:

  1. AI generation technologies for audio and video are still not good enough
  2. Data about a specific person is harder to obtain – unless you are famous, it is difficult to gather a long enough sample of someone talking or appearing on video

However, whatever these deepfake detectors ultimately become, Isoni believes they will not be sufficient against AI-generated content and deepfakes.

“Deepfake detectors will not put an end to synthetic content poisoning, just like antivirus software did not put an end to malware,” he said.

Using Standards and Regulations for AI Risk Assessment

Outside of basic security measures and detection tools, Isoni advocated for organizations to assess their worst-case threat scenarios and develop an incident response plan based on them, incorporating a risk management approach.

“Standards like ISO 42001 and regulations, like the EU AI Act, can help organizations develop a risk-based plan, as they clarify what the risks are and the fines involved,” the expert said.

Isoni also advised organizations interested in mitigating AI threats to explore the emerging industry of 'AI safety layer' products, designed to protect AI models from being hacked and to prevent them from harming end users with malicious output.

These solutions could prove especially useful as the adoption of AI agents grows, he concluded.

Learn More About AI Threats at Infosecurity Europe

AI threats and security risks will be a primary focus of this edition of Infosecurity Europe. Register here to attend and discover the latest developments and research in genAI and the broader cybersecurity landscape.

The full program can be viewed here.

The 2025 event will celebrate the 30th anniversary of Infosecurity Europe, taking place at ExCeL London from June 3-5, 2025.

