Cybercriminals are using artificial intelligence (AI) to launch more sophisticated social engineering attacks, and experts are warning that it is becoming increasingly difficult to distinguish between what is real and what is AI-generated.
This trend is being highlighted at the UK government's AI Safety Summit, which is focusing on the risks of AI and strategies to mitigate them.
Prominent ways malicious actors are using generative AI tools include crafting more realistic phishing emails and deploying deepfakes to impersonate the voices of senior business leaders, defrauding companies out of vast sums.
These threats are on the radar of renowned social engineering expert Jenny Radcliffe, aka the People Hacker. During the recent ISC2 Security Congress, she told Infosecurity that AI will be a “game-changer” in social engineering attacks.
“Unfortunately, it’s on the side of the criminals because it’s difficult to distinguish what’s real and what’s AI-generated. The technology is learning all the time, correcting any mistakes that we do spot, and I think normal people are going to really struggle to spot a scam or con that’s AI-generated,” said Radcliffe.
A Human Answer to a Technical Problem
During her keynote address at the ISC2 Security Congress, Radcliffe argued that we must put our faith in humans to overcome AI-based threats. Speaking to Infosecurity, she said: “Unfortunately, it’s a very technical problem that can only be solved by a human solution, which is knowing what to look for.”