Sign Up to Our Newsletter

Be the first to know the latest tech updates

[mc4wp_form id=195]
Tech News

Watch out – even small businesses are now facing threats from deepfake attacks

Watch out – even small businesses are now facing threats from deepfake attacks




  • Three in five businesses have experienced deepfake attacks recently, Gartner finds
  • Audio and video deepfakes are becoming more accessible to attackers
  • Prompt injection is also giving criminals access to sensitive company information

Gartner says even small businesses are facing a spike in cybercrime, and AI could be to blame – more than three-fifths (62%) of organizations reporting AI-driven attacks in the past year.

The firm’s study found three in five (62%, again) experienced deepfake attacks, with 44% experiencing deepfake audio attacks, making this the most common attack vector compared with video deepfakes (36%).

Prompt-injection attacks against AI tools (32%) and attacks on enterprise generative AI application infrastructure (29%) were also noted, showing how AI isn’t just being used to strengthen crime, but it’s also serving as a useful vulnerability for many criminals.

Is AI causing more cybercrime?

“As adoption accelerates, attacks leveraging GenAI for phishing, deepfakes and social engineering have become mainstream, while other threats – such as attacks on GenAI application infrastructure and prompt-based manipulations – are emerging and gaining traction,” Gartner VP Analyst Akif Khan explained.

The report details how rapid AI development has seen deepfakes go from complex to instant, with audio deepfakes now being generated in real time to make them highly convincing and personalized.

Although real-time, person-specific deepfakes remain very expensive, only time stands between limited use and widespread use.

On the field, cybersecurity firms and analysts are seeing deepfakes being used as an initial attack vector, before attackers revert to simpler and cheaper methods. For example, scammers sometimes fake a CEO on a call before switching to text-only social engineering methods.

When it comes to exploiting companies’ AI systems, attackers are frequently observed tricking systems into revealing sensitive information or abusing integrations to execute code by giving malicious prompts.

Looking ahead, companies of all sizes – not just multinational enterprises – are being advised to up their game, with the zero-trust approach emerging as a firm favorite to block out unwarranted activity.

You might also like



Source link

Craig Hale

About Author

TechToday Logo

Your go-to destination for the latest in tech, AI breakthroughs, industry trends, and expert insights.

Get Latest Updates and big deals

Our expertise, as well as our passion for web design, sets us apart from other agencies.

Digitally Interactive  Copyright 2022-25 All Rights Reserved.