iProov, a provider of biometric identity verification solutions, announced that an attack scenario demonstrated by its in-house Red Team has been published by MITRE ATLAS, a global knowledge base for AI security, threat mitigation, robustness, and privacy.
The case study documents a high-risk vulnerability in remote identity verification processes that affects users worldwide.
iProov’s contribution includes a detailed procedure showing how face-swapped imagery injection attacks can bypass mobile Know Your Customer (KYC) systems.
The study places iProov alongside contributions from organisations including Microsoft, NVIDIA, IBM, Intel, Cisco, Palo Alto Networks, Kaspersky, CrowdStrike, and Trend Micro, all working to inform the development of future AI defence frameworks.
“Contributions from across industry, academia, and government, ranging from red-team findings to operational threat insights, are essential to advancing the accuracy and completeness of the MITRE ATLAS knowledge base. When organisations openly share data and expertise, we collectively enhance the security and resilience of AI-enabled systems,”
said Doug Robbins, Vice President, MITRE Labs.
Andrew Newell, Chief Scientific Officer at iProov, added:

“We’ve seen an explosion in attack vectors relating to identity verification over the last 12 months, largely driven by advances in generative AI and the wide availability of low-cost tools. The publication of this latest MITRE ATLAS case study is part of the vital process of identifying and documenting such methodologies.”
The Red Team demonstrated that AI-generated deepfakes, delivered through virtual camera applications, can bypass active liveness detection, a mechanism that analyses image artefacts and user movement. By streaming a deepfake video feed during a mobile KYC session, the team successfully authenticated as a fictitious identity, highlighting risks to banking, financial services, and cryptocurrency applications.
iProov’s research reinforces the need for continuous verification and for adherence to rigorous standards, such as the European CEN 18099 standard, which sets robust testing protocols for liveness detection.
The work aims to inform security analysts and AI developers across sectors and to encourage collaboration on strengthening AI security, threat mitigation, and privacy practices.
Featured image credit: Edited by Fintech News Singapore, based on image by sumitbiswas35244 via Freepik