Deepfake content has risen sharply worldwide, particularly in North America and the Asia-Pacific region. According to the Identity Fraud Report from Sumsub, a UK-based digital identity verification firm, India, along with Bangladesh and Pakistan, ranks among the ten countries in the Asia-Pacific region most affected by identity fraud facilitated by deepfake technology. The report documents a substantial rise in such cybercrime in 2023 and anticipates further escalation in the coming year.
The report delineates the prevalence of deepfake identity fraud across various countries in the Asia-Pacific region, with Vietnam leading at 25.3%, followed by Japan at 23.4%, Australia at 9.2%, China at 7.7%, and Bangladesh at 5.1%. These insights are drawn from an analysis encompassing over two million fraud attempts across 224 countries and territories in 28 industries.
Globally, the surge in deepfake content is alarming: North America experienced a 1,740% increase and the Asia-Pacific region a 1,530% rise compared to the previous year. Similar trends have been observed in the Middle East, Africa, and Latin America.
The cryptocurrency sector is particularly vulnerable to deepfake fraud, accounting for 88% of incidents in 2023, followed by fintech at 8%. Other prevalent fraud techniques include money muling, fake IDs, account takeovers, and forced verification, the last of which rose a notable 305% since 2022.
The report underscores two emerging trends in identity fraud: an increase in forged documents originating from developed economies and heightened targeting of non-regulated entities, which operate without stringent regulatory frameworks.
The misuse of artificial intelligence (AI) for identity theft involves exploiting deepfake technology to subvert biometric authentication, enhance social engineering tactics, and bypass fraud detection systems. Fraudsters employ tactics such as creating synthetic biometric data, utilizing realistic chatbots, generating deepfake videos for impersonation, producing counterfeit documents, and manipulating AI algorithms.
To counteract these threats, experts advocate for the implementation of robust identity security measures. These measures include stringent Know Your Customer (KYC) protocols, secure data storage, strong authentication methods, regular training and awareness programs, transaction monitoring, vendor vetting, updated security systems, incident response planning, physical security measures, and regular audits and reviews.
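To make the transaction-monitoring recommendation concrete, the sketch below shows a minimal rule-based monitor in Python. The thresholds, rules, and field names are illustrative assumptions for demonstration, not details drawn from the report.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Transaction:
    account_id: str
    amount: float
    timestamp: datetime
    country: str

def flag_suspicious(history: list[Transaction], new_tx: Transaction,
                    amount_threshold: float = 10_000.0,
                    velocity_window: timedelta = timedelta(minutes=10),
                    velocity_limit: int = 5) -> list[str]:
    """Return human-readable reasons why the new transaction looks risky."""
    reasons = []

    # Rule 1: unusually large single transaction.
    if new_tx.amount >= amount_threshold:
        reasons.append(f"amount {new_tx.amount:.2f} exceeds threshold {amount_threshold:.2f}")

    # Rule 2: burst of transactions in a short window (possible account takeover or money muling).
    recent = [t for t in history
              if t.account_id == new_tx.account_id
              and new_tx.timestamp - t.timestamp <= velocity_window]
    if len(recent) >= velocity_limit:
        reasons.append(f"{len(recent)} prior transactions within {velocity_window}")

    # Rule 3: first transaction from a country never seen before for this account.
    known_countries = {t.country for t in history if t.account_id == new_tx.account_id}
    if known_countries and new_tx.country not in known_countries:
        reasons.append(f"first transaction from {new_tx.country}")

    return reasons
```

In practice, flagged transactions would feed into a case-management or manual-review queue rather than being blocked outright; the point of the sketch is only to show how simple, auditable rules can complement the other measures listed above.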
While many companies currently rely on the KYC process for fraud prevention, Sumsub CEO Andrew Sever cautions that an alarming 70% of fraud activity occurs beyond the KYC stage, emphasizing the need for additional measures. Frances Zelazny, CEO of the biometric authentication platform Anonybit, suggests a novel approach involving persistent biometrics throughout the identity management lifecycle, balancing privacy, security, usability, and cost.
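As an illustration only, and not a description of Anonybit's actual design, a persistent-biometrics approach might resemble the sketch below: the same enrolled template is re-verified at each lifecycle event (onboarding, login, high-risk transaction, account recovery). The embedding format, similarity threshold, and class names are hypothetical.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class PersistentBiometricStore:
    """Keeps one enrolled biometric embedding per user and re-checks it at
    every lifecycle event instead of only at initial KYC."""

    def __init__(self, match_threshold: float = 0.85):
        self.match_threshold = match_threshold
        self._templates: dict[str, list[float]] = {}

    def enroll(self, user_id: str, embedding: list[float]) -> None:
        # A production system would encrypt or shard the template rather than store it in plain form.
        self._templates[user_id] = embedding

    def verify(self, user_id: str, probe_embedding: list[float], event: str) -> bool:
        enrolled = self._templates.get(user_id)
        if enrolled is None:
            return False
        score = cosine_similarity(enrolled, probe_embedding)
        print(f"[{event}] user={user_id} score={score:.3f}")
        return score >= self.match_threshold
```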
Industry leaders advocate for the strategic use of AI in designing effective defenses against deepfakes and other AI-based fraud tactics. Transmit Security, a US-based cybersecurity company, recommends leveraging AI to detect adversarial attacks and enhance identity security, thereby offering a comprehensive solution to combat illegal activities involving AI and deepfakes.
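To give a sense of what an AI-assisted detection pipeline looks like in code, the toy sketch below trains a simple classifier to separate genuine from deepfake samples. It uses synthetic, randomly generated feature vectors as stand-ins for artifacts extracted from face images or video frames; it is not Transmit Security's system or any vendor's production detector.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical feature matrix: each row represents artifact features extracted from
# a face image or video frame (e.g. frequency-domain statistics, blink-rate cues).
# Synthetic data stands in for a real feature extractor here.
rng = np.random.default_rng(0)
X_real = rng.normal(loc=0.0, scale=1.0, size=(500, 32))
X_fake = rng.normal(loc=0.6, scale=1.2, size=(500, 32))  # shifted distribution mimics deepfake artifacts
X = np.vstack([X_real, X_fake])
y = np.array([0] * 500 + [1] * 500)  # 0 = genuine, 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test), target_names=["genuine", "deepfake"]))
```

Real deepfake detectors rely on far richer features and deep neural networks, but the overall shape is the same: extract signals from the media, train a model on labeled genuine and manipulated samples, and route high-risk scores to additional verification.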