Introduction
This news article has been flagged for fact-checking amid growing public concern that the rise of sophisticated “agentic” AI systems is fueling an unstoppable wave of identity theft and deepfake-related fraud. With these technologies rapidly entering organizational and government systems, readers are asking whether anyone can keep pace, making it crucial to separate well-founded threats from exaggerated claims.
Historical Context
As artificial intelligence has advanced from basic automation to highly autonomous “agentic” AI, the cybersecurity landscape has changed rapidly. Over the last decade, identity-driven threats have outpaced traditional network attacks, with cybercriminals now focusing on exploiting digital credentials and synthetic “non-human identities” (such as bots or AI agents) to penetrate defenses. The proliferation of deepfake technology and AI-powered identity theft is not limited to speculation—high-profile incidents and legislative responses, like Ohio’s 2025 surge in AI-driven fraud and federal laws addressing deepfake abuse, have underscored the real and present risks facing both individuals and organizations.
Claim #1: “Agentic AI systems are creating new opportunities for identity theft and deepfake fraud that governments and companies are powerless to stop.”
The evidence confirms that emerging agentic AI is indeed opening up new vectors for identity theft and deepfake fraud. Cases in Ohio in 2025 revealed a dramatic increase in AI-driven crimes, from deepfake video scams to voice-cloning fraud. Financial institutions have also recorded a significant spike in synthetic identity attacks, with identity fraud rates rising year over year. However, the assertion that governments and companies are “powerless” is misleading and lacks context. Legislative measures like the TAKE IT DOWN Act (enacted May 2025) and tools such as Vastav AI show that both governmental and corporate actors are actively responding, though significant challenges remain. The fight against these crimes is ongoing and difficult, but not hopeless or static. New laws and detection technologies continue to evolve alongside the threats, reflecting a more nuanced reality than the article’s tone suggests.
Claim #2: “Industry reports contend that non-human identities (NHIs) now outnumber human users by 82-1.”
This dramatic figure is accurate and well-supported by recent industry research. Both TechRadar and Rubrik Zero Labs confirm that NHIs, including API accounts and AI agents, outnumber humans 82 to 1 within many enterprise systems. The rapid growth of connected devices, bots, and automated agentic software has created a complex identity landscape in which machine-driven accounts vastly outnumber employees. This proliferation multiplies the potential targets and entry points available to cybercriminals, underscoring the urgency of robust identity management.
Claim #3: “The overwhelming majority of today’s breaches… are predicated on exploiting trust and valid credentials rather than circumventing network defenses.”
This claim is well supported by recent data. Industry studies indicate that the majority of breaches exploit privileged credentials and trust in identity systems rather than technical weaknesses in network defenses. Forbes reports that three out of four cyberattacks now hinge on credential compromise, and Microsoft’s own internal analyses find that over 97% of identity attacks are password-related. The attack surface has shifted substantially: attackers increasingly target the human and non-human identities that grant system access instead of hunting for exploitable software vulnerabilities.
Claim #4: “As the adoption of agentic AI, remote work, and cloud migration accelerate, identity management—not networks—has become the primary attack surface.”
This assertion is accurate. As organizations move to cloud-based environments, embrace distributed workforces, and deploy autonomous AI agents, the focus of cyber threats has moved from breaching network perimeters to compromising identities. Delinea and Deep Instinct reports both reveal dramatic increases in attacks that bypass old-style perimeter defenses, targeting cloud credentials, API keys, and machine identities—an attack surface projected to reach over 45 billion NHIs by 2025. Security professionals are now emphasizing identity resilience as organizations’ most important defense.
Conclusion
After a thorough review comparing the article’s claims with the latest research and cybersecurity trends, it is clear that agentic AI technologies do create new, substantial risks for identity theft and deepfake abuse. The article’s underlying warnings about the growth of non-human identities and the centrality of identity-based attacks are well-substantiated. However, its implication that governments and companies are nearly powerless overlooks the surge in both legislative action and technological progress actively aimed at these threats. Although eliminating these risks completely is not feasible at present, the convergence of policy, innovation, and public awareness is steadily building more robust defenses. Individuals and organizations alike should remain cautious but not resigned—real, ongoing progress is being made, and vigilance paired with smart policy can mitigate harm.
Want to make sure you know the facts? Submit your own news article or claim for free fact-checking—our mission is to help you stay informed and in control.
Take Action Now
Stay ahead of digital misinformation and protect yourself with DBUNK. Download the DBUNK App today for reliable, real-time fact checks and news analysis!
Link to Original Article
You can read the full news article here.