In the vast, humming darkness between midnight and dawn, when most of the world has surrendered to sleep, a highly sophisticated digital industry operates at full capacity, preying not on data but on the raw material of human terror.
This twilight economy of distress, illuminated by the FBI's warning and by Entrust's 2026 Identity Fraud Report, marks a profound and unnerving shift in criminal enterprise.
Fraudsters now sidestep firewalls and other technical layers to strike directly at human psychology, constructing elaborate, high-fidelity deepfake videos designed explicitly to manufacture urgency and compliance. Vincent Guillevic, head of the Fraud Lab at Entrust, notes that this technique, known as "virtual kidnapping," is a frightening evolution of social engineering in which artificial intelligence is used not merely to bypass authentication but to exploit the primal instinct of fear.
The barrier to entry for conducting highly personalized, global scams has fallen dramatically; organized rings now operate with the scalable efficiency of a corporation, using fraud-as-a-service kits to produce convincing synthetic content from material scraped from publicly available sources.
The data underpinning this acceleration is stark and unsettling in its specifics, demonstrating that criminals are adopting AI tools far faster than traditional defenses can adjust.
Entrust’s analysis reveals that one in five biometric fraud attempts now relies on deepfakes, and the volume of deepfaked selfies used in authentication attacks has surged 58% year over year. These operations are often highly technical, circumventing even advanced protective measures through methods like injection attacks, in which pre-recorded or entirely synthetic biometric data is fed directly into authentication systems to bypass critical liveness checks; the technique grew 40%. Most unsettling of all, these enterprises run continuously as 24/7 operations and strategically concentrate their activity in the overnight hours, banking on reduced security vigilance and the disorienting vulnerability of individuals roused from sleep.
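To make the mechanics of an injection attack a little more concrete, the sketch below shows, in simplified Python, one way an authentication pipeline can refuse biometric data that was injected rather than freshly captured: by binding each capture to a server-issued, single-use challenge and a signature from the trusted capture component. This is a minimal illustration only; every name in it (SelfieCapture, verify_selfie, challenge_nonce, and so on) is hypothetical rather than drawn from Entrust's report or any real product, and production liveness and injection-detection systems are considerably more sophisticated.

```python
import hashlib
import hmac
import secrets
import time
from dataclasses import dataclass


# Hypothetical representation of a selfie capture submitted for authentication.
@dataclass
class SelfieCapture:
    frames: bytes          # raw image/video payload
    challenge_nonce: str   # nonce the genuine capture app embedded at record time
    captured_at: float     # client-reported capture time (epoch seconds)
    signature: str         # HMAC over the payload, produced by the trusted capture app


SERVER_KEY = secrets.token_bytes(32)   # shared only with the attested capture component
MAX_CAPTURE_AGE_SECONDS = 30


def issue_challenge() -> str:
    """Issue a fresh, single-use nonce before each capture session."""
    return secrets.token_hex(16)


def verify_selfie(capture: SelfieCapture, expected_nonce: str) -> bool:
    """Reject submissions that were injected into the pipeline rather than
    freshly captured: a replayed or synthetic clip cannot know the fresh
    nonce or produce a valid signature, however 'live' the imagery looks."""
    # 1. The capture must echo the nonce issued for this session.
    if not hmac.compare_digest(capture.challenge_nonce, expected_nonce):
        return False
    # 2. The capture must be recent; stale recordings are refused.
    if time.time() - capture.captured_at > MAX_CAPTURE_AGE_SECONDS:
        return False
    # 3. The payload must be signed by the trusted capture component,
    #    not fed directly into the authentication endpoint by other software.
    expected_sig = hmac.new(
        SERVER_KEY,
        capture.frames + capture.challenge_nonce.encode(),
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(capture.signature, expected_sig)
```

In this toy model, an injected pre-recorded or fully synthetic clip is rejected because it cannot echo the fresh nonce or carry a valid signature from the trusted capture path; real defenses layer passive liveness analysis and device attestation on top of checks like these.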
In the face of such calculated digital deception, which utilizes technology to cloud and distort the truth, the most powerful defense remains remarkably simple and inherently human: the thoughtful pause.
While AI-driven tools can create convincing visual and auditory replicas, they cannot replicate the intangible comfort of genuine connection and established knowledge. Awareness of these specific, complex risks is paramount. Establishing a private, shared code word within a household serves as an immediate, analogue defense against the digital whirlwind of panic manufactured by a deepfake.
Fraud thrives on urgency and isolation; taking even a brief moment to reach out and verify that a loved one is safe can immediately dismantle the entire, expensive framework of a scam before compliance can take root.
Critical Fraud Insights
• Deepfake Adoption: Deepfakes now account for one in five biometric fraud attempts, illustrating a rapid shift toward AI-driven impersonation techniques.
• Injection Attacks: Fraudsters have increasingly deployed injection attacks, circumventing liveness checks by inserting synthetic biometric data directly into authentication pipelines; the technique has surged 40%.
• Operational Timing: Fraud operations function as a continuous “24/7 enterprise,” with peak malicious activity strategically occurring overnight when digital defenses and personal vigilance may be reduced.
• Selfie Fraud Increase: Deepfaked selfies used in attempted authentication bypasses have seen a 58% year-over-year increase.

Data from Entrust’s newly released 2026 Identity Fraud Report suggests the FBI’s warning reflects a broader shift in fraud activity driven by AI …