
Deepfakes and the End of Trust by Default

Bottom Line Up Front

Deepfake technology has made visual and auditory verification unreliable. Organizations need cryptographic proof of human identity that operates independently of what someone looks or sounds like.


Matt Jezorek

January 15, 2026 · 3 min read


The trust assumption is broken

For decades, enterprise security relied on an implicit assumption: if you can see and hear someone, you can trust they are who they appear to be. Video calls, phone conversations, and in-person meetings all carried an inherent level of identity assurance.

That assumption no longer holds.

Modern deepfake technology can generate synthetic video and audio that are indistinguishable from the real thing. Not "pretty close" or "convincing at a glance," but genuinely indistinguishable to the human eye and ear.

Real attacks happening now

In 2024, a Hong Kong finance worker transferred $25 million after a video call with what appeared to be the company's CFO. Every person on the call was an AI-generated deepfake.

This is not a theoretical risk. It is happening today, and the tools to create these attacks are becoming cheaper and more accessible every month.

The attack surface extends beyond video calls. AI-generated voice messages, manipulated audio recordings used as "proof" of authorization, and synthetic media used in social engineering campaigns are all active threats.

Why detection is not the answer

The instinctive response is to build better detection. If AI can generate fakes, surely AI can detect them.

This approach has two fundamental problems:

First, detection is a losing game. Generative models improve faster than detectors can keep up. Every advancement in synthetic media quality makes detection harder. The defender must be right every time; the attacker only needs to succeed once.

Second, even a high-accuracy detector creates a false sense of security. A detector that catches 99% of fakes still lets 1 in 100 attacks through. At enterprise scale, across thousands of interactions per day, those odds are unacceptable.
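To make those odds concrete, here is a back-of-the-envelope sketch. The interaction and attack volumes are assumptions chosen for illustration, not measured figures:

```python
# Back-of-the-envelope estimate; all volumes are illustrative assumptions.
daily_interactions = 10_000   # assumed enterprise interaction volume
attack_fraction = 0.001       # assume 1 in 1,000 interactions is an attack
miss_rate = 0.01              # a "99% accurate" detector misses 1% of fakes

attacks_per_day = daily_interactions * attack_fraction   # 10 attempted attacks
breaches_per_day = attacks_per_day * miss_rate           # 0.1 slip through

print(f"Expected undetected attacks per year: {breaches_per_day * 365:.1f}")
# -> 36.5: dozens of deepfakes per year get past a "99% accurate" detector
```

Even with these conservative assumptions, a detector-only defense concedes dozens of successful attacks a year, any one of which could be the next $25 million transfer.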

Proof over perception

The solution is not to get better at judging what is real. It is to remove the need to judge at all.

Cryptographic proof of identity operates on a fundamentally different model. Instead of trusting what you see and hear, you trust a verifiable credential that proves the human was present and consented to the interaction.

This proof is (sketched in code after the list):

  • Bound to the person: A challenge that only the enrolled human can complete, backed by biometrics
  • Bound to the moment: Time-stamped and challenge-specific, so it cannot be replayed
  • Cryptographically verifiable: The relying party can verify the proof without trusting the communication channel
  • Independent of media: It does not matter what the person looks or sounds like
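Here is a minimal sketch of how these four properties can combine, using an Ed25519 challenge-response flow built on Python's `cryptography` library. The challenge format, field names, and 60-second freshness window are illustrative assumptions, not any specific vendor's protocol:

```python
import json
import os
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the user's device generates a keypair. In practice the
# private key lives in secure hardware, unlocked only by the enrolled
# biometric (bound to the person).
device_key = Ed25519PrivateKey.generate()
enrolled_public_key = device_key.public_key()

# The relying party issues a fresh, single-use challenge for this
# specific action (bound to the moment).
challenge = {
    "nonce": os.urandom(16).hex(),        # unpredictable, single-use
    "action": "wire_transfer_approval",   # tied to this exact action
    "issued_at": int(time.time()),
}
message = json.dumps(challenge, sort_keys=True).encode()

# The human completes the biometric check, which releases the signature.
signature = device_key.sign(message)

# Verification uses only the enrolled public key and the challenge
# (cryptographically verifiable, independent of media).
try:
    enrolled_public_key.verify(signature, message)
    fresh = time.time() - challenge["issued_at"] < 60  # reject stale proofs
    print("proof valid" if fresh else "proof expired")
except InvalidSignature:
    print("proof rejected")
```

Note what never enters the trust decision: the video feed, the voice on the line, or the email thread carrying the request. A deepfake can fool the human on the call, but it cannot produce a valid signature over a fresh challenge.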

The new trust model

Organizations need to shift from "trust by default" to "verify before acting." Not with more security questions or callbacks, but with proof.

High-risk actions, including fund transfers, access grants, contract approvals, and system changes, should require cryptographic proof of the human authorizing them. Not a video call. Not a voice match. Not an email from the right address. Proof.
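As one hedged illustration of that policy, a high-risk action can simply refuse to execute unless a verified, action-bound proof accompanies the request. The `require_proof` decorator and the proof structure below are hypothetical stand-ins for the cryptographic check sketched earlier:

```python
from functools import wraps

def require_proof(action):
    """Hypothetical policy gate: refuse to run the wrapped function
    unless a proof verified for this exact action is supplied."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, proof=None, **kwargs):
            # In practice this would run the challenge-response check
            # sketched above; here a pre-verified flag stands in.
            if proof is None or proof.get("verified_for") != action:
                raise PermissionError(f"{action}: cryptographic proof required")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@require_proof("wire_transfer")
def execute_transfer(amount, destination):
    print(f"Transferring ${amount:,} to {destination}")

# A convincing video call attaches no proof, so the call fails:
#   execute_transfer(25_000_000, "acct-4411")  -> PermissionError
execute_transfer(25_000_000, "acct-4411",
                 proof={"verified_for": "wire_transfer"})
```

The design point is that the gate sits in the execution path, not in the judgment of the person on the call: no proof, no transfer, no matter how convincing the request looked or sounded.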

The technology to do this exists today. The question is how quickly organizations adopt it before the next deepfake-driven loss makes the decision for them.

Tags: deepfakes, security, AI