Claude announces ID verification: What it means for your account and privacy
Your AI chatbot now wants to see your passport. Anthropic wants Claude users to verify their identity using a government ID, so if you use Claude and haven’t seen the prompt yet, there’s a good chance you will soon. AI tools have become deeply embedded in our daily work and lives, and anonymity at scale can create real problems – abuse, policy violations, underage access – while a simple email sign-up offers almost no friction against any of them.
Also read: Anthropic uses AI agents for AI alignment breakthrough, but at what cost?
How it works
The verification process is handled through Persona Identities, a third-party identity verification partner. Users are asked to present a valid, government-issued photo ID like a passport, driver’s licence, or national identity card, along with a live selfie taken via phone or webcam. According to Anthropic, the entire process typically takes under five minutes. More importantly, your ID images and biometric data are stored by Persona, not on Anthropic’s own systems. Anthropic retains access to verification records but does not copy or hold those images independently. All data is encrypted.
The privacy question

Also read: This mom is running 11 OpenClaw instances to manage her entire family
This is what I have been thinking about since I saw the announcement. Handing a government-issued ID to an AI company feels like a step too far. Anthropic has been explicit about its constraints: verification data will not be used to train models, will not be shared with advertisers, and will not be sold to third parties. Persona is contractually limited to using the data solely for fraud prevention and verification improvement.
Whether you trust these commitments is another question entirely, but it is a fair one to ask. There’s something uncomfortable about handing a government-issued ID to an organisation whose entire existence is built on consuming, processing, and learning from data. Anthropic has always appeared more principled than most, but “we promise not to misuse it” is a weak guarantee. I do hope the data policies are as well intentioned as they are made out to be.
Anthropic says verification will be prompted for certain capabilities, routine platform integrity checks, and other safety and compliance measures. It is also meant to enforce age restrictions, with under-18 access listed as grounds for account suspension.
Identity verification isn’t a perfect solution to AI misuse. However, its introduction does mean the era of frictionless, consequence-free AI access is ending. Platforms are trying to introduce accountability – for users and providers alike. Whether that feels reassuring or intrusive probably says something about how you’ve been using Claude.
Also read: Meet Luna, an AI agent running a full-fledged retail store
A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.