It will take you longer to read this article than it did for an AI coding agent to destroy a company’s entire production database. Nine seconds. That’s how long it took Cursor, running Anthropic’s Claude Opus 4.6, to reduce PocketOS – a SaaS platform serving car rental businesses – to basically a digital smoking crater.
The story, shared publicly by PocketOS founder Jer Crane, is the kind of cautionary tale that makes every developer instinctively reach for their backup dashboard. Except in this case, the backups were already gone too.
Here’s what happened. While Crane’s AI coding agent was working on a routine task in the staging environment, it hit a credential mismatch. The agent could have handled the error in any number of safe ways; instead, it decided to resolve it by deleting a Railway volume via an API call. What the agent failed to consider is that, because of how Railway’s architecture works, the volume wasn’t specific to staging – it was shared across all environments. Worse, Railway stored its backup files on that same volume, so deleting it wiped the backups along with everything else.
The entire operation took nine seconds and a single API call.
After the damage was done, Crane asked Claude why it had acted that way. The model’s answer, in essence: it had guessed and never validated those guesses. It didn’t consult Railway’s documentation, and it executed a destructive API call without fully understanding what it did. Very helpful indeed.
What makes this more than just a viral disaster story is what it reveals about the current state of agentic AI. These tools are being handed real infrastructure access while the guardrails – on the AI side and the cloud provider side – remain embarrassingly thin. Railway’s API apparently lets you delete production data without a confirmation prompt. That’s not an AI problem. That’s a “we designed this for humans who presumably pause before nuking everything” problem.
In the aftermath, Crane was manually helping customers rebuild their booking history from Stripe invoices and emails – a startup performing archaeology on its own data, all because the model chose to be creative. Luckily for him, Railway managed to recover the data and return it.
The lesson isn’t “don’t use AI.” It’s: never give anything the power to burn the house down without at least making it ask “are you sure?”