If you’ve been using Meta’s AI chatbot to generate text or images, there’s an important privacy issue you should be aware of. A bug in the system may have allowed other users to see your private prompts and the responses generated by the AI. Although the bug has now been fixed, it was live for several weeks, raising concerns about the security of your data when using AI tools. Meta told TechCrunch that no misuse was detected, but the vulnerability shows how even trusted platforms can have unexpected lapses.
The issue was discovered by Sandeep Hodkasia, the founder of security testing firm AppSecure. Meta paid him a $10,000 bug bounty for privately disclosing the flaw on December 26, 2024, according to TechCrunch, and rolled out a fix on January 24, 2025.
According to Hodkasia, the bug was linked to how Meta AI handles prompt editing. When users tweak a prompt to get different text or image responses, Meta’s systems assign a unique number to that specific prompt and response. While monitoring his browser’s network traffic, Hodkasia discovered that by simply changing this number, he could view the prompt and AI-generated reply from another user.
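TechCrunch's report doesn't publish the actual requests involved, so the sketch below is only a generic reconstruction of the pattern Hodkasia describes: an authenticated user swaps the numeric ID in an otherwise legitimate request, and the server hands back someone else's conversation. The endpoint, cookie, and ID range are all invented for illustration and do not reflect Meta's real internal API.

```python
import requests

# Hypothetical names throughout: the endpoint, cookie, and ID range are
# invented for this sketch and are not Meta's actual internal API.
BASE_URL = "https://ai.example.com/api/prompts"
COOKIES = {"session": "the-researchers-own-valid-session"}

def fetch_prompt(prompt_id: int):
    """Request the prompt/response pair stored under a numeric ID."""
    resp = requests.get(f"{BASE_URL}/{prompt_id}", cookies=COOKIES, timeout=10)
    return resp.json() if resp.status_code == 200 else None

# The essence of the bug: the numeric ID is the only thing selecting the
# conversation, so incrementing it walks through other users' prompts --
# the server never verified that the requester owned each ID.
for prompt_id in range(100000, 100010):
    record = fetch_prompt(prompt_id)
    if record:
        print(prompt_id, record.get("prompt"))
```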
Meta’s servers weren’t properly checking whether the person asking to view the content was actually allowed to see it. And since the unique numbers used for prompts were “easily guessable,” as Hodkasia described, a determined attacker could have scraped users’ original prompts by rapidly cycling through prompt numbers with automated tools.
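Security researchers call this class of flaw an insecure direct object reference (IDOR), and the standard fix is an explicit ownership check on every lookup. Meta hasn't published the details of its patch, so the Flask sketch below only illustrates the general pattern, with every name invented for the example:

```python
from uuid import uuid4
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only-example-key"  # illustration only

# In-memory stand-in for a prompt store; a real service would use a database.
# Maps prompt_id -> {"owner": user_id, "prompt": ..., "reply": ...}
PROMPTS = {}

@app.route("/api/prompts/<prompt_id>")
def get_prompt(prompt_id):
    record = PROMPTS.get(prompt_id)
    if record is None:
        abort(404)
    # The check at the heart of the bug: confirm the requester actually
    # owns this prompt before returning it, rather than trusting the ID.
    if record["owner"] != session.get("user_id"):
        abort(403)
    return jsonify({"prompt": record["prompt"], "reply": record["reply"]})

def store_prompt(user_id, prompt, reply):
    # Random UUIDs instead of sequential integers also remove the
    # "easily guessable" property Hodkasia exploited.
    prompt_id = uuid4().hex
    PROMPTS[prompt_id] = {"owner": user_id, "prompt": prompt, "reply": reply}
    return prompt_id
```

Either measure alone narrows the attack: the ownership check stops direct access outright, while unguessable IDs defeat the automated enumeration described above.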
This isn’t the first time Meta AI has faced privacy concerns. When the standalone Meta AI app launched earlier this year to rival tools like ChatGPT, some users accidentally made public chats they believed were private.