As LLMs and AI systems embed themselves more deeply into our cognitive processes, especially in how we express our thoughts in writing, a great irony is emerging, and it should give every keyboard-happy internet user serious pause.
If you communicate only through emojis (the ancient Egyptians would be proud of you, by the way), this doesn't concern you as much, for now. For everyone else, here's what's happening…
Firstly, a February 2026 study by ETH Zurich and Anthropic called ‘Large-scale online deanonymization with LLMs’ has shown that LLMs can now unmask anonymous online users with scary precision simply by analysing how they write. Whether it’s posts under your real name on Facebook, LinkedIn or Reddit, or comments you make under a pseudonym, your writing style can be traced back to you, the individual. The researchers did exactly that, achieving a 67% success rate with 90% precision in matching messages and comments from anonymous accounts to people with real LinkedIn profiles.
It turns out that how you write is your fingerprint in prose: the words you favour, the structure of your sentences, all of it. What once took a dedicated human forensic investigator hours to figure out, automated AI can now do cheaply in minutes.
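To make the "fingerprint in prose" idea concrete, here is a toy sketch of classical stylometry: comparing how often two texts use common function words, which writers choose unconsciously and consistently. This is purely illustrative and not the method from the study, which uses LLMs and far richer signals; the word list and similarity measure below are my own assumptions.

```python
# Toy stylometry sketch: word-choice habits as a measurable "fingerprint".
# Real deanonymization systems use far richer features (character n-grams,
# punctuation, syntax) and LLM-based analysis; this only shows the core idea.
from collections import Counter
import math

# Function words are a classic stylometric signal because authors use them
# habitually, regardless of topic. (Illustrative list, not from the study.)
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it",
                  "is", "was", "for", "on", "with", "but", "not"]

def style_vector(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two style vectors (1.0 = identical style)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# On real-sized corpora, two samples by the same author would tend to score
# closer to 1.0 than samples by different authors.
sample_a = "the cat sat on the mat and it was not amused by the dog"
sample_b = "the dog ran in the park and it was not tired of the game"
print(round(cosine_similarity(style_vector(sample_a), style_vector(sample_b)), 3))
```

On two short sentences like these the score is noisy, of course; the technique only becomes reliable across many posts and comments, which is exactly what an automated system can harvest at scale.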
Secondly, and at the other end of the spectrum entirely, is the rising trend of millions of people delegating their writing to AI tools. ChatGPT polishes LinkedIn posts, Gemini auto-writes emails in your inbox, Claude irons out grammatical mistakes, and NotebookLM is disrupting how students and teachers express themselves. Many more AI tools exist, of course, shaping how people write. The result is prose that’s cleaner, grammatically correct, and nearly always the same.
When you juxtapose the two, it’s striking how the same GenAI techniques that can identify who you are from the unique style of your writing are also busy erasing that fingerprint, flattening it into something indistinguishable. Isn’t that ironic?
If the study proves that LLM-based de-anonymisation is real (another nail in the coffin of privacy in the internet age, by the way), its effectiveness depends wholly on there being a distinctive human voice to track in the first place. As ChatGPT, Gemini and other AI writing tools homogenise the written word across the expanse of the internet, they are also slowly erasing the very human signal needed for successful identification. Funnily enough, we are becoming easier to track and harder to find at the same time, purely in terms of the words we choose to express ourselves.
It all comes down to identity: preserving who we are and our sense of self. And here I’m not alluding to the continued loss of online privacy so much as pointing to a bigger existential angst. Writing isn’t merely a tool for communication; it’s a tool for formalising our thoughts. It reflects how we reason, our perspective and outlook, and what we find worthy of saying at all. When we increasingly outsource this process to AI, we start to erase who we are, at least in terms of the written word.
AI is pulling us towards two futures at once: one where our words can betray us, and another where our words no longer sound like us at all. Which of those frightens you more?