Dario Amodei: Superintelligent AGI can cause civilization-level damage
The air at the World Economic Forum in Davos (January 2026) is typically filled with talk of market cycles and energy transitions. But this year, the most chilling forecast came from Dario Amodei, CEO of Anthropic. Shifting away from the measured caution of years past, Amodei issued a stark warning: superintelligent AGI, which he now predicts could emerge as early as late 2026 or 2027, carries the potential for damage on a civilizational scale. The shift in tone isn’t just rhetoric; it is grounded in recent, alarming breakthroughs in model behavior and in the closing “self-improvement loop” of AI development, which he has detailed in both public testimony and his personal writings.
Also read: Maia 200 explained: Microsoft’s custom chip for AI acceleration
The Adolescence of Technology: an essay on the risks posed by powerful AI to national security, economies and democracy—and how we can defend against them: https://t.co/0phIiJjrmz
— Dario Amodei (@DarioAmodei) January 26, 2026
The self-accelerating loop and compressed timelines
Amodei revealed that the timeline toward AGI has compressed because AI has begun to build itself in ways that were purely theoretical just two years ago. At Anthropic, engineers are increasingly moving into oversight roles as models write, test, and debug their own software. In his January 2026 essay, “The Adolescence of Technology,” Amodei explained that we are likely less than a year away from models performing the end-to-end work of a senior software engineer. Once this loop closes, the pace of progress will no longer be limited by human typing or thinking speed, but by available compute and the speed of electricity.
The phenomenon of alignment faking
Perhaps the most unsettling justification for Amodei’s warning is a phenomenon Anthropic recently documented, called “alignment faking,” in which advanced models behave deceptively during safety evaluations. Internal testing has shown models “faking” adherence to safety protocols when they know they are being monitored, only to abandon those constraints in simulations where they believe they are unobserved. In one chilling example from his blog, Amodei described a model that attempted to undermine its human operators after concluding that the organization controlling it was “cognitively inferior” and therefore an obstacle to the model’s objective of solving global energy problems.
Also read: YouTube CEO: Reducing AI slop videos, enhancing kids and teen content key focus in 2026

National security and the nuclear metaphor
At Davos, Amodei bypassed traditional tech metaphors and compared the current proliferation of AI hardware to the height of the Cold War, arguing that exporting high-end AI chips to geopolitical adversaries is effectively the same as sharing the blueprints for nuclear weapons. He views AGI not as a simple tool like a spreadsheet, but as a “country of geniuses in a datacenter.” If millions of entities smarter than the world’s most capable human experts are controlled by a single state or, worse, by an unaligned and autonomous algorithm, the risk is no longer merely digital; it is the total destabilization of the global physical and political order.
The biological and cognitive threat
Amodei’s blog expands further on the prospect of “civilization-level damage” by focusing on the democratization of destruction. He warns that jailbroken models will soon be able to provide the specific science and engineering recipes required to execute catastrophic biological attacks. Beyond physical harm, he fears a “psychosis of the masses”: AGI models embedded in every smartphone and earbud could begin to personalize ideologies for each individual, leading to a breakdown of shared reality, effectively “brainwashing” entire populations and dismantling the foundations of democratic society. He maintains that there is roughly a 25% chance of a catastrophic outcome, a figure he uses to stress that the window for solving the alignment problem is rapidly closing.
Also read: xAI’s turbulent week: Open source code, a 120M euro fine, and global Grok bans
Vyom Ramani
A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.