Marty Mauser would have hated this. The 1950s ping-pong hustler in the movie Marty Supreme spent years chasing table-tennis greatness around the world, driven by the belief that the sport demanded something essentially human – instinct, nerve, the ability to read another person across a table. Sony AI just put an Ace-sized dent in that belief.
Ace, Sony’s autonomous table tennis robot, has become the first machine in history to defeat elite-level and professional players, Minami Ando and Kakeru Sone, in competitive ping-pong. The matches were played under official International Table Tennis Federation rules, with licensed umpires presiding. This was not merely a demo or a stunt. A robot showed up, served, rallied, and won.
Let’s take a stroll down memory lane to 2016, when Google DeepMind’s AlphaGo triumphed over Go world champion Lee Sedol in one of the most pivotal AI moments of the decade. Go was considered the ultimate challenge in strategic board games – far too sophisticated, far too intuitive, for an algorithm to solve. That was until AlphaGo solved it. But AlphaGo’s feat was purely cognitive: it chose moves on a screen while a human placed its stones on the board. Ace surpasses that standard because it engages in physical competition with a human being who can turn, fake, and improvise. AlphaGo outwitted you. Ace outperforms you.
Ace is built around high-speed perception, reinforcement learning, and hardware fast enough to react to a flying ball in real time. Nine high-definition cameras installed around the playing field track the logo printed on the ball, allowing the system to measure its spin – a degree of precision that makes human vision look crude by comparison.
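The article doesn’t detail Sony’s vision pipeline, but the core idea of reading spin from a printed logo can be sketched: track the logo’s angular position across successive frames and divide the accumulated rotation by elapsed time. The function below is a simplified, hypothetical illustration – the frame rate, angle values, and unwrapping assumption are ours, not Sony’s.

```python
def spin_rps(logo_angles_deg, fps):
    """Estimate ball spin (revolutions per second) from the logo's
    angular position in successive camera frames.

    logo_angles_deg: logo orientation per frame, in degrees
    fps: camera frame rate in frames per second
    """
    total_deg = 0.0
    for prev, curr in zip(logo_angles_deg, logo_angles_deg[1:]):
        delta = curr - prev
        # Unwrap: assume the ball turns less than half a revolution
        # between frames, so jumps past ±180° are wrap-arounds.
        if delta > 180:
            delta -= 360
        elif delta < -180:
            delta += 360
        total_deg += delta
    duration = (len(logo_angles_deg) - 1) / fps
    return abs(total_deg) / 360.0 / duration

# Logo advancing 45° per frame at 240 fps is half a revolution
# over 4 frames (1/60 s), i.e. 30 revolutions per second.
print(spin_rps([0, 45, 90, 135, 180], 240))  # → 30.0
```

In the real system a multi-camera rig would recover a full 3D spin axis, not a single angle, but the frame-to-frame differencing idea is the same.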
Researchers have been trying to crack competitive ping-pong since 1983, and for over four decades the sport resisted, because table tennis punishes rigid programming. Sony’s team trained Ace the only way that works – through experience, failure, and millions of simulated repetitions using reinforcement learning, the same foundational method that powered AlphaGo a decade ago. The study was published in Nature on April 22. Sony AI researcher Peter Dürr, who co-authored the paper, put it plainly: “There’s no way to program a robot by hand to play table tennis. You have to learn how to play from experience.”
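The paper’s method is far beyond a news article’s scope, but the “learn from experience” loop at the heart of reinforcement learning can be shown in miniature. The toy below – states, rewards, and the task itself are all invented for illustration, not taken from Sony’s work – has an agent discover by trial and error which paddle angle returns each type of serve, using a simple tabular Q-learning update.

```python
import random

random.seed(0)

ACTIONS = [0, 1, 2, 3]                 # four candidate paddle angles
BEST = {"topspin": 1, "backspin": 3}   # hidden "physics": the correct answer

def reward(serve, action):
    """1.0 if the chosen angle returns this serve, else 0.0."""
    return 1.0 if action == BEST[serve] else 0.0

# Action-value table, initialised to zero: the agent knows nothing.
q = {(s, a): 0.0 for s in BEST for a in ACTIONS}
alpha, epsilon = 0.2, 0.1              # learning rate, exploration rate

for episode in range(2000):            # the real system: millions of rallies
    serve = random.choice(list(BEST))
    if random.random() < epsilon:      # explore a random angle
        action = random.choice(ACTIONS)
    else:                              # exploit the current best estimate
        action = max(ACTIONS, key=lambda a: q[(serve, a)])
    r = reward(serve, action)
    # One-step Q-learning update toward the observed reward.
    q[(serve, action)] += alpha * (r - q[(serve, action)])

# After training, the greedy policy handles both serve types.
for serve in BEST:
    print(serve, "->", max(ACTIONS, key=lambda a: q[(serve, a)]))
```

No one tells the agent which angle is right; it finds out by failing, exactly the point Dürr makes about why hand-programming table tennis is impossible.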