OpenAI device missing: Inventing the future’s hard, even for Sam Altman-Jony Ive

HIGHLIGHTS

Altman and Ive’s screenless AI dream faces very real-world physics

Building the next era of computing is harder than it looks

Even visionaries must wrestle with hardware, privacy, and human limits


In May 2025, Sam Altman and Jony Ive announced a partnership that promised to disrupt what personal computing devices mean in the age of AI. Beyond smartphones or laptops, they promised to bring to the fore screenless, highly portable devices equipped with cutting-edge AI like ChatGPT in voice mode. Iron Man's JARVIS commoditised for the average Joe is how I like to imagine it right now.


Not counting a “silly” lawsuit, as Altman called it at the time, from plaintiff Iyo, which alleged that “io” (the Jony Ive design startup acquired by OpenAI for $6.5 billion in May 2025) stole its concepts for a screenless, AI-powered, voice-controlled device in an earbud form factor, the dynamic duo of Altman and Ive is finding out just how hard it can be to reinvent the personal computing wheel, in a manner of speaking.

Of course, no one said it was going to be easy. But when the whole world was led to believe that Sam Altman and his team at OpenAI had quietly been collaborating with Jony Ive and his company LoveFrom for over two years on what both parties agreed should be the first generation of natively AI-embedded devices, I thought we would surely get to see at least a rudimentary concept of what it looked like before the curtains fell on 2025.

I mean, Sam Altman and Jony Ive were so excited to tell the world about what they were working on that I didn’t think they’d make us all wait this long just to get a small glimpse of the fruits of their labour. Wishful thinking, I know!

Turns out, it isn’t as easy as it looks, giving birth to a whole new class of devices that has never existed before. Given that it took the very first wireless telephone about 10 years to go from idea to working concept, and the first laptop and the first iPhone roughly 3–4 years each, I assumed creating new things would have sped up a notch a couple of decades on. However, OpenAI DevDay 2025 has come and gone, and there are still quite a few technical challenges keeping whatever Sam Altman and Jony Ive are cooking well and truly in the oven for now.

Also read: From GPT-5 Pro to Sora 2: Every major announcement from OpenAI DevDay 2025

Among those hurdles are some formidable ones. Delivering advanced AI computation on-device is one of the biggest – packing enough processing power, energy efficiency, and low-latency performance into a screenless, earbud-sized form factor remains an unsolved engineering problem. Then, of course, there’s privacy. The idea of an always-on, always-listening device is as unnerving as it is futuristic, if you ask me. Ensuring conversations don’t inadvertently escape the user’s control will require privacy engineering that feels seriously difficult to pull off.

User experience, too, is proving a beast. Without a screen, the device has to rely purely on context – your voice, tone, ambient sound, even random noises you make – to interpret intent. That’s a tall order even for today’s most sophisticated multimodal models. Balancing how much intelligence lives locally versus in the cloud is another minefield, because, if you think about it, cloud processing introduces lag and privacy risks, while on-device AI demands hardware that doesn’t yet exist at scale. And perhaps most intriguingly, the team is said to be wrestling with how “personal” this assistant should feel – how proactive, how emotional, how much initiative it takes before crossing into uncanny valley territory.

Also read: From Canva to Coursera: Why so many apps are rushing into ChatGPT’s new app ecosystem

While hardware is famously difficult to get right, these reported issues suggest that the ambitious project still has a difficult path ahead. More than Sam Altman, it would be wise to pay attention to what Jony Ive has to say about the AI hardware project he’s working on closely with OpenAI – because unlike Altman, he actually does know a thing or two about designing and creating iconic tech products that have stood the test of time. Despite the challenges, Ive remained optimistic, saying in October 2025 that the devices could “make us happy, less anxious and less disconnected.”

In the end, I suspect that when Altman and Ive finally show us what they’ve been building, it won’t look like anything we’ve held before. Maybe it’ll whisper instead of shine, or disappear into the fabric of daily life, the way great technology often does. Until then, all we can do is wait – and pay attention to what’s coming.

Also read: After NVIDIA, OpenAI chooses AMD chips for ChatGPT’s future: But why?

Jayesh Shinde


Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.
