The text UI paradox

Text is supposedly the UI of the future. Chat interfaces are everywhere. We talk to ChatGPT, Claude, and countless AI assistants through text. The entire AI revolution runs on typing words into boxes.

So why is mobile absolutely terrible at text?

Think about it: when you need to write anything substantial, you reach for a laptop. Mobile keyboards are cramped, autocorrect is aggressive and wrong, and thumb-typing longer messages feels like punishment. We've optimized the world's most personal computing device for everything except the interface that's supposed to define our digital future.

Voice was meant to solve this. For decades, we've been promised that speaking to our devices would free us from keyboards. Apple launched Siri in 2011. Amazon's Alexa arrived in 2014. Google's been pushing voice for even longer.

But voice input still sucks.

It's not private — you can't dictate a sensitive email on the train. It's error-prone — try explaining technical concepts to voice recognition. And it's surprisingly low bandwidth. Reading is faster than listening. Typing (when you have a real keyboard) is faster than speaking. We all hate voice messages because they waste our time.

This creates a fundamental tension in computing. The interfaces we're now building assume text input. But the devices we carry make text input painful. Voice hasn't bridged that gap and likely won't anytime soon.

Maybe this points to something bigger: the future isn't actually text-first. Maybe text interfaces are just a bridge—a way to interact with AI until something better emerges. Or maybe we're heading toward a world where serious work happens on devices with real keyboards, while mobile becomes purely consumptive.

Either way, the contradiction is real. And it suggests our assumptions about the "text UI future" might need rethinking.

COPYRIGHT © 2009 - 2025 Stijn Bakker.