Every product has a copilot now. Most are laughably bad. Worse, every copilot behaves and is formatted a little differently in every app.
The core of the problem, I believe, is this: predictability, or rather the lack of it.
Click any "AI Assistant" button and you're rolling the dice. Will it rewrite your entire document? Suggest a single word change? Completely misunderstand what you wanted? Nobody knows.
This breaks something fundamental about interface design.
Good UI design is about removing anxiety. When you see a red "Delete" button, you know exactly what happens next. The trash icon means trash. The save button saves. Users build mental models based on consistent, predictable behavior.
AI throws all of that out the window.
Every AI prompt is a black box. You type something in, cross your fingers, and hope the algorithm interprets your intent correctly. Sometimes it nails it. Sometimes it does something completely random. Sometimes it just fails silently.
We've created interfaces where the primary interaction is guessing.
This isn't just bad UX—it's the opposite of what interfaces should do. Instead of reducing cognitive load, AI features often increase it. Users could easily spend more mental energy trying to craft the perfect prompt than they would just doing the task manually.
Most "copilot" features feel like they were added because everyone else has one, not because they actually improve the user experience. They're checkbox features. Marketing bullets. Not tools that genuinely help people get work done.
The best AI implementations hide their complexity. They work predictably, even if the underlying technology is probabilistic. They feel like magic, not gambling.
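One way to get that predictability is to constrain the model's output to a small, typed set of actions the UI already knows how to render, and to degrade anything else to a visible no-op. A minimal sketch of the idea, in Python; the `model_suggest` stub and the action names are hypothetical, standing in for a real model call:

```python
import json

# The UI understands exactly three actions, each with a fixed,
# predictable rendering: an inline rewrite, a single-word suggestion,
# or nothing at all.
ALLOWED_ACTIONS = {"replace_selection", "suggest_word", "no_op"}

def model_suggest(prompt: str) -> str:
    # Hypothetical stand-in for a real model call; returns raw JSON text.
    return json.dumps({"action": "suggest_word", "payload": "concise"})

def constrained_suggest(prompt: str) -> dict:
    """Parse the model's output and refuse anything outside the schema.

    Whatever the model produces, the UI only ever sees one of the
    three known actions. Unparseable or off-schema output degrades
    to a no-op instead of a surprise rewrite.
    """
    try:
        suggestion = json.loads(model_suggest(prompt))
    except json.JSONDecodeError:
        return {"action": "no_op", "payload": None}
    if suggestion.get("action") not in ALLOWED_ACTIONS:
        return {"action": "no_op", "payload": None}
    return suggestion

result = constrained_suggest("tighten this sentence")
print(result["action"])
```

The underlying model is still probabilistic, but the interface contract is not: the user learns that the button always produces one of three visible outcomes, which is the same kind of mental model a Delete button builds.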
But those are rare.
Most copilots are just expensive guessing games dressed up as innovation.