Will Zero UI Triumph?

At the moment, you might be hearing a lot about 'Zero UI'. I wouldn't be surprised if you're seeing the phrase 'a new paradigm' connected to it a lot too! But what is Zero UI? What does it mean for us? Let's take a look.


Zero UI


Zero UI means just that: there's no UI (user interface). There are no icons to click and no virtual keyboard to type on. In the era of Cortana and Siri, are they truly needed?

If you can tell your phone or computer what you want it to do, isn't that a straighter line to the same destination? If we can replace unlocking our phone with a fingerprint sensor and then just say 'Phone: call Mum', then that seems to be a lot simpler than having to type in a code or password, tap a button or two to get to the contacts, and then scroll through them to find Mum.

Anyone who's spent any time using a desktop or laptop computer will also be familiar with clicking and hotkeys. Anyone using a computer for even routine tasks is going to spend a great deal of their life clicking multiple times to get simple things done, or using hotkeys. It's precisely because of these annoyances that macros, where sequences of key presses, for example, are recorded and 'replayed' by a single hotkey, became popular.
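The record-and-replay idea behind macros can be sketched in a few lines. Everything here is hypothetical and purely illustrative: a `Macro` class that stores actions instead of running them, then fires the whole sequence on a single call, the way a macro hotkey would.

```python
# A minimal sketch of the macro idea: record a sequence of actions
# once, then replay the lot with a single trigger. The Macro class
# and the simulated "key presses" are hypothetical illustrations.

class Macro:
    def __init__(self):
        self.actions = []  # recorded steps, kept in order

    def record(self, action, *args):
        # Store the step rather than running it immediately.
        self.actions.append((action, args))

    def replay(self):
        # Run every recorded step in sequence, as one hotkey would.
        return [action(*args) for action, args in self.actions]

# Simulated "key presses" -- each just returns a description.
def press(key):
    return f"pressed {key}"

macro = Macro()
macro.record(press, "Ctrl+C")
macro.record(press, "Alt+Tab")
macro.record(press, "Ctrl+V")

print(macro.replay())  # three steps fired by one call
```

Real macro tools hook into the operating system's input events rather than plain function calls, but the pattern is the same: one trigger, many recorded steps.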

One argument that can be made in favour of UIs is that they help us abstract something that we don't understand or that doesn't otherwise make sense to us. Modern devices are already way beyond what most people understand. Maybe engineers know what's happening when we tell a device to refresh a page, but do you or I? I've been playing computer games ever since the Pac-Man era, but I'm well aware that I understand what I do with the software, not what the software does to allow me to use it. However, there's a counter-argument: such abstraction indicates that the fundamental usability of the device is innately flawed and that the cracks are being papered over. Zero UI, then, could lead to these fundamental flaws being addressed.



We may be trying to run before we can walk though. Telling our phone to 'call Mum' is a really simple task, a great deal simpler than much of what we do with a UI. Imagine saying to your voice-controlled phone 'check the time, check phone credit, top up, call Mum...' This is a simple sequence of tasks, but can you imagine Cortana or Siri keeping up with that stream, let alone actually doing any of it? We know that our current systems of interaction work; we have the certainty that when we click or tap on something, the device in question will act in a predictable way. Would we feel the same way about trusting our phone to access our bank account automatically, rather than stepping through that process ourselves?

We should also remember that the screen is much more than a place for the UI and a place for us to view the device's output. The screen can be customised, meaning that it can become a place for downloaded files, shortcuts, images, reminders etc. For example, one of the things on my desktop is an image of a friend. This file is far more than just a jpeg to me. It triggers memories of meals we've shared, laughter we've shared, the sadness of saying goodbye, the hope that it won't be long before we meet again, and the memory of her faith in me. That's a lot of complexity for a single image, something that couldn't be encapsulated by even the most sophisticated of databases. It also highlights that we think in complex ways about simple things, not in the most efficient way possible. Our devices are supposed to be extensions of us, not us of them.

While we're on the subject of thought, relying on Zero UI fails to take into account those people who think in different ways. Maybe talking to your phone is a more natural way of interacting for a lot of people, but that doesn't mean that it's a more natural way for everyone. For example, the brother of Rosie King can't speak.

Can someone designing a Zero UI system create one that will work for people who can't speak, or who are self-conscious as a result of high sensitivity, something closely connected with the autism spectrum? Can a Zero UI designer create a design that will work for anyone who, as a result of any condition, disorder, or syndrome, doesn't think in a typical way?

We could also run into social problems, not as awkward as the ones surrounding Google Glass, but more than anti-social enough to be annoying. Imagine you're having dinner with someone. The food's good, the wine's great, and the mood is special. Suddenly, your date yanks out their phone and starts waving their hands at it. Or imagine that you're on the perfect date, when someone at the next table grabs their phone and says loudly 'check the time, check phone credit, top up, call Mum...' Even if we never need to worry about rogue AIs, perhaps we should wonder what will happen to us if the person we speak to most on a given day is our digital virtual assistant. Will the personality of this device and the oh-so-funny responses it can give delude us into thinking that we're getting social contact? Also, certain things in these assistants are hard-coded, meaning that what we get isn't what we need, but what the company that developed the assistant will permit it to give us.



Only time will tell whether Zero UI is here to stay, or is just another fad/paradigm. In the meantime, the greatest barrier to its adoption is simply inertia: we know that physical interaction with devices works and therefore have the certainty that devices will respond in a predictable way. With decades of experience in typing, clicking, and tapping behind us, Zero UI has got a lot of evolving to do. Not only does it need to develop in ways that enable it to sync with how we really think, we also need to learn to trust it.

Right now, Zero UI can be uncomfortable. There's no doubt that interaction with devices will change over time, but with Zero UI are we on the verge of a new breakthrough, or is this going to be a false dawn like voice command in cars?

What do you think of Zero UI? You can share your views in the comments.