I don't want a 'Smart' Home, I want a Smarter Home

Living a simple, usable, connected life

Knocki, Amazon Echo, Google Home, Amazon Dash (and Button), the Ikea Smart Kitchen Concept. All of these products and ideas share one core value: making daily tasks available instantly and on demand. Putting what you need where you need it, without any additional cognitive load. Each of these products or concepts takes a different approach to the problem, but they are all essentially based on the idea of letting you do a task a little bit easier or faster. They may not be solving world-changing issues, but they do want to make your life a little bit easier.

Voice Assistants and Conversational UI

Amazon Echo and Google Home take the conversational approach. We all know how to talk to someone, so why not talk to our devices? “Alexa, turn on the lights.” “Okay Google, how long is my commute to work today?” Each of these phrases replaces a simple task: getting up to turn on the lights, or checking your commute on a website or app. Neither task is particularly hard, but when you’re in the middle of something else, being able to quickly use a voice assistant can save valuable time. If I’m working on a project and it gets dark, getting up to turn on the lights may seem merely tedious, but the cost of context switching and losing focus on what I’m doing could end up costing hours of work. The cost of context switching is a real and well-documented phenomenon.

The commute question offers a more complex and interesting example that gives insight into how we might use technology down the road. First, there is the idea that your AI assistant has external context for the conversation: “my commute to work.” I did not ask for specific directions from point A to point B; I did not even give an origin point. The phrasing of the question implies that I am at home (“my commute”), and that the assistant knows where I work, or at least the location of a place I have identified as “work.” Using a traditional mapping app or website, I would need to input at least an end point, and either a starting point or permission to use my current GPS position as one. I must then go through a series of interactions simply to get the duration of the trip and any potential traffic. If my assistant can do all of this while I’m getting ready, it saves me time and, again, lets me focus on something I have deemed more important at the moment.

Action at Your Fingertips: Tap and Click Interfaces

The Amazon Dash Button was a particularly unique product when it came to market. Many people didn’t quite understand the point of a physical device with a single purpose: click and instantly re-order an item from Amazon. Was there a real-world use case for such a specialized product? Did anyone really need this? It seemed like another tech-bubble product designed to solve a problem that didn’t actually exist. Even if that was the case, it was an interesting experiment and opened the door for people to continue exploring the concept.

Amazon themselves expanded on the button’s concept when they released the Amazon Dash. This new product has a barcode scanner and voice recognition. It’s tied to AmazonFresh and allows the end user to quickly scan a barcode or say an item name and add it to their cart. Grocery delivery services are not new — PeaPod was founded in 1989 — but having the ability to quickly scan a barcode and know that you’re getting the exact same product is a new innovation that makes the process that much easier.

Knocki is a new product being kickstarted right now. It brings automation to your fingertips, quite literally. The Knocki team is promising the ability to place their product on any surface and program it to understand a series of taps. You can then set the device to perform any “smart” function you would like. Place it under a coffee table: three taps to play/pause the television. Place it behind a door and it can send you a notification when someone knocks. The idea of the Knocki is particularly exciting to me because it essentially promises the possibility of turning any surface into an invisible interface. As we teach in UX when building unobtrusive UIs: the best interface is the one you don’t notice. Being able to tap on a table is exactly that. There is no interface, no buttons, no notifications — just you, able to accomplish a task when and where you want.
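The tap-pattern idea can be sketched in a few lines: a surface sensor counts taps and looks up a user-assigned action. This is a minimal illustration of the concept only — the function and action names here are hypothetical and do not come from Knocki's actual API.

```python
# Hypothetical sketch of the Knocki concept: a user binds tap
# patterns on a surface to named "smart" actions, and the device
# dispatches on the number of taps it detects.

# User-configured bindings (illustrative names, not a real API)
ACTIONS = {
    2: "toggle_lights",
    3: "play_pause_tv",
    4: "send_knock_notification",
}

def handle_taps(tap_count: int) -> str:
    """Return the action bound to a tap pattern, or 'ignore' if unbound."""
    return ACTIONS.get(tap_count, "ignore")
```

The appeal is that the whole "interface" is the lookup table: no screen, no buttons, just patterns the user has chosen.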

Smarter Living

In 1999 Disney came out with the movie Smart House. This house could cook, clean, and provide anything the family needed. Often it feels like this is the goal we’re striving toward with all of the “smart home” and “home automation” technology, but it also feels like we’re missing the key point: it all ran through a single interface, the house’s AI.

The modern smart home could have any number of brands attached to it: Nest, Samsung, Philips, Amazon, Google. And without some third party tying them together, none of them really want to talk to each other. I don’t want 14 different apps or systems that I have to set up and control. I don’t want to fumble through an interface every time I change a lightbulb or plug in an appliance.

These new interface-less products and ideas are pushing us to rethink how we interact with our world. The smarter products get, the less guidance we will need to interact with them. You can ask Alexa to purchase something for you; you can quickly tap on your table to adjust your thermostat; one day you may be able to place ingredients on your kitchen counter and have it suggest a recipe for dinner. In the future we may not need visual interfaces at all — these “smart” things are the stepping stones to that future. I look forward to what it holds.

As always, keep building better.

Adam Sedwick

I work on Design systems and Advocate for Accessibility on the web.

