Josh Clark of the design strategy firm Global Moxie is constantly searching for magic. Not the type that lets you conjure unseen servants out of thin air, or call down lightning bolts. Josh wants to craft experiences that take our breath away, illusions written in code that have equal parts panache and utility.
That search has led him to the Internet of Things, and to thinking about ways to let humans and devices interact in amazing ways.
At Future Insights Live in June, Josh will show off the magic he's either found, or helped to create. Here is a lightly edited version of an interview I did with Josh about magic, technology and designing interactions for the Internet of Things.
Ian Murphy: What’s the inspiration behind your keynote at Future Insights Live, entitled “Mind the Gap: Designing in the Space Between Devices?”
Josh Clark: We’ve been putting all of our effort in the last few years into trying to get our information and services to look great on all these different devices. We’re getting better at it; we’re kind of getting the hang of it, and made that adjustment.
What’s happening now is that we’ve been so successful that people are using all of our services on a bunch of different devices, often simultaneously.
There was this crazy study - it was in the UK, but I think it's probably pretty representative of the US - looking at people who have multiple gadgets and how they spend an hour in the evening. On average, people changed devices 21 times in an hour, moving between phone and tablet and laptop. That's 21 different changes, and 95 percent of them while the TV was on in the background.
Also there was this great Google study about 18 months ago that shows two-thirds of people who have more than one gadget complete shopping tasks across gadgets. So it’s something like start on the phone, finish on the laptop or on the tablet. So it’s like moving sequential tasks across gadgets.
The thing is, neither our services nor our interfaces are set up for that. So we wind up doing all these crazy hacks, like emailing ourselves constantly, or sending a text to the person sitting next to us. It seems like there is this real opportunity to start designing interactions between devices.
There is actually a lot of really cool magic you can do there as you start to think about how we can transfer not only content, but action and intent between devices, like transferring an activity midway through from one device to another. If you start designing for sensors in that way, you can actually have these really cool physical interactions, where you actually seem to fling information from one device to another, where they're talking to each other in a sort of social way.
What this really means is that we're not really designing just for an individual screen, but for interaction between screens, and a lot of fun stuff happens there that can really feel like magic: shaking information from one device to another, or grabbing it from thin air and throwing it into another device. It's a really exciting time for interaction design.
Is it hard for designers, people who have worked very visually, to wrap their heads around the space between devices?
Yes, it’s a brand new way to think about things. Touch devices introduced this shift a little bit, but for 30 years we had been designing information interfaces that were strictly visual. Then touch came around, and it was like “oh, I can hold this thing in my hand, and I have to think about more than just how it looks, I have to think about how it feels in the hand.” Now we have an opportunity where we can actually move the interface off the device itself. That’s really a new thing, and it gets into the areas of industrial, ergonomic and behavioral design, which are really different disciplines from visual design.
The exciting thing for me is that for a long time people have been talking about the Internet of Things. This is, on the one hand, about considering how we give physical objects a digital presence. But at the same time we’ve been coming to it from the other direction with mobile: how do we give digital interfaces a physical presence by etching them onto these slabs that we carry out into the world with us?
I really think it’s the two coming together that is going to create these really interesting opportunities as we put more and more smarts into everyday objects – that’s smartphone smarts into things. The everyday objects and places that we care about are themselves becoming devices that our devices can talk to. How do we interact with the clothes that we wear, with our homes, with our televisions, using mobile devices? Or what happens when we have those things talk to each other?
It’s a whole kind of different interaction design that doesn’t necessarily have to do with the screen, although that’s often where we’ll get some of our output.
Are you able to do any of this sort of work with clients, or is it so far still in the realm of the possible for you?
I’ve done a little bit of client work in this area. One was a project called Asthmapolis [now Propeller Health], which is this really interesting project to help asthma patients understand how they’re controlling their asthma. The way it does that is to measure when you’re having an asthma attack: there’s this tiny little sensor that goes onto your inhaler, and when you take a puff, it uses Bluetooth to talk to your phone or tablet and records location and timestamp information. Now you’ve got this detail about when and where you’re having an asthma attack, and that alone can be helpful for the individual.
But then what’s really interesting is that they distribute these things through local clinics, so they have a couple thousand of them in an individual community. Then you start to get this really interesting aggregate information, where you can see that an hour after people go through this one area of town, they get these asthma attacks. It gives you epidemiological information about the causes of asthma in your community. That’s one side of this Internet of Things, the data-gathering path.
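The aggregation Josh describes - turning individual sensor events into community-level insight - can be sketched in a few lines. This is purely illustrative; the event schema and field names below are hypothetical, not Asthmapolis's actual data model:

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime


@dataclass
class PuffEvent:
    """One inhaler use, as reported by the Bluetooth sensor (hypothetical schema)."""
    patient_id: str
    timestamp: datetime
    neighborhood: str  # in practice, derived from GPS coordinates


def attack_hotspots(events):
    """Count anonymized events per area to surface where attacks cluster."""
    return Counter(e.neighborhood for e in events).most_common()


events = [
    PuffEvent("a", datetime(2014, 5, 1, 8, 0), "riverside"),
    PuffEvent("b", datetime(2014, 5, 1, 8, 5), "riverside"),
    PuffEvent("c", datetime(2014, 5, 1, 9, 0), "downtown"),
]
print(attack_hotspots(events))  # riverside shows up most often
```

With enough sensors in one community, a ranking like this starts to look like the epidemiological signal Josh mentions.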
On the other hand, something we haven’t explored as much is more intentional interfaces, like the grab magic hack where you grab an image out of thin air, and it’s an intentional wizard-like experience.
That’s something I’m playing with a bit in prototype, but not so much in client work. My studio mate Larry Legend and I put together a little app to experiment with this stuff where you’ve got something on your phone, and you want to move it over to your desktop mid-stream. For example, what I’ll show is that you’re listening to music on your phone, and you arrive at your desk, and want to just keep playing it on your desktop.
Yes, please develop that!
We’ve got it! You take your phone and just tap it twice on your computer, and it just starts playing on your computer at exactly the same place in the song. You just sort of drop it into your computer. We’ve got that with photos, and text, and maps, and URLs too, just double tap your phone on your computer and it just moves it right over.
It’s these kind of things that normally if I’ve got a URL or a map, I’ll email it to myself. You know, especially when you’re thinking about an iPhone and a MacBook Air, these things should be talking to each other. They’re from the same family. So it’s a physical interaction that works on my terms instead of the devices’ terms.
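The heart of that handoff is transferring not just content but state - what's playing and how far in. A minimal sketch of the idea, assuming a made-up JSON payload format (the function names and schema here are hypothetical, not the protocol from Josh's demo app):

```python
import json


def make_handoff_payload(media_url, position_seconds):
    """The sending device packages what's playing and where in the song it is."""
    return json.dumps(
        {"type": "audio", "url": media_url, "position": position_seconds}
    )


def resume_from_payload(payload):
    """The receiving device parses the payload and picks up mid-song."""
    state = json.loads(payload)
    return f"Resuming {state['url']} at {state['position']}s"


payload = make_handoff_payload("song.mp3", 83)
print(resume_from_payload(payload))
```

In the demo, the physical tap is what triggers the transfer; the payload itself could travel over Bluetooth, the local network, or a shared server.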
What do you hope that people will take away from your talk?
I think there are really two things. One is an awareness that people aren’t just using a single interface anymore. We tend to design around single interfaces, putting people into mobile, tablet, or desktop. But it turns out that people are constantly moving in between, so we have to design interfaces and services that help people leap that gap. They’re doing it anyway, but they’re hacking the system to do it.
The other thing is recognizing that a lot of the way we can do that is to design physical interfaces between those gadgets, and not just virtual or digital interfaces. How can we design for sensors, moving those interactions off the screen and treating those devices as physical objects in our lives, not just as abstract pixels?