COMPUTING HAS ALWAYS BEEN an arm's length activity. For decades, we've used software from a distance, mediated by screen and mouse and keyboard—work by remote control. The touchscreen collapses this awkward model, closing the distance to make computing at once more immediate and personal. Just touch the content you want to work with: it's a direct and decidedly more humane approach to working with personal gadgets. Because touch interfaces lean on interactions we know from nudging and poking real-world objects, they lend themselves to more natural, intuitive experiences.
The affection of adoring iPhone owners aside, however, it's not all cooing and caresses. Even as this new generation of oh-so-personal computers clears away some dusty conventions of computing, new opportunities create their own dilemmas. Not only can you drag, flick, and pinch the iPhone's virtual objects, but the touchscreen enables a broad range of gestures—the onscreen shorthand of taps and swipes that make your iPhone do your bidding. Some of those gestures are immediately evident (tap a button to "push" it), and others are quickly discovered (swipe a screen to move to the next, or pinch to zoom in and out). But other gestures, especially those that don't borrow from familiar physical interactions, aren't as easy to guess, and some multifinger gestures are just plain awkward. A tapworthy interface provides savvy, gesture-based shortcuts but also strives to make those gestures easy to discover and use. It's a balance that can be more complex than it sounds.
This chapter explores how tapworthy design provides cues and feedback to reinforce gestures. You'll learn which gestures you can expect your audience to figure out right away and what you can do to help them discover new gestures on their own. You'll even find out how you can make life easier by making certain gestures harder, protecting against mistaps in the bargain.
Some things are just hard to spot without help. How does Wonder Woman remember where she parked her invisible plane? How do you find the edge on a roll of Scotch tape? So it goes with touchscreen gestures. Little more than a flick of the wrist and a smudge on the screen, a gesture is unlabeled and invisible. People rely on visual cues or past experience to figure out when and how gestures might apply.
The most obvious gestures are those that directly manipulate onscreen objects: tapping buttons, dragging objects, or sliding a list up or down. As you learned in Keep It Real, tapworthy design invites touch with clear tap targets that encourage exploration. Obvious visual controls make for obvious gestures. That's the guiding principle for desktop software, too, where buttons, handles, scroll bars, and changing cursor shapes guide both eye and mouse to tell us what to do and what to expect. Unlike desktop interfaces, however, touchscreens create the expectation that you can work not only on individual buttons but also on the screen's canvas itself. Navigating iPhone screens and lists reinforces this expectation through similarity to real-world gestures: flipping through the screens of the Weather app feels like dealing cards one-handed, and flicking a picker wheel is just like spinning the numbers of a combination lock.
Gestures become less obvious when they're not specifically focused on selecting or moving a discrete interface element. The less a gesture resembles something you'd do to a physical object, the less likely people are to figure it out on their own. The pinch gesture for zooming in and out, for example, is not immediately obvious, but it instantly makes sense once you see it. It's still a gesture focused on direct manipulation of the onscreen image—it feels like stretching and squeezing—and it's unforgettable going forward. More abstract gestures, however, tend to go overlooked. Even experienced iPhone users often don't realize they can zoom out in the Maps app by tapping once with two fingers; it's not a gesture that would have any obvious effect on a physical object, so most people never think to try it and are unlikely to discover it unless they're specifically told. (In general, tapping with two fingers is something users rarely try, since it feels imprecise to paw the screen with two digits.)
By contrast, most newcomers do discover that tapping twice with one finger zooms in on the map. While that gesture doesn't have a real-world equivalent either, our double-click training from desktop computers kicks in. Double-clicking in Google Maps on the full desktop website zooms in, and it's common sense to think that it would work the same way on the iPhone with a double-tap. The most discoverable gestures work from experience: if it's something you'd do to a physical object or try with a mouse-driven cursor, it's something people will try on the touchscreen, too.
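For readers who build apps, the Maps zoom conventions described above boil down to a simple mapping from tap count and finger count to an action. Here's a minimal sketch of that mapping in Swift; the `ZoomAction` type and `zoomAction` function are illustrative names invented for this example, not part of Apple's frameworks.

```swift
// Hypothetical model of the zoom gestures described above:
// a one-finger double-tap zooms in, a two-finger single tap
// zooms out, and anything else is not a zoom gesture.
enum ZoomAction {
    case zoomIn
    case zoomOut
    case none
}

func zoomAction(tapCount: Int, fingerCount: Int) -> ZoomAction {
    switch (tapCount, fingerCount) {
    case (2, 1): return .zoomIn   // familiar double-click habit carries over
    case (1, 2): return .zoomOut  // abstract gesture; needs to be taught
    default:     return .none
    }
}
```

In a real app you'd attach such behavior with the platform's gesture recognizers rather than hand-rolling it, but the point stands: the discoverable gesture is the one that borrows from double-click habits, while the two-finger tap has to be explained.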