You’ve hopefully realized by now that your markup for the mobile browser is the same code as the desktop browser. The main differences are the size of the viewport and how the user interacts with their device. On the desktop we use a keyboard and mouse, with a large screen and resizable browser. On touch devices, we use our chubby little fingers, sometimes on tiny little screens, in viewports that are generally not resizable.
Those were generalizations! I have a desktop computer with a 23-inch touchscreen. I also have a tablet with an external Bluetooth keyboard and mouse. All our web content needs to be accessible via touch and mouse on large monitors and tiny screens. Whenever we develop, we need to remember that not everyone is accessing our content in the same way.
When it comes to smaller viewports, we want the width of our site to be the width of the device. The default page rendering size for most mobile browsers is 980 px wide. That is generally not the width of the device.
Until @viewport is supported everywhere, we can use the viewport <meta> tag. This tag is ignored by desktop browsers:
<meta name="viewport" content="width=device-width"/>
There are several possible values for the content attribute of the viewport <meta> tag. Unless you are developing an interactive, time-sensitive game, this is the viewport <meta> tag you should include. Your users should be allowed to scale the page up and down. The preceding code allows them to zoom in, which is important for accessibility.
In the case of CubeeDoo, we are creating a fullscreen, interactive, time-sensitive game. We don’t want to allow the user to accidentally zoom in or out. Unlike most other application types, when it comes to games, it can be a bad user experience when the board no longer fits neatly in the window. Only if you have a good reason to disallow zooming (which we do in the case of some games) should you consider preventing zooming with the following viewport <meta> tag:
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0, user-scalable=0"/>
This example is a little bit overkill. It reads, “Make the width of the viewport the same as the width of the device. Make that the initial size, the minimum allowable scaling size, and the maximum scalable size, and don’t allow scaling.” I’ve included more content properties than necessary just to show the values. More reasonably, you can write:
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=0"/>
Generally, you will want to use width=device-width. However, if your site is a specific width for different breakpoints (and I don’t recommend this), you can declare a specific width. For example, if your site’s medium breakpoint design is exactly 550 px, you can write:
<meta name="viewport" content="width=550">
I can’t think of any time where declaring a single width for content is a good idea. Don’t do it. This is just to show you what code you may come across, and so you know how to filter out bad developers from your applicant pool.
The viewport <meta> tag uses HTML features to control presentation, which should be the domain of CSS. Mixing presentation into your content layer isn’t the right solution. However, it’s the only solution we have at the time of this writing. The @viewport at-rule is getting some support (Opera, IE10, and WebKit nightly builds). Until @viewport is more widely supported, the viewport <meta> tag is the solution.
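The at-rule version, when supported, lives in your CSS rather than your markup. A minimal sketch (the prefixed forms are based on the Opera and IE10 implementations):

@-ms-viewport {
    width: device-width;
}
@-o-viewport {
    width: device-width;
}
@viewport {
    width: device-width;
    zoom: 1;
}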
We are focusing on mobile and are therefore only supporting modern browsers. All modern browsers support the DOM addEventListener method. Because we are on mobile (and making generalizations), we are capturing touches rather than mouse movement and clicks.
Two of the main differences between touches and clicks are the size of the area throwing the event and the number of events that can be thrown simultaneously. Touch areas are much larger than mouse click areas: a finger is fat, while a mouse pointer is just a pixel. Also, touch devices support multitouch events, as a device can be touched with multiple fingers.
Different devices support different gestures and capture different numbers of fingers. The iPad, for example, can capture up to 11 fingers or touches at once. Standard mouse events don’t handle multiple clicks: a single mouse click produces a single click event in a single spot.
Every touch, whether made with a finger or a stylus, also produces a click event, but some devices will wait 300 to 500 milliseconds before reacting to the touch to ensure the gesture is a single tap and not a double tap. We cover this in the next section.
Note that a finger is not as exact as a mouse pointer if you are using mouse/touch coordinates! A mouse can be very exact. A finger? Not so much.
Touch devices have unique features in terms of design and usability. With the same amount of effort, the user can access every pixel on the screen. The user uses their fingers for selection, which has a much bigger pointing radius than a mouse.
Your design needs to reflect these differences with larger hit zones and larger gutters between hit zones. The recommended height for buttons is 44 pixels, with a minimum height of 22 px; leave 20 px of space between clickable areas, with an absolute minimum of 10 px between them.
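As a rough sketch of these guidelines in CSS (the .btn class here is just an assumed example):

.btn {
    min-height: 44px;     /* recommended touch target height */
    margin-bottom: 20px;  /* recommended gutter between tap targets */
}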
When the user touches the screen, the area under the finger, and potentially under the whole hand, is obscured. The user may be using her right hand or her left hand. Consider what might be hidden depending on which hand she’s using, and how important the content hidden under her palm is.
The finger touching the screen, and even parts of the hand, may hide areas of the screen. Ensure that your labels are above their associated form fields, and avoid displaying temporary pop-ups in response to touch events at all. If you must include a temporary pop-up, place it above the touch area, not to the side of or below it.
There are a few finger gestures that are used by the operating system of the device, and not every operating system or device uses the same gestures. You should know what these are, especially when you are developing the user experience of your site. Some iOS devices use four-finger detection to switch between applications. You may also want to avoid gestures close to the edges of the viewport as several mobile devices move between windows, tabs, or applications when the user flicks or swipes from or to the edge of the screen. Keep all these native mobile OS features in mind when designing and developing your application and user interactions.
Mouse events make the Web work. It wouldn’t be a web if every document ever (well, almost every document ever) didn’t have clickable links leading to other documents. Games wouldn’t be games if you couldn’t interact with them. These interactions have generally been mouse clicks.
For the past 20 years or so, developers have been adding click events to their web pages. While we tap, touch, and tilt mobile devices, we don’t actually click our smartphones. With many mobile devices and some laptops, we can also tilt to interact. But basically, click events make the Web the Web. If touch devices didn’t support those ubiquitous mouse events, the mobile web would really be broken.
Because the Web is built on mouse events, mouse events work on touch devices—devices with no pointing device. Mouse events are simulated after touch events are fired. They are thrown in an emulated environment, but the order of the mouse events is not guaranteed. Every touch throws a click, mousedown, mouseover, mouseout, and other mouse events, but we can never be sure of the order in which they occur.
Every device emulates mouse events when using touch and also provides specific touch events we can capture. With touch events, there are two implementations we need to understand: (1) Apple’s touch and gesture events, an unfinished specification whose standardization was cancelled because of Apple’s patents; and (2) Microsoft’s pointer and gesture events, a newer, patent-free specification that is on track to become the standard and is expected to be implemented in Chrome and Firefox. Standards and implementations are still evolving.
Corporations, most notably Apple, patent everything, including normal, everyday things like rounded corners and human interactions. Attempts have been made to make some gestures proprietary: Apple actually patented touch events. Specifications are open standards. So there was an issue. Pointer events to the rescue! Microsoft created their own version of events—pointer events—and offered those to the W3C to be used as the basis for the standard.[88]
Not to be confused with the CSS pointer-events property, pointer events is an event model for mouse cursor, pen, touch (including multitouch), and all other pointing input devices. Similar to the JavaScript events we’re so used to, like mouseover and mouseout, when supported we will have the pointerdown, pointerup, pointercancel, pointermove, pointerover, pointerout, pointerenter, and pointerleave events. In addition to events we can listen for, the device will capture details about touch or pointing events such as touch size, type, pressure, and angle.
Currently, the only implementation of pointer events is in IE10 with the Microsoft MS prefix, so pointermove in IE10 is coded as MSPointerMove; IE11 supports the events sans prefix.
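Here is a minimal sketch of wiring up a pointer move handler under those two implementations; the navigator.pointerEnabled and navigator.msPointerEnabled checks are assumptions based on how IE11 and IE10 currently expose the feature:

// prefer the unprefixed event, fall back to the MS-prefixed IE10 event
var pointerMove = navigator.pointerEnabled   ? 'pointermove' :
                  navigator.msPointerEnabled ? 'MSPointerMove' : null;

if (pointerMove) {
    document.addEventListener(pointerMove, function(e) {
        // pointerType distinguishes mouse, pen, and touch input
        console.log(e.pointerType, e.clientX, e.clientY);
    });
}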
As we all know, mice and fingers are different. When using a mouse, you have a single pointer hovering, entering, exiting, and clicking on a single pixel. Fingers not only tap larger areas, but people have five of them. On each hand! The device and your event handlers need to keep track of the number of fingers interacting with the screen. You can create and handle sophisticated gestures by using the native touch and mouse events in conjunction with preventDefault().
Until browser vendors agree upon and support the open standard of pointer events, we have touch events.
Touch devices and their browsers, including Android Browser, Chrome, BlackBerry Browser, Opera, and Firefox, support the iOS touchstart, touchend, touchmove, and the sometimes-buggy touchcancel events. Each of the four events provides a TouchEvent object, which exposes touch collections such as changedTouches, each made up of Touch objects.
The Touch object is a read-only object containing the coordinate properties of the touch point, including the touch coordinates pageX, pageY, screenX, screenY, clientX, and clientY, as well as the target and the identifier. The TouchList is the list of individual points of contact for the touch event. The TouchEvent object contains the touches, targetTouches, and changedTouches collections, as well as the Booleans altKey, metaKey, ctrlKey, and shiftKey.
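For example, a minimal sketch of reading those properties from the first touch point:

document.addEventListener('touchstart', function(e) {
    var touch = e.touches[0];  // the first Touch object in the TouchList
    console.log(touch.pageX, touch.pageY, touch.identifier, touch.target);
});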
Touch the device with one finger or a stylus, and a single event is thrown. Touch with several fingers, and several events will be thrown. When the screen is pressed, the touchstart is thrown. When the finger moves across the screen, the touchmove event is repeatedly thrown. When the pressure on the screen ceases, the touchend is fired. The touchcancel occurs when another application, like an actual phone call, cancels the touch.
If your user is playing a game, listening to a podcast, or watching a video clip, and the phone rings, does it make sense to pause the game, stop the sound, or pause the video during the call? We don’t want to upset our users by having the time run out and losing the game every time they answer a call. In CubeeDoo, we pause the game when the touchcancel event is fired:
document.addEventListener('touchcancel', function() {
    if (!qbdoo.game.classList.contains('paused')) {
        qbdoo.pauseGame();
    }
});
Touch devices support many gestures you may want to capture. Luke Wroblewski compiled the Touch Gesture Reference Guide, defining the various touch gestures by operating system. I recommend printing it and hanging it over your desk (next to the specificity chart from Appendix A).
If you are using the same code for both touch devices and desktop browsers, you will likely need to increase the touch area for links, and decrease the delay between a single touch and its event.
You might think that using media queries would be the way to go: smaller screens are likely mobile screens, and mobile screens are more likely to be touch screens. But then you have tablets, which can have higher resolutions than many laptops and small monitors.
Feature detection seems like a solution, but it’s not perfect. Touch feature detection detects whether the browser supports touch events, not whether the device does. You have to test with JavaScript to check for touch event properties:
var isTouchEnabled = 'ontouchstart' in window ||
                     'createTouch' in document ||
                     (window.DocumentTouch && document instanceof DocumentTouch);
You can then use the isTouchEnabled Boolean to handle touch-capable and touch-incapable devices, remembering that some devices and some users—feature phones, or visually or motor-impaired users, for example—may not have any pointing device at all.
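A quick sketch of putting the Boolean to work (handleTap here is a hypothetical handler):

// attach the fast touch event where available, falling back to click
var tapEvent = isTouchEnabled ? 'touchend' : 'click';
document.querySelector('#board').addEventListener(tapEvent, handleTap);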
To simulate single-touch events in your desktop development environment, try the Phantom Limb utility.
When you tap with your finger, there is no right-click event. Instead, mobile devices react when you touch and hold. Because there is no keyboard, mouse, or right-click, mobile browsers have some built-in behaviors of their own.
There is no such thing as hover on a touch device. Because of this, we have a link tap highlight color that we can control with -webkit-tap-highlight-color. You can style the highlight color to match your design. While the value transparent will get rid of the oftentimes unsightly effect, remember that removing the appearance of a tap effect negatively affects accessibility:
#content a {
    -webkit-tap-highlight-color: #bada55;
}
#board a {
    -webkit-tap-highlight-color: transparent;
}
We don’t actually have any links in our board, but if we did, this code would make the tap highlight of any link in the content area #bada55, except for the links in the game board, which would show no effect on tap other than the card flip effect, which is controlled separately.
When you touch and hold on text copy, or touch and drag, you may have noticed the appearance of a selection dialog allowing you to copy or define the selected text. You can control this in WebKit browsers with -webkit-user-select: none;. When user-select is set to none on a DOM node, like a paragraph or even the <body>, no copy/define selection dialog will appear.
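For CubeeDoo, a minimal sketch might suppress the selection dialog on the game board only; the prefixed forms are included as a precaution for current engines:

#board {
    -webkit-user-select: none;
    -moz-user-select: none;
    -ms-user-select: none;
    user-select: none;
}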
The pointer-events: none; property/value pair is inappropriate in this setting. While it would prevent the user from getting the copy/define dialog, it would also prevent any other touch events from occurring on the targeted DOM node.
Similar to the selection dialog, when a user touches and holds an image, an image save/copy panel appears. Adding -webkit-touch-callout: none; to all images ensures that no image dialog appears when images are touched:
img { -webkit-touch-callout: none; }
For best user experience and accessibility, do not use the preceding CSS properties in content sites. These properties should be reserved for games and other entertainment, productivity, and tool applications.
You don’t want your users to accidentally pop-up an operating system menu.
With CSS, you are able to disable panning. You don’t want to completely disable panning all the time, but you can use touch-action: none; to prevent accidental panning where it is likely to occur:
.active #board {
    -ms-touch-action: none; /* disable panning */
}
You might be thinking, “Why not just use JavaScript’s preventDefault()?” You could likely get that to work. However, using the four CSS properties just covered performs better than preventDefault(). CSS is almost always more performant than JavaScript. And, in this case, there is up to a 400 ms lag in firing touch events, so it’s best to prevent the panning, dialogs, and so on before they ever happen.
Because the device doesn’t know whether you are going to do a single tap or a double tap, there is a 300–500 ms delay after the first tap before the click event is triggered. Touch-enabled browsers on touch devices will wait from 300 ms to 500 ms, depending on the device, from the time you tap the screen to firing the click event. The reason for this is that the browser is waiting to see if you are actually performing a double tap. Because of this, you may want to usurp the first tap with an event handler rather than wait for the delay between taps to expire.
If you are making a call to the server or other slow process, provide feedback to the user that the touch has been accepted. Depending on the connection speed, a server response can take a long, long time. You want the user to know that something is indeed happening—that their action is being acted upon—if the server response to the action takes more than 100 to 200 ms.
In CubeeDoo, we aren’t making server calls, so we don’t need to add a “waiting” feature. However, we certainly don’t want to wait 300 ms before flipping the card when the user touches the screen. In the application we are developing we know that there is no double-click behavior that we want to handle, so waiting this long to start acting on the click is time wasted for users.
While usurping user interaction is something you want to carefully consider before doing, in our example there is no reason that a user would double-click: we don’t allow for zooming or have any other double-click features. We capture the touches to the cards with the touchend event.
Making the browser react faster to touch events involves a bit of JavaScript that allows the application to respond to touchend events rather than click events. The touchend event fires immediately when the touch ends, so it is significantly faster than the click event, which waits the 300 to 500 milliseconds.[89]
We need to keep the onclick handler on the cards for browsers that don’t support touch events, but we don’t want to handle a touchend and then fire off a click 300 ms later. If this were a button or link, we would need to ensure we don’t accidentally run two events on the same node by calling preventDefault on the touchstart event. Calling preventDefault on touchstart events will stop clicks and scrolling from occurring as a result of the current tap:
eventHandlers: function() {
    if ('ontouchstart' in window || 'createTouch' in document ||
        (window.DocumentTouch && document instanceof DocumentTouch)) {
        qbdoo.btn_pause.addEventListener('touchend', qbdoo.pauseGameOrNewGame);
        qbdoo.btn_mute.addEventListener('touchend', qbdoo.toggleMute);
        qbdoo.clearScores.addEventListener('touchend', qbdoo.eraseScores);
        document.addEventListener('touchcancel', qbdoo.pauseGameOrNewGame);
    }
    qbdoo.btn_pause.addEventListener('click', qbdoo.pauseGameOrNewGame);
    qbdoo.btn_mute.addEventListener('click', qbdoo.toggleMute);
    qbdoo.clearScores.addEventListener('click', qbdoo.eraseScores);
    qbdoo.themeChanger.addEventListener('change', qbdoo.changeTheme);
},
Another solution is to add click and touchend event listeners to the <body>, listening on the capture phase. When the event listener is invoked, you determine whether the click or tap was the result of a user interaction that was already handled. If so, call preventDefault and stopPropagation on it. Remember that some desktops come with touchscreens, so always include both click and touch events, preventing the default click in the case of touch.
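A rough sketch of that approach (the 500 ms window is an assumption matching the longest click delay mentioned earlier):

var lastTouchTime = 0;

document.body.addEventListener('touchend', function() {
    lastTouchTime = Date.now();
}, true); // capture phase

document.body.addEventListener('click', function(e) {
    // swallow the emulated click that trails a touch we already handled
    if (Date.now() - lastTouchTime < 500) {
        e.preventDefault();
        e.stopPropagation();
    }
}, true); // capture phase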
Our game doesn’t scroll. Generally, we touch the screen to scroll, and logic tells us that when we let go, a touchend event should fire. Currently, when scrolling, the touchend event is thrown in most mobile browsers, with the exception of Chrome for Android. The specifications don’t state that touch events should be canceled when scrolling, but doing so does make sense.
Chrome for Android behaves a little differently, and this behavior is being added to the pointer events specification. The specifications for touch events don’t deal with this issue, but pointer events will: when pointer events are supported, scrolling, pinching, zooming, and other device (versus page) interactions will throw a cancel event.
Different platforms also handle different gestures. Apple (iOS), Google (Android), and Microsoft (Windows) all support different gestures that provide more refined interactions.
Touch is one difference you’ll note in the mobile space, but touch isn’t reserved for mobile: more and more laptop, desktop, and other device monitors accept touch. The touchscreen also isn’t the only new hardware feature we can interact with. Depending on the operating system, device, and browser, using CSS, JavaScript, and HTML5, we can create browser applications that interact with system hardware in a way that used to be reserved for natively installed applications.
Most mobile devices include sensors from which we can access data using JavaScript, including the accelerometer, magnetometer, and gyroscope. To handle the orientation of the device, we have the DeviceOrientation event specification, which provides us with three window events, detailed in the following paragraphs.
The accelerometer measures acceleration, or linear motion, on three axes. Used to detect motion, tilting, and shaking, it measures the acceleration force in m/s² that is applied to the device on all three physical axes (x, y, and z), including the force of gravity. We can handle devicemotion for accelerometer data detection:
window.addEventListener('devicemotion', function() {
    // add response to event here
});
The magnetometer measures where the device is heading, like a compass, but doesn’t necessarily point north. The magnetometer measures the strength of the magnetic field in three dimensions, measuring the ambient geomagnetic field for all three physical axes (x, y, z) in μT. The compassneedscalibration event is thrown when the device detects that the compass needs a calibration to improve data accuracy. To calibrate, the user does a figure eight with the device:
window.addEventListener('compassneedscalibration', function() {
    // add response to event here
    // generally, tell the user to make a figure 8 with the device
});
The gyroscope measures the device’s rate of rotation in radians per second around each of the three physical axes (x, y, and z). Because it measures the rate of rotation around a single axis based on angular momentum, excluding the force of gravity, the gyroscope can provide information on the device’s rotation and orientation if you need to measure whether the user is spinning or turning the device. We can capture deviceorientation when supported:
window.addEventListener('deviceorientation', function() {
    // add response to event here
});
Every time the user moves the device, the deviceorientation event occurs, including the properties alpha (0 to 360), beta (–180 to 180), and gamma (–90 to 90) for the rotation of the device frame around its z-, x-, and y-axis, respectively. The property measurements are generally relative to the direction the device was held when the orientation was first obtained, making deviceorientation useful for relative movements from the original position.
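A minimal sketch of reading those angles:

window.addEventListener('deviceorientation', function(event) {
    var alpha = event.alpha; // rotation around the z-axis, 0 to 360
    var beta  = event.beta;  // rotation around the x-axis, -180 to 180
    var gamma = event.gamma; // rotation around the y-axis, -90 to 90
    // respond to the rotation here
});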
We are not only able to figure out how the user is holding the device, but we can also determine what state the device is in. Is the device online? If so, what type of network connection does it have? Does the device have battery power left?
The Network API exposes the navigator.connection.type attribute with a string value of unknown, ethernet, wifi, 2g, 3g, 4g, or none. Some browsers return integers or constants for those values: WIFI, CELL, CELL_2G, CELL_3G, CELL_4G, and UNKNOWN. The API returns the connection type at the first connection. Devices, though, aren’t always connected to the Internet, and connection types can change.
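A sketch of checking the type-based API where it is available (remember that some browsers return constants rather than strings):

var connection = navigator.connection ||
                 navigator.webkitConnection ||
                 navigator.mozConnection;

if (connection && connection.type === 'wifi') {
    // on WiFi: reasonable to prefetch heavier assets
}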
The newer API is based on the quality of the connection rather than the connection type. Considering that phone companies lie about the connections they market, this makes even more sense than just the logic of it all. This version isn’t as well supported, but support is starting. In the newer spec, instead of type, the navigator.connection object exposes the bandwidth and metered attributes and a change event.
The navigator.connection.bandwidth attribute returns 0 if offline, Infinity if the bandwidth is unknown, or the number of megabytes per second as a double. The navigator.connection.metered property is either true or false. If true, the user’s ISP is limiting your user, and you should be careful with bandwidth usage. For example, when supported, if the connection is metered, you could ask the user if they want to disable images and set a cookie recording that choice.
The change event can be written as:
navigator.connection.addEventListener('change', function() {
    // handle event; generally, check the bandwidth property
});
Note that the connection object is still prefixed, and can be captured as:
var connection = navigator.connection || navigator.webkitConnection || navigator.mozConnection;

if (connection && connection.bandwidth !== undefined) {
    if (connection.bandwidth === Infinity) {
        // unknown
    } else if (connection.bandwidth <= 0) {
        // offline
    } else if (connection.bandwidth <= 1) {
        // less than 1 MB/s: low-quality connection
    } else {
        // more than 1 MB/s: high-quality connection
    }
} else {
    // API is not available
}
The Battery Status API allows you to determine the current battery status through the navigator.battery object. When supported, you can determine whether the battery is currently being charged with the Boolean navigator.battery.charging property. The navigator.battery.chargingTime will return, in seconds, the estimated time until the battery is fully charged. The navigator.battery.dischargingTime provides the time, in seconds, until a system suspension. The float navigator.battery.level, between 0 and 1, is the battery level.
var percentBatteryLeft = navigator.battery.level * 100;
We will also have the chargingchange, chargingtimechange, dischargingtimechange, and levelchange events to capture.
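A minimal sketch of listening for one of these, assuming navigator.battery is available unprefixed:

navigator.battery.addEventListener('levelchange', function() {
    if (navigator.battery.level < 0.15 && !navigator.battery.charging) {
        // battery is low and not charging: consider saving game state
    }
});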
Other APIs for mobile web applications exist or are in progress, while the Calendar, Messaging, Sensor, and System Information APIs have been shelved. The Device APIs Working Group maintains a list of the various API statuses.
On iOS devices, we can add <meta> tags that enable us to create HTML web apps that appear fullscreen, as a native application would. Apple calls this type of application a Web.app. When we create a Web.app, if the user “installs” the web application by adding the site icon to the home screen, then accesses the site via that home screen icon, she will have a fullscreen experience. When a web application with the correct <meta> tags is accessed via a home screen icon, the browser UI is hidden. In this way, and with the HTML5 APIs we covered in this book, at least on some devices we can create native-looking applications with an offline experience that competes with any native application.
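A sketch of the typical Web.app tags (the icon filename is a placeholder):

<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black">
<link rel="apple-touch-icon" href="icon.png">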
Sometimes faux native, as described earlier, is not enough. While each operating system requires native applications to be programmed in different programming languages, you can code in HTML5, CSS, and JavaScript, and convert your web application to a native application.
Hybrid applications are HTML5, CSS, and JavaScript-based applications that are converted, or compiled, into a native application, often simply using a fullscreen web view as the application container. Using the web technologies you already know, and the ones you learned in this book (HTML, CSS, JavaScript), we can package and compile our source code into a native application for the various device operating systems. Once it is a native application, we can distribute it in the various application stores.
Apache Cordova, formerly PhoneGap, is an open source project and native web application framework for multiple platforms. PhoneGap enables us to export our web applications into native applications for most or all mobile platforms.
PhoneGap enables us not just to package our web application as a native application for the various mobile operating systems, but also provides us with access to components of the device that may not yet be accessible via the browser.
For example, while getUserMedia() is well supported in Google Chrome on the desktop, recording video from a browser is not fully supported yet in the mobile space. PhoneGap allows us to program in JavaScript what is not fully supported by the mobile browser. When we export that web application into a hybrid application with PhoneGap, the wrapper bridges our JavaScript to native code understandable by the operating system, providing our hybrid application with access to device features currently supported only in the native space.
Adobe PhoneGap Build is a cloud-based Cordova compiler, so we don’t need to deal with the native SDK on our computers.
While Sencha Touch is a UI framework, since version 2.0 it has included a native packager for iOS and Android. It is available for both Windows and Mac development environments. The packager and other developer tools can be downloaded from the Sencha website.
The Appcelerator Titanium framework allows creating iOS, BlackBerry, and Android native web applications. Titanium provides a bridge, enabling you to use native UI components from JavaScript. Titanium converts JavaScript to native code during compiling. The free Appcelerator Titanium Studio IDE can be downloaded online.
Your primary development tools should include a desktop IDE and a desktop browser. The Chrome browser is a good first step in development: if it doesn’t work (aside from device-specific things like touch and calling) in Chrome, it won’t work on your phone either. While the desktop browser is your main tool when you are marking up and coding your application, the desktop browser is only to be used in development and primary testing: you must test your sites in multiple browsers on multiple devices.
Getting multiple devices can be an expensive endeavor. Realizing we haven’t all won the lottery, we have to be shameless about borrowing, because we can’t afford every device.
While you should still test in as many devices as possible of different sizes, browser versions, and operating systems under different network conditions, testing on every possible combination is not feasible. Mobile emulators provide for an easy, inexpensive testing solution. Testing on live mobile devices can be a slow and tedious process—a necessary process—but emulators can make debugging more bearable. When your code works on the desktop but doesn’t work in the emulator, it likely won’t work on the device either.
It’s much faster to test in the emulator than on a phone. Just remember, emulators are not mobile devices. They are similar to the device they are emulating, but they have different limitations: your mobile device likely has very limited memory. Your desktop, and therefore your emulator, has an abundance of RAM. There are many differences between an emulator and a real device, but the emulator does give a good starting point.
You still need to test on devices, and we haven’t resolved the lottery issue yet. In the meantime, you need access to real devices: if you can’t buy them, borrow or steal the plethora of devices you need to adequately test a sampling of your likely user base.
Remember, your current user base and possible user base are not necessarily the same thing. If you see you have only 1% mobile usage, it may be because the mobile experience of your site sucks. It may have nothing to do with who would use your site on their mobile devices if you provided a good mobile user experience. Your current mobile usage statistics only reflect the current experience of your site in the mobile space.
There are some problems with this testing approach, though. For one thing, there are hundreds of differences between real devices, and several platforms without emulation. Real device testing is mandatory.
It is impossible to test all browsers on all devices, or even to test a single browser on every device. There are too many devices, with new ones being released all the time.
I recommend getting a few different devices with different sizes, operating systems, memory constraints, and browsers. Obviously, you can’t get all of them. If possible, get at least one device in each operating system, including the most recent iOS in tablet, phone, or iPod touch versions, BlackBerry (preferably 10), Windows 8 phone or tablet, and at least two Android devices running 2.3 and 4+ or latest version.
You can’t have all of the devices, nor should you. But this sampling can give you a good range. Android 2.3 devices are still being sold, which is why I recommend owning one. You can purchase used devices on eBay. They’re really, really cheap if they have cracked screens or the phone part is broken. All you need is for the browser to work and be visible. You don’t need high-quality devices.
These devices cover your basic testing. You still want to test on more devices. If you can’t get Samsung, BlackBerry, Nokia, or Motorola to send you a free testing device, there is likely a device lab in your city or a remote device lab accessible online. Apple will likely never give away free devices, but if you don’t have any Apple devices, you likely have a friend who does.
Testing on a single device takes time. Testing on all devices is impossible. But you do need to test. You need to QA your code on a multitude of actual devices.
During development you can use tools to test mobile web applications in a manner that more accurately reflects the mobile environment, without having to laboriously check on actual devices. Simulators and emulators can be used as a first line of testing. They are discussed in Chapter 1.
You definitely want to make sure your site looks good and doesn’t fail to complete the tasks the user expects. We’ve already covered that. It’s not enough to make sure that your site looks good and functions. You need to make sure your site functions, or performs, well. Up next we look at performance.
[88] For more information, see http://blog.jquery.com/2012/04/10/getting-touchy-about-patents/.
[89] In Firefox and Chrome, if zooming is disabled, the click event fires immediately, and doesn’t wait the 500 ms.