Prefetch data

A classic technique to work around slow access times is to speculatively fetch more data than is immediately needed, hoping for a linear access pattern. As we have already seen, the CPU does this when accessing memory, and the OS kernel does it when reading files from disk.

On mobile, not only are network latencies higher, but we must also take the radio's energy tail into account, so fetching a big file in many small chunks is an anti-pattern. When we have to access a big file on the server, such as a music or video file, the right approach is to download it as a whole and let the radio return to idle.
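To see why chunking hurts, consider that each separate transfer burst keeps the radio in its high-power state for an extra "tail" period before it can drop back to idle. The sketch below models this; the tail duration and transfer times are illustrative assumptions, not measurements from any particular radio.

```python
RADIO_TAIL_S = 10.0  # assumed tail: seconds the radio stays in high power after a transfer


def radio_active_time(burst_durations_s):
    """Total seconds the radio spends in the high-power state.

    Each transfer burst costs its own duration plus one full energy
    tail, assuming the bursts are spaced far enough apart that their
    tails never overlap.
    """
    return sum(t + RADIO_TAIL_S for t in burst_durations_s)


# Downloading a 4 MB file at 1 MB/s:
whole = radio_active_time([4.0])        # one 4 s download: pays one tail
chunked = radio_active_time([1.0] * 4)  # four spread-out 1 s downloads: pays four tails

print(whole)    # 14.0
print(chunked)  # 44.0
```

Under these assumptions the chunked download keeps the radio in high power more than three times as long for the same amount of data, which is exactly the cost the single whole-file download avoids.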

But we don't have to stop there. Continuing the music example: if the user is listening to a song from an album, we could prefetch the next track, though downloading the entire album would be overkill. For a large video file, on the other hand, we should probably prefetch only the amount of data likely to be viewed in the next couple of minutes.
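For the video case, the prefetch window translates directly into a byte count once we know the stream's bitrate. The helper below is a minimal sketch of that calculation; the function name, the two-minute window, and the safety factor are assumptions for illustration.

```python
def prefetch_bytes(bitrate_kbps, window_s, safety_factor=1.2):
    """Bytes of video data to request so playback is covered for the
    next window_s seconds, padded by a small safety factor to absorb
    bitrate variation in variable-bitrate streams."""
    return int(bitrate_kbps * 1000 / 8 * window_s * safety_factor)


# A 2 Mbps stream, prefetching two minutes ahead:
print(prefetch_bytes(2000, 120))  # 36000000 bytes (~36 MB)
```

If the user keeps watching, the application can issue the next ranged request before the buffer runs dry, coalescing transfers into large bursts rather than trickling data continuously.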

So, we can see that prefetching at the application level isn't as straightforward as in the case of file I/O, because a user's behavior cannot be predicted as easily. The application can use metrics, statistical models, or heuristics to anticipate what the user might need. In any case, designing a prefetching strategy leaves ample room for creativity.
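As one possible heuristic, an application could track how often the user finishes the tracks they start, and only prefetch the next track for listeners who rarely skip. This is a hypothetical sketch, not a prescribed algorithm; the threshold and minimum-history values are arbitrary assumptions.

```python
def should_prefetch_next_track(completed, skipped, threshold=0.8, min_history=5):
    """Heuristic: prefetch the next track only if the user historically
    finishes at least `threshold` of the tracks they start.

    Stays conservative (no prefetch) until enough listening history has
    accumulated to make the completion rate meaningful.
    """
    started = completed + skipped
    if started < min_history:
        return False  # not enough data: avoid wasting bandwidth and energy
    return completed / started >= threshold


print(should_prefetch_next_track(9, 1))  # True  (90% completion rate)
print(should_prefetch_next_track(3, 7))  # False (user skips most tracks)
```

A real implementation might weight recent behavior more heavily, or distinguish album playback from shuffle mode, but even a simple rule like this avoids prefetching for users who would never play the data.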