Chapter 2. Working with Camera Frames

In this chapter, we focus on building a basic photo capture app that uses OpenCV to capture frames of camera input. Our app will enable the user to preview, save, edit, and share photos. It will interface with other apps on the device via Android's MediaStore and Intent classes. Thus, we will learn how to build bridges between OpenCV and standard Android. Subsequent chapters will expand our app using more functionality from OpenCV.
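As a taste of that bridge, the following is a hedged sketch (not the chapter's final code; the class and method names are my own assumptions) of the standard Android side: registering a JPEG that OpenCV has already written to disk with MediaStore, so that gallery and sharing apps can discover it.

```java
import android.content.ContentResolver;
import android.content.ContentValues;
import android.net.Uri;
import android.provider.MediaStore;

// Hypothetical helper; the real app's saving code appears later in the book.
public class MediaStoreSketch {

    // Register a JPEG, already saved at photoPath, in the device's media
    // database. Other apps can then find the image via the returned URI.
    static Uri registerPhoto(ContentResolver resolver, String photoPath,
                             String title) {
        ContentValues values = new ContentValues();
        values.put(MediaStore.Images.Media.TITLE, title);
        values.put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg");
        values.put(MediaStore.Images.Media.DATA, photoPath); // file location
        return resolver.insert(
                MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
    }
}
```

The returned Uri is the handle we would later pass to other apps, for example when sharing or editing the photo.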

Let's make an app that enables people to see new visual patterns, to animate and interact with these patterns, and to share them as pictures. The idea is simple and versatile. Anyone, from a child to a computer vision expert, can appreciate the patterns. Through the magic of computer vision on a mobile device, any user can more readily see, change, and share hidden patterns in any scene.

For this app, I chose the name Second Sight, a phrase that is sometimes used in mythology to refer to supernatural and symbolic visions.

At its core, Second Sight is a camera app. It will enable the user to preview, save, and share photos. Like many other camera apps, it will also let the user apply filters to the preview and the saved photos. However, many of the filters will not be traditional photographic effects. For example, the more complex filters will enable the user to see stylized edges or even rendered objects that blend with the real scene (augmented reality).

For this chapter, we will just build the basic camera and sharing functions of Second Sight, without any filters. Our first version of the app will contain two activity classes named CameraActivity and LabActivity. The CameraActivity class will show the preview and provide menu actions so that the user may select a camera (if the device has multiple cameras) and take a photo. Then, the LabActivity class will open to show the saved photo and will provide menu actions so that the user may delete the photo, or send it to another app for editing or sharing.

To get a better sense of our goal, let's look at some screenshots. Our first version of CameraActivity will look as follows:

[Screenshot: the CameraActivity preview]

When the user presses the Take Photo menu item, the LabActivity class will open. It will look like the following screenshot:

[Screenshot: LabActivity displaying the saved photo]

When the user presses the Share menu item, an intent chooser (a dialog for choosing a destination app) will appear on top of the photo, as in the following screenshot:
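An intent chooser like this can be requested with Android's standard Intent.createChooser call. The following is a minimal sketch under that assumption; the class and method names are hypothetical, and the real sharing code appears later in the chapter.

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

// Hypothetical helper showing how a Share menu action could be implemented.
public class ShareSketch {

    // Build an ACTION_SEND intent carrying the photo's URI, then wrap it in
    // a chooser so that Android shows the destination-app dialog.
    static void showShareChooser(Activity activity, Uri photoUri) {
        Intent sendIntent = new Intent(Intent.ACTION_SEND);
        sendIntent.setType("image/jpeg");
        sendIntent.putExtra(Intent.EXTRA_STREAM, photoUri);
        activity.startActivity(
                Intent.createChooser(sendIntent, "Share photo"));
    }
}
```

Wrapping the intent in a chooser, rather than starting it directly, guarantees that the user is asked to pick an app even if one app has been set as the default handler for images.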

[Screenshot: the intent chooser over the photo]

For example, by pressing the Google+ tile, the user could open the photo in the Google+ app in order to share it on that social network. Thus, we have a complete usage example in which the user can snap a photo and share it using just a few touch interactions.