Our main activity, CameraActivity, needs to do the following:

- Display a live camera feed
- Provide menu actions that let the user switch the active camera and request that a photo be taken
- Save the photo and insert it into MediaStore so that it is accessible to apps such as Gallery
- Immediately open the photo in LabActivity

We will use OpenCV functionality wherever feasible, even though we could just use the standard Android libraries to display a live camera feed, save a photo, and so on.
OpenCV provides an abstract class called CameraBridgeViewBase, which represents a live camera feed. This class extends Android's SurfaceView class, so its instances can be part of the view hierarchy. Moreover, a CameraBridgeViewBase instance can dispatch events to any listener that implements one of two interfaces, either CvCameraViewListener or CvCameraViewListener2. Often, the listener is an activity, as is the case with CameraActivity.
The CvCameraViewListener and CvCameraViewListener2 interfaces provide callbacks for handling the start and stop of a stream of camera input, and for handling the capture of each frame. The two interfaces differ in terms of the image format. CvCameraViewListener always receives an RGBA color frame, which is passed as an instance of OpenCV's Mat class, a multidimensional array that may store pixel data. CvCameraViewListener2 receives each frame as an instance of OpenCV's CvCameraViewFrame class. From the passed CvCameraViewFrame, we may get a Mat image in either RGBA color or grayscale format. Thus, CvCameraViewListener2 is the more flexible interface, and it is the one we implement in CameraActivity.
Since CameraBridgeViewBase is an abstract class, we need an implementation. OpenCV provides two implementations, JavaCameraView and NativeCameraView. Both are Java classes, but NativeCameraView is a Java wrapper around a native C++ class. NativeCameraView tends to yield a higher frame rate, so it is the implementation that we use in CameraActivity.
To support interaction between OpenCV Manager and client apps, OpenCV provides an abstract class called BaseLoaderCallback. This class declares a callback method that is executed after OpenCV Manager ensures that the library is available. This callback is typically the appropriate place to enable any other OpenCV-dependent objects, such as the camera view.
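The deferred-initialization pattern that BaseLoaderCallback embodies can be sketched in plain Java. Note that the Manager, View, and LoaderCallback types below are hypothetical stand-ins for illustration, not part of the OpenCV API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the loader-callback pattern: work that depends
// on a library is deferred until a callback confirms the library is ready.
public class LoaderSketch {
    static final int SUCCESS = 0;

    interface LoaderCallback {
        void onManagerConnected(int status);
    }

    // Stand-in for OpenCV Manager: records callbacks, then notifies them
    // once loading finishes.
    static class Manager {
        private final List<LoaderCallback> callbacks = new ArrayList<>();
        void initAsync(LoaderCallback cb) { callbacks.add(cb); }
        void finishLoading() {
            for (LoaderCallback cb : callbacks) cb.onManagerConnected(SUCCESS);
        }
    }

    // Stand-in for the camera view, which must not be enabled too early.
    static class View {
        boolean enabled = false;
        void enableView() { enabled = true; }
    }

    public static boolean demo() {
        Manager manager = new Manager();
        final View view = new View();
        manager.initAsync(new LoaderCallback() {
            @Override
            public void onManagerConnected(int status) {
                if (status == SUCCESS) {
                    view.enableView(); // safe: the library is now available
                }
            }
        });
        boolean enabledBeforeLoading = view.enabled; // still false here
        manager.finishLoading();                     // callback is delivered
        return !enabledBeforeLoading && view.enabled;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The point of the sketch is the ordering: the view is enabled only inside the callback, never before the (possibly slow, asynchronous) loading step completes.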
Now that we know something about the relevant OpenCV types, let's open CameraActivity.java and add the following declarations for our activity class and its member variables:
public class CameraActivity extends FragmentActivity
        implements CvCameraViewListener2 {

    // A tag for log output.
    private static final String TAG = "CameraActivity";

    // A key for storing the index of the active camera.
    private static final String STATE_CAMERA_INDEX = "cameraIndex";

    // The index of the active camera.
    private int mCameraIndex;

    // Whether the active camera is front-facing.
    // If so, the camera view should be mirrored.
    private boolean mIsCameraFrontFacing;

    // The number of cameras on the device.
    private int mNumCameras;

    // The camera view.
    private CameraBridgeViewBase mCameraView;

    // Whether the next camera frame should be saved as a photo.
    private boolean mIsPhotoPending;

    // A matrix that is used when saving photos.
    private Mat mBgr;

    // Whether an asynchronous menu action is in progress.
    // If so, menu interaction should be disabled.
    private boolean mIsMenuLocked;

    // The OpenCV loader callback.
    private BaseLoaderCallback mLoaderCallback =
            new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(final int status) {
            switch (status) {
            case LoaderCallbackInterface.SUCCESS:
                Log.d(TAG, "OpenCV loaded successfully");
                mCameraView.enableView();
                mBgr = new Mat();
                break;
            default:
                super.onManagerConnected(status);
                break;
            }
        }
    };
The concept of states (varying modes of operation) is central to Android activities, and CameraActivity is no exception. When the user selects a menu action to switch the camera or take a photo, the effects are not just instantaneous. Actions affect the work that must be done in subsequent frames, and some of this work is even done asynchronously. Thus, many member variables of CameraActivity are dedicated to tracking the logical state of the activity.
Understanding asynchronous event collisions in Android
Many Android library methods, such as startActivity(), do their work asynchronously. While the work is being carried out, the user may continue to use the interface, potentially initiating other work that is logically inconsistent with the first.

For example, suppose that startActivity() is called when a certain button is clicked. If the user quickly presses the button multiple times, then more than one new activity may be pushed onto the activity stack. This behavior is probably not what the developer or user intended. A solution would be to disable the clicked button until its activity resumes. Similar considerations affect our menu system in CameraActivity.
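The guard-flag solution can be sketched in plain Java, independent of Android. The class and method names here are illustrative, not from the book's code; the flag plays the same role as mIsMenuLocked:

```java
// Illustrative guard flag: the first activation succeeds and locks,
// and repeated activations are suppressed until unlock() is called
// (analogous to re-enabling the menu in onResume()).
public class ClickGuard {
    private boolean locked = false;
    private int launches = 0;

    // Returns true if the action was performed, false if it was suppressed.
    public boolean onClick() {
        if (locked) {
            return false;  // a launch is already in progress; ignore the click
        }
        locked = true;     // block further clicks
        launches++;        // stand-in for startActivity()
        return true;
    }

    // Called when it is safe to accept input again, e.g. on resume.
    public void unlock() {
        locked = false;
    }

    public int getLaunches() {
        return launches;
    }
}
```

With this guard, rapid repeated clicks launch the action once instead of stacking up duplicate activities.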
Like any Android activity, CameraActivity implements several callbacks that are executed in response to standard state changes, namely, changes in the activity lifecycle. Let's start by looking at the onCreate() and onSaveInstanceState() callbacks. These methods are called, respectively, at the beginning and end of the activity lifecycle. The onCreate() callback typically sets up the activity's view hierarchy, initializes data, and reads any saved data that may have been written the last time onSaveInstanceState() was called.
For details about the Android activity lifecycle, see the official documentation at http://developer.android.com/reference/android/app/Activity.html#ActivityLifecycle.
In CameraActivity, the onCreate() callback sets up the camera view and initializes data about the cameras. It also reads any previous data about the active camera that was written by onSaveInstanceState(). Here are the implementations of the two methods:
    @SuppressLint("NewApi")
    @Override
    protected void onCreate(final Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        final Window window = getWindow();
        window.addFlags(
                WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);

        if (savedInstanceState != null) {
            mCameraIndex = savedInstanceState.getInt(
                    STATE_CAMERA_INDEX, 0);
        } else {
            mCameraIndex = 0;
        }

        if (Build.VERSION.SDK_INT >=
                Build.VERSION_CODES.GINGERBREAD) {
            CameraInfo cameraInfo = new CameraInfo();
            Camera.getCameraInfo(mCameraIndex, cameraInfo);
            mIsCameraFrontFacing =
                    (cameraInfo.facing ==
                            CameraInfo.CAMERA_FACING_FRONT);
            mNumCameras = Camera.getNumberOfCameras();
        } else { // pre-Gingerbread
            // Assume there is only 1 camera and it is rear-facing.
            mIsCameraFrontFacing = false;
            mNumCameras = 1;
        }

        mCameraView = new NativeCameraView(this, mCameraIndex);
        mCameraView.setCvCameraViewListener(this);
        setContentView(mCameraView);
    }

    @Override
    public void onSaveInstanceState(Bundle savedInstanceState) {
        // Save the current camera index.
        savedInstanceState.putInt(STATE_CAMERA_INDEX, mCameraIndex);

        super.onSaveInstanceState(savedInstanceState);
    }
Note that certain data about the device's cameras are unavailable on Froyo (the oldest Android version that we support). To avoid runtime errors, we check Build.VERSION.SDK_INT before using the newer APIs. Also, to avoid seeing unnecessary warnings in Eclipse, we add the @SuppressLint("NewApi") annotation to the declaration of onCreate().
Several other activity lifecycle callbacks are also relevant to OpenCV. When the activity goes into the background (the onPause() callback) or finishes (the onDestroy() callback), the camera view should be disabled. When the activity comes into the foreground (the onResume() callback), OpenCVLoader should attempt to initialize the library. (Remember that the camera view is enabled once the library is successfully initialized.) Here are the implementations of the relevant callbacks:
    @Override
    public void onPause() {
        if (mCameraView != null) {
            mCameraView.disableView();
        }
        super.onPause();
    }

    @Override
    public void onResume() {
        super.onResume();
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_3,
                this, mLoaderCallback);
        mIsMenuLocked = false;
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        if (mCameraView != null) {
            mCameraView.disableView();
        }
    }
Note that, in onResume(), we re-enable menu interaction. We do this in case it was previously disabled while pushing a child activity onto the stack.
At this point, our activity has the necessary code to set up a camera view and get data about the device's cameras. Next, we should implement the menu actions that enable the user to switch the camera and request that a photo be taken. The relevant callbacks here are onCreateOptionsMenu() and onOptionsItemSelected(). In onCreateOptionsMenu(), we load our menu from its resource file. Then, if the device has only one camera, we remove the Next Cam menu item. In onOptionsItemSelected(), we handle the Next Cam menu item by cycling to the next camera index and then recreating the activity. (Remember that the camera index is saved in onSaveInstanceState() and restored in onCreate(), where it is used to construct the camera view.) We handle the Take Photo menu item by setting a Boolean value, which we check in an OpenCV callback later. In either case, we block any further handling of menu options until the current handling is complete (for example, until onResume()). Here is the implementation of the two menu-related callbacks:
    @Override
    public boolean onCreateOptionsMenu(final Menu menu) {
        getMenuInflater().inflate(R.menu.activity_camera, menu);
        if (mNumCameras < 2) {
            // Remove the option to switch cameras, since there is
            // only 1.
            menu.removeItem(R.id.menu_next_camera);
        }
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(final MenuItem item) {
        if (mIsMenuLocked) {
            return true;
        }
        switch (item.getItemId()) {
        case R.id.menu_next_camera:
            mIsMenuLocked = true;

            // With another camera index, recreate the activity.
            mCameraIndex++;
            if (mCameraIndex == mNumCameras) {
                mCameraIndex = 0;
            }
            recreate();

            return true;
        case R.id.menu_take_photo:
            mIsMenuLocked = true;

            // Next frame, take the photo.
            mIsPhotoPending = true;

            return true;
        default:
            return super.onOptionsItemSelected(item);
        }
    }
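The increment-and-wrap logic for the camera index is equivalent to modular arithmetic. Isolated in a small helper of our own (not part of the book's code), it can be checked on its own:

```java
// Our own helper isolating the index-cycling logic: increment the index
// and wrap back to 0, which is the same as (index + 1) % numCameras.
public class CameraIndexCycler {
    public static int next(int cameraIndex, int numCameras) {
        cameraIndex++;
        if (cameraIndex == numCameras) {
            cameraIndex = 0;
        }
        return cameraIndex;
    }
}
```

On a two-camera device, repeated Next Cam selections therefore alternate between indices 0 and 1; on a one-camera device the index stays at 0 (though in that case we remove the menu item anyway).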
Next, let's look at the callbacks that are required by the CvCameraViewListener2 interface. CameraActivity does not need to do anything when the camera feed starts (the onCameraViewStarted() callback) or stops (the onCameraViewStopped() callback), but it may need to perform some operations whenever a new frame arrives (the onCameraFrame() callback). First, if the user has requested a photo, one should be taken. (The photo capture functionality is actually quite complex, so we put it in a helper method, takePhoto(), which we will examine later in this section.)
Second, if the active camera is front-facing (that is, user-facing), the camera view should be mirrored (horizontally flipped), since people are accustomed to looking at themselves in a mirror, rather than from a camera's true perspective. OpenCV's Core.flip() method can be used to mirror the image. The arguments to Core.flip() are a source Mat, a destination Mat (which may be the same as the source), and an integer indicating whether the flip should be vertical (0), horizontal (1), or both (-1). Here is the implementation of the CvCameraViewListener2 callbacks:
    @Override
    public void onCameraViewStarted(final int width,
            final int height) {
    }

    @Override
    public void onCameraViewStopped() {
    }

    @Override
    public Mat onCameraFrame(final CvCameraViewFrame inputFrame) {
        final Mat rgba = inputFrame.rgba();

        if (mIsPhotoPending) {
            mIsPhotoPending = false;
            takePhoto(rgba);
        }

        if (mIsCameraFrontFacing) {
            // Mirror (horizontally flip) the preview.
            Core.flip(rgba, rgba, 1);
        }

        return rgba;
    }
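To make the flip codes concrete, here is a plain-Java analog of Core.flip() operating on a small 2D int array standing in for an image. This is our own illustrative helper, not OpenCV code; it follows the convention described above (1 mirrors columns, 0 mirrors rows, -1 does both):

```java
// Plain-Java analog of Core.flip() for an int[rows][cols] "image".
// flipCode > 0: horizontal flip (mirror columns),
// flipCode == 0: vertical flip (mirror rows),
// flipCode < 0: both.
public class FlipDemo {
    public static int[][] flip(int[][] src, int flipCode) {
        int rows = src.length, cols = src[0].length;
        int[][] dst = new int[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int sr = (flipCode <= 0) ? rows - 1 - r : r; // vertical or both
                int sc = (flipCode != 0) ? cols - 1 - c : c; // horizontal or both
                dst[r][c] = src[sr][sc];
            }
        }
        return dst;
    }
}
```

With flipCode 1, each row is reversed left-to-right, which is exactly the mirroring we want for a front-facing camera preview.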
Now, finally, we are arriving at the function that will capture users' hearts and minds, or at least, their photos. As an argument, takePhoto() receives an RGBA color Mat that was read from the camera. We want to write this image to disk using an OpenCV method called Highgui.imwrite(). This method requires an image in BGR or BGRA color format, so first we must convert the RGBA image using the Imgproc.cvtColor() method. Besides saving the image to disk, we also want to enable other apps to find it via Android's MediaStore. To do so, we generate some metadata about the photo and then, using a ContentResolver object, we insert this metadata into MediaStore and get back a URI.
If we fail to save the photo or to insert it into MediaStore, we give up and call a helper method, onTakePhotoFailed(), which unlocks menu interaction and shows an error message to the user. On the other hand, if everything succeeds, we start LabActivity and pass it the data it needs to locate the saved photo. Here is the implementation of takePhoto() and onTakePhotoFailed():
    private void takePhoto(final Mat rgba) {

        // Determine the path and metadata for the photo.
        final long currentTimeMillis = System.currentTimeMillis();
        final String appName = getString(R.string.app_name);
        final String galleryPath =
                Environment.getExternalStoragePublicDirectory(
                        Environment.DIRECTORY_PICTURES).toString();
        final String albumPath = galleryPath + "/" + appName;
        final String photoPath = albumPath + "/" +
                currentTimeMillis + ".png";
        final ContentValues values = new ContentValues();
        values.put(MediaStore.MediaColumns.DATA, photoPath);
        values.put(Images.Media.MIME_TYPE,
                LabActivity.PHOTO_MIME_TYPE);
        values.put(Images.Media.TITLE, appName);
        values.put(Images.Media.DESCRIPTION, appName);
        values.put(Images.Media.DATE_TAKEN, currentTimeMillis);

        // Ensure that the album directory exists.
        File album = new File(albumPath);
        if (!album.isDirectory() && !album.mkdirs()) {
            Log.e(TAG, "Failed to create album directory at " +
                    albumPath);
            onTakePhotoFailed();
            return;
        }

        // Try to create the photo.
        Imgproc.cvtColor(rgba, mBgr, Imgproc.COLOR_RGBA2BGR, 3);
        if (!Highgui.imwrite(photoPath, mBgr)) {
            Log.e(TAG, "Failed to save photo to " + photoPath);
            onTakePhotoFailed();
            return;
        }
        Log.d(TAG, "Photo saved successfully to " + photoPath);

        // Try to insert the photo into the MediaStore.
        Uri uri;
        try {
            uri = getContentResolver().insert(
                    Images.Media.EXTERNAL_CONTENT_URI, values);
        } catch (final Exception e) {
            Log.e(TAG, "Failed to insert photo into MediaStore");
            e.printStackTrace();

            // Since the insertion failed, delete the photo.
            File photo = new File(photoPath);
            if (!photo.delete()) {
                Log.e(TAG, "Failed to delete non-inserted photo");
            }

            onTakePhotoFailed();
            return;
        }

        // Open the photo in LabActivity.
        final Intent intent = new Intent(this, LabActivity.class);
        intent.putExtra(LabActivity.EXTRA_PHOTO_URI, uri);
        intent.putExtra(LabActivity.EXTRA_PHOTO_DATA_PATH,
                photoPath);
        startActivity(intent);
    }

    private void onTakePhotoFailed() {
        mIsMenuLocked = false;

        // Show an error message.
        final String errorMessage =
                getString(R.string.photo_error_message);
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                Toast.makeText(CameraActivity.this, errorMessage,
                        Toast.LENGTH_SHORT).show();
            }
        });
    }
}
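The path-building logic in takePhoto() can be isolated and checked on its own. The helper below is ours, mirroring the string concatenation in the listing; the directory and app name in the usage comment are arbitrary sample values:

```java
// Our own helper mirroring takePhoto()'s path construction:
// <gallery path>/<app name>/<timestamp>.png
public class PhotoPaths {
    public static String buildPhotoPath(String galleryPath, String appName,
                                        long currentTimeMillis) {
        String albumPath = galleryPath + "/" + appName;
        return albumPath + "/" + currentTimeMillis + ".png";
    }
}
```

Isolating pure string logic like this makes it testable off-device, while the Android-dependent parts (Environment, ContentResolver) stay in the activity.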
For now, that's everything we want CameraActivity to do. We will expand this class in the following chapters by adding more menu actions and handling them in the onCameraFrame() callback.