We need to make a few changes to CameraActivity to conform with our changes to ImageDetectionFilter and with the new interface provided by ARFilter. We also need to modify the activity's layout so that it includes a GLSurfaceView. The renderer for this GLSurfaceView will be ARCubeRenderer. The ImageDetectionFilter and ARCubeRenderer instances will use CameraProjectionAdapter to coordinate their projection matrices.

First, let's make the following changes to the member variables of CameraActivity:
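As a minimal sketch (the exact declarations may differ from the original listing), the new members referenced by the onCreate code later in this section are a CameraProjectionAdapter and an ARCubeRenderer; any filter fields retyped to the ARFilter interface belong to the same change but are not shown here:

    // Sketch only: new members used by the onCreate code below.
    // The exact declarations may differ from the original listing.

    // Adapter that turns the camera's parameters into an OpenGL
    // projection matrix, shared with ARCubeRenderer.
    private CameraProjectionAdapter mCameraProjectionAdapter;

    // Renderer that draws the cube on the GLSurfaceView overlay.
    private ARCubeRenderer mARRenderer;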
The remaining changes belong in the onCreate method, where we create and configure the instances of GLSurfaceView, ARCubeRenderer, and CameraProjectionAdapter. The implementation includes some boilerplate code to overlay an instance of GLSurfaceView atop an instance of NativeCameraView. These two views are contained inside a standard Android layout widget called a FrameLayout. After setting up the layout, we need a Camera instance and a Camera.Parameters instance in order to do our remaining configuration. The Camera instance is obtained via the static method Camera.open(), which may take a camera index as an optional argument on Android 2.3 and later. (By default, the first rear-facing camera is used.) When we are done with the Camera, we must call its release() method so that the camera becomes available again later.

Every call to Camera.open() must be paired with a call to the Camera instance's release() method. Otherwise, our app and other apps may subsequently encounter a RuntimeException when calling Camera.open(). For more details about the Camera class, see the official documentation at http://developer.android.com/reference/android/hardware/Camera.html.

The code is as follows:
    @Override
    protected void onCreate(final Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // ...

        // Lay out the camera preview and the GL overlay in a FrameLayout.
        FrameLayout layout = new FrameLayout(this);
        layout.setLayoutParams(new FrameLayout.LayoutParams(
                FrameLayout.LayoutParams.MATCH_PARENT,
                FrameLayout.LayoutParams.MATCH_PARENT));
        setContentView(layout);

        mCameraView = new NativeCameraView(this, mCameraIndex);
        mCameraView.setCvCameraViewListener(this);
        mCameraView.setLayoutParams(new FrameLayout.LayoutParams(
                FrameLayout.LayoutParams.MATCH_PARENT,
                FrameLayout.LayoutParams.MATCH_PARENT));
        layout.addView(mCameraView);

        // Overlay a transparent GLSurfaceView on top of the camera preview.
        GLSurfaceView glSurfaceView = new GLSurfaceView(this);
        glSurfaceView.getHolder().setFormat(
                PixelFormat.TRANSPARENT);
        glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 0, 0);
        glSurfaceView.setZOrderOnTop(true);
        glSurfaceView.setLayoutParams(new FrameLayout.LayoutParams(
                FrameLayout.LayoutParams.MATCH_PARENT,
                FrameLayout.LayoutParams.MATCH_PARENT));
        layout.addView(glSurfaceView);

        mCameraProjectionAdapter = new CameraProjectionAdapter();

        mARRenderer = new ARCubeRenderer();
        mARRenderer.cameraProjectionAdapter = mCameraProjectionAdapter;
        glSurfaceView.setRenderer(mARRenderer);

        final Camera camera;
        if (Build.VERSION.SDK_INT >=
                Build.VERSION_CODES.GINGERBREAD) {
            CameraInfo cameraInfo = new CameraInfo();
            Camera.getCameraInfo(mCameraIndex, cameraInfo);
            mIsCameraFrontFacing = (cameraInfo.facing ==
                    CameraInfo.CAMERA_FACING_FRONT);
            mNumCameras = Camera.getNumberOfCameras();
            camera = Camera.open(mCameraIndex);
        } else { // pre-Gingerbread
            // Assume there is only 1 camera and it is rear-facing.
            mIsCameraFrontFacing = false;
            mNumCameras = 1;
            camera = Camera.open();
        }
        final Parameters parameters = camera.getParameters();
        mCameraProjectionAdapter.setCameraParameters(
                parameters);
        camera.release();
    }
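As a defensive variant of the camera setup above (a sketch, not the original listing), the open/release pairing mentioned in the earlier note can be wrapped in try/finally so that release() runs even if reading or applying the parameters throws:

        // Sketch only: ensure release() is always called, even on an exception.
        final Camera camera;
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.GINGERBREAD) {
            camera = Camera.open(mCameraIndex);
        } else {
            camera = Camera.open();
        }
        try {
            final Parameters parameters = camera.getParameters();
            mCameraProjectionAdapter.setCameraParameters(parameters);
        } finally {
            // Make the camera available to later calls to Camera.open(),
            // in this app or in others.
            camera.release();
        }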
That's all! Run and test Second Sight. When you activate one of the instances of ImageDetectionFilter and hold the appropriate printed image in front of the camera, you should see a colorful cube rendered on top of the image. For example, see the following screenshot: