Adding 3D tracking and rendering to CameraActivity

We need to make a few changes to CameraActivity to conform with our changes to ImageDetectionFilter and with the new interface provided by ARFilter. We also need to modify the activity's layout so that it includes a GLSurfaceView. The adapter for this GLSurfaceView will be ARCubeRenderer. The ImageDetectionFilter and the ARCubeRenderer methods will use CameraProjectionAdapter to coordinate their projection matrices.
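To make "coordinating their projection matrices" concrete, here is a minimal, plain-Java sketch of how a perspective projection matrix can be derived from a camera's vertical field of view and aspect ratio. This is the kind of computation that CameraProjectionAdapter performs once it has a Camera.Parameters instance; the class and method names here are illustrative, not the project's actual API.

```java
// Illustrative only: deriving an OpenGL-style perspective projection
// matrix from a camera's field of view, similar in spirit to what
// CameraProjectionAdapter does with a Camera.Parameters instance.
public class ProjectionSketch {

    // Returns a column-major 4x4 perspective matrix (OpenGL convention).
    public static float[] perspective(float fovYDegrees, float aspect,
            float near, float far) {
        // Focal length: cotangent of half the vertical field of view.
        float f = (float) (1.0 / Math.tan(Math.toRadians(fovYDegrees) / 2.0));
        float[] m = new float[16]; // all other entries remain zero
        m[0] = f / aspect;                        // x scale
        m[5] = f;                                 // y scale
        m[10] = (far + near) / (near - far);      // depth mapping
        m[11] = -1f;                              // perspective divide
        m[14] = (2f * far * near) / (near - far); // depth offset
        return m;
    }

    public static void main(String[] args) {
        // Suppose the camera reports a 45-degree vertical view angle and
        // a 4:3 preview aspect ratio (hypothetical values).
        float[] m = perspective(45f, 4f / 3f, 0.1f, 100f);
        System.out.println(m[5]); // the focal-length term
    }
}
```

Because both the 2D tracker and the 3D renderer work from the same field-of-view data, the virtual cube appears locked to the printed image rather than drifting as the camera moves.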

First, let's make the following changes to the member variables of CameraActivity:

  // The filters.
  private ARFilter[] mImageDetectionFilters;
  private Filter[] mCurveFilters;
  private Filter[] mMixerFilters;
  private Filter[] mConvolutionFilters;

  // ...

  // The camera view.
  private CameraBridgeViewBase mCameraView;

  // An adapter between the video camera and projection matrix.
  private CameraProjectionAdapter mCameraProjectionAdapter;

  // The renderer for 3D augmentations.
  private ARCubeRenderer mARRenderer;

As usual, once the OpenCV library is loaded, we need to create the filters. The only changes are that we need to pass an instance of CameraProjectionAdapter to each constructor of ImageDetectionFilter, and we need to use a NoneARFilter in place of a NoneFilter. The code is as follows:

    public void onManagerConnected(final int status) {
      switch (status) {
        case LoaderCallbackInterface.SUCCESS:
        Log.d(TAG, "OpenCV loaded successfully");
        mCameraView.enableView();
        mBgr = new Mat();

        final ARFilter starryNight;
        try {
          starryNight = new ImageDetectionFilter(
            CameraActivity.this,
              R.drawable.starry_night,
                mCameraProjectionAdapter);
        } catch (IOException e) {
          Log.e(TAG, "Failed to load drawable: " +
            "starry_night");
          e.printStackTrace();
          break;
        }

        final ARFilter akbarHunting;
        try {
          akbarHunting = new ImageDetectionFilter(
            CameraActivity.this,
              R.drawable.akbar_hunting_with_cheetahs,
                mCameraProjectionAdapter);
        } catch (IOException e) {
          Log.e(TAG, "Failed to load drawable: " +
            "akbar_hunting_with_cheetahs");
          e.printStackTrace();
          break;
        }

        mImageDetectionFilters = new ARFilter[] {
          new NoneARFilter(),
          starryNight,
          akbarHunting
        };

      // ...
    }
  }
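The reason NoneARFilter replaces NoneFilter is presumably that mImageDetectionFilters is typed as ARFilter[], so even the "do nothing" slot must implement the ARFilter interface. The selection pattern itself is simple: keep an index into the array and apply whichever filter is selected to each frame. The following self-contained sketch shows that pattern with stand-in types (the real project's filters operate on OpenCV Mat frames, not Strings):

```java
// Stand-in types for illustration only; the project's Filter and
// ARFilter classes process OpenCV Mat frames instead of Strings.
public class FilterSelectionSketch {

    interface Filter {
        String apply(String frame);
    }

    // A pass-through filter, analogous in role to NoneARFilter: it does
    // nothing, but it still fits the filter array's element type.
    static class NoneFilter implements Filter {
        public String apply(String frame) {
            return frame;
        }
    }

    static class UppercaseFilter implements Filter {
        public String apply(String frame) {
            return frame.toUpperCase();
        }
    }

    public static void main(String[] args) {
        Filter[] filters = { new NoneFilter(), new UppercaseFilter() };
        int filterIndex = 1; // toggled by the user, e.g. via a menu item

        // Per-frame processing: apply whichever filter is selected.
        String frame = "frame data";
        System.out.println(filters[filterIndex].apply(frame)); // FRAME DATA
    }
}
```

With this pattern, cycling through filters is just a matter of incrementing the index modulo the array length; no conditional logic is needed in the per-frame code path.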

The remaining changes belong in the onCreate method, where we should create and configure the instances of GLSurfaceView, ARCubeRenderer, and CameraProjectionAdapter. The implementation includes some boilerplate code to overlay an instance of GLSurfaceView atop an instance of NativeCameraView. These two views are contained inside a standard Android layout widget called a FrameLayout. After setting up the layout, we need a Camera instance and a Camera.Parameters instance in order to do our remaining configuration. The Camera instance is obtained via a static method, Camera.open(), which may take a camera index as an optional argument on Android 2.3 and later. (By default, the first rear-facing camera is used.) When we are done with the Camera, we must call its release() method in order to make it available later. The code is as follows:

  protected void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    // ...

    FrameLayout layout = new FrameLayout(this);
    layout.setLayoutParams(new FrameLayout.LayoutParams(
      FrameLayout.LayoutParams.MATCH_PARENT,
        FrameLayout.LayoutParams.MATCH_PARENT));
    setContentView(layout);

    mCameraView = new NativeCameraView(this, mCameraIndex);
    mCameraView.setCvCameraViewListener(this);
    mCameraView.setLayoutParams(new FrameLayout.LayoutParams(
      FrameLayout.LayoutParams.MATCH_PARENT,
        FrameLayout.LayoutParams.MATCH_PARENT));
    layout.addView(mCameraView);

    GLSurfaceView glSurfaceView = new GLSurfaceView(this);
    glSurfaceView.getHolder().setFormat(
      PixelFormat.TRANSPARENT);
    glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 0, 0);
    glSurfaceView.setZOrderOnTop(true);
    glSurfaceView.setLayoutParams(new FrameLayout.LayoutParams(
      FrameLayout.LayoutParams.MATCH_PARENT,
        FrameLayout.LayoutParams.MATCH_PARENT));
    layout.addView(glSurfaceView);

    mCameraProjectionAdapter = new CameraProjectionAdapter();

    mARRenderer = new ARCubeRenderer();
    mARRenderer.cameraProjectionAdapter =
      mCameraProjectionAdapter;
    glSurfaceView.setRenderer(mARRenderer);

    final Camera camera;
    if (Build.VERSION.SDK_INT >=
      Build.VERSION_CODES.GINGERBREAD) {
      CameraInfo cameraInfo = new CameraInfo();
      Camera.getCameraInfo(mCameraIndex, cameraInfo);
      mIsCameraFrontFacing =
        (cameraInfo.facing ==
          CameraInfo.CAMERA_FACING_FRONT);
      mNumCameras = Camera.getNumberOfCameras();
      camera = Camera.open(mCameraIndex);
    } else { // pre-Gingerbread
      // Assume there is only 1 camera and it is rear-facing.
      mIsCameraFrontFacing = false;
      mNumCameras = 1;
      camera = Camera.open();
    }
    final Parameters parameters = camera.getParameters();
    mCameraProjectionAdapter.setCameraParameters(
      parameters);
    camera.release();
  }

That's all! Run and test Second Sight. When you activate one of the instances of ImageDetectionFilter and hold the appropriate printed image in front of the camera, you should see a colorful cube rendered on top of the image. For example, see the following screenshot: