Next, we try to find a suitable back-facing camera by scanning the list of available cameras on the device. Each camera reports a characteristics flag that tells us whether it is back-facing, as follows:
CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
try {
    String[] camList = manager.getCameraIdList();
    mCameraID = camList[0]; // fallback: save as a class member - mCameraID
    for (String cameraID : camList) {
        CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraID);
        // LENS_FACING may be null on some devices, so avoid unboxing it directly
        Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
        if (facing != null && facing == CameraCharacteristics.LENS_FACING_BACK) {
            mCameraID = cameraID;
            break;
        }
    }
    Log.i(LOGTAG, "Opening camera: " + mCameraID);
    manager.openCamera(mCameraID, mStateCallback, mBackgroundHandler);
} catch (CameraAccessException e) {
    Log.e(LOGTAG, "Cannot access the camera", e);
}
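The mStateCallback passed to openCamera isn't shown above. A minimal sketch of it could look like the following, assuming a class member mCameraDevice and a createCameraPreviewSession() helper (a hypothetical name here, matching the log message used later) that performs the session setup we build next:
private final CameraDevice.StateCallback mStateCallback = new CameraDevice.StateCallback() {
    @Override
    public void onOpened(CameraDevice cameraDevice) {
        mCameraDevice = cameraDevice;  // keep a handle for building capture requests
        createCameraPreviewSession();  // assumed helper: the session setup shown below
    }

    @Override
    public void onDisconnected(CameraDevice cameraDevice) {
        cameraDevice.close();
        mCameraDevice = null;
    }

    @Override
    public void onError(CameraDevice cameraDevice, int error) {
        cameraDevice.close();
        mCameraDevice = null;
        Log.e(LOGTAG, "Camera device error: " + error);
    }
};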
When the camera is opened, we look through the list of available image resolutions and pick a good size: one that is not too big, so per-frame computation won't be lengthy, and one whose aspect ratio matches the screen, so the preview covers the entire display:
final int width = 1280;  // 1280x720 is a good wide-format size, but we can query the
final int height = 720;  // screen to see precisely what resolution it is
CameraCharacteristics characteristics = manager.getCameraCharacteristics(mCameraID);
StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
int bestWidth = 0, bestHeight = 0;
final float aspect = (float) width / height;
for (Size psize : map.getOutputSizes(ImageFormat.YUV_420_888)) {
    final int w = psize.getWidth(), h = psize.getHeight();
    // accept the size if it doesn't exceed our target, improves on the best
    // candidate so far, and has a similar aspect ratio
    if (width >= w && height >= h &&
        bestWidth <= w && bestHeight <= h &&
        Math.abs(aspect - (float) w / h) < 0.2) {
        bestWidth = w;
        bestHeight = h;
    }
}
mPreviewSize = new Size(bestWidth, bestHeight); // save as a class member - mPreviewSize
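The comments on the target size suggest querying the screen rather than hardcoding 1280x720. A minimal sketch of that, assuming the code runs inside an Activity, could be:
// Query the real display resolution instead of hardcoding a target size
// (assumes we are inside an Activity; needs android.util.DisplayMetrics).
DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getRealMetrics(metrics);
final int width = metrics.widthPixels;   // use these as the target size above
final int height = metrics.heightPixels;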
We're now ready to request access to the video feed. We will request the raw data coming from the camera. Almost all Android devices provide a YUV 420 stream, so it's good practice to target that format; however, we will need a conversion step to get RGB data, as follows:
mImageReader = ImageReader.newInstance(mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.YUV_420_888, 2);
// The OnImageAvailableListener will get a function call with each frame
mImageReader.setOnImageAvailableListener(mHandler, mBackgroundHandler);

mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mPreviewRequestBuilder.addTarget(mImageReader.getSurface());

mCameraDevice.createCaptureSession(Arrays.asList(mImageReader.getSurface()),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession cameraCaptureSession) {
                mCaptureSession = cameraCaptureSession;
                // ... set up auto-focus and start the repeating request here (see the sketch below)
                mHandler.onCameraSetup(mPreviewSize); // notify interested parties
            }

            @Override
            public void onConfigureFailed(CameraCaptureSession cameraCaptureSession) {
                Log.e(LOGTAG, "createCameraPreviewSession failed");
            }
        }, mBackgroundHandler);
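The onConfigured callback above elides the capture setup. A minimal sketch of that step, assuming continuous auto-focus is the behavior we want, could look like this; note that setRepeatingRequest is what actually starts frames flowing to the ImageReader:
// Enable continuous auto-focus and start streaming frames.
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
try {
    // Without a repeating request, no frames reach the ImageReader.
    mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(),
            null /* no per-frame capture callback needed */, mBackgroundHandler);
} catch (CameraAccessException e) {
    Log.e(LOGTAG, "setRepeatingRequest failed", e);
}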
From this point on, our class that implements ImageReader.OnImageAvailableListener will be called with each frame, and we can access the pixels:
@Override
public void onImageAvailable(ImageReader imageReader) {
    android.media.Image image = imageReader.acquireLatestImage();
    if (image == null) {
        return; // no new frame is ready yet
    }
    // For example, get a grayscale image by taking just the Y plane (from YUV).
    // This simple copy assumes the Y plane's row stride equals the image width.
    mPreviewByteBufferGray.rewind();
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    buffer.rewind();
    buffer.get(mPreviewByteBufferGray.array());
    image.close(); // release the image - important!
}
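The RGB conversion step mentioned earlier is not shown in the listener above. Here is a minimal sketch using OpenCV's cvtColor, assuming the common semi-planar chroma layout (pixel stride 2, where the V plane's buffer holds interleaved VU bytes) and row strides equal to the image width; fully planar devices would need a per-pixel copy instead:
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// ... inside onImageAvailable, before image.close():
android.media.Image.Plane[] planes = image.getPlanes();
final int w = image.getWidth(), h = image.getHeight();
ByteBuffer yBuf = planes[0].getBuffer();   // Y plane
ByteBuffer vuBuf = planes[2].getBuffer();  // V plane; interleaved VU on semi-planar devices

// Pack Y followed by interleaved VU into an NV21-ordered byte array.
// The VU buffer may be one byte short of w*h/2; any missing tail byte stays zero.
byte[] nv21 = new byte[w * h * 3 / 2];
yBuf.get(nv21, 0, Math.min(yBuf.remaining(), w * h));
vuBuf.get(nv21, w * h, Math.min(vuBuf.remaining(), w * h / 2));

Mat yuv = new Mat(h + h / 2, w, CvType.CV_8UC1);
yuv.put(0, 0, nv21);
Mat rgb = new Mat();
Imgproc.cvtColor(yuv, rgb, Imgproc.COLOR_YUV2RGB_NV21); // rgb is now a CV_8UC3 Mat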
At this point, we can send the byte buffer for processing in OpenCV. Next up, we will develop the camera calibration process with the aruco module.