With the sparse 3D point cloud and the recovered camera poses, we can proceed to dense reconstruction using MVS. We already covered the basic concept of MVS in the first section; however, we do not need to implement it from scratch, as we can use the OpenMVS project. To densify the cloud with OpenMVS, we must save our project in a specialized format. OpenMVS provides a class for saving and loading .mvs projects, the MVS::Interface class, defined in MVS/Interface.h.
Let's start with the camera:
MVS::Interface interface;
MVS::Interface::Platform p;
// Add camera
MVS::Interface::Platform::Camera c;
c.K = Matx33d(K_); // The intrinsic matrix as refined by the bundle adjustment
c.R = Matx33d::eye(); // Camera doesn't have any inherent rotation
c.C = Point3d(0,0,0); // or translation
c.name = "Camera1";
const Size imgS = images[imagesFilenames[0]].size();
c.width = imgS.width; // Size of the image, to normalize the intrinsics
c.height = imgS.height;
p.cameras.push_back(c);
We must take care when adding the camera poses (views). OpenMVS expects the rotation and the center of the camera, not the camera pose matrix used for point projection. We therefore must convert the translation vector into the camera center by applying the inverse rotation:
// Add views
p.poses.resize(Rs.size());
for (size_t i = 0; i < Rs.size(); ++i) {
    Mat t = -Rs[i].t() * Ts[i]; // Camera *center*: C = -R^T * t
    p.poses[i].C.x = t.at<double>(0);
    p.poses[i].C.y = t.at<double>(1);
    p.poses[i].C.z = t.at<double>(2);
    Rs[i].convertTo(p.poses[i].R, CV_64FC1);

    // Add corresponding image (make sure index aligns)
    MVS::Interface::Image image;
    image.cameraID = 0;
    image.poseID = i;
    image.name = imagesFilenames[i];
    image.platformID = 0;
    interface.images.push_back(image);
}
p.name = "Platform1";
interface.platforms.push_back(p);
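Adding the sparse points follows the same pattern. The following is a hedged sketch: `pointCloud` (cv::Point3f positions) and `pointViews` (the indices of the images observing each point) are hypothetical containers from our SfM stage, and the member names follow MVS/Interface.h but should be checked against your OpenMVS version:

```cpp
// Add the sparse point cloud to the Interface
for (size_t i = 0; i < pointCloud.size(); ++i) {
    MVS::Interface::Vertex v;
    v.X = MVS::Interface::Pos3f(pointCloud[i].x,
                                pointCloud[i].y,
                                pointCloud[i].z);
    // Record which views observed this 3D point
    for (const int imgIdx : pointViews[i]) {
        MVS::Interface::Vertex::View view;
        view.imageID = imgIdx; // index into interface.images
        view.confidence = 0.f;
        v.views.push_back(view);
    }
    interface.vertices.push_back(v);
}
// Serialize the whole scene to disk for DensifyPointCloud
MVS::ARCHIVE::SerializeSave(interface, "crazyhorse.mvs");
```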
After adding the point cloud to the Interface as well, we can proceed with the cloud densifying in the command line:
$ ${openMVS}/build/bin/DensifyPointCloud -i crazyhorse.mvs
18:48:32 [App ] Command line: -i crazyhorse.mvs
18:48:32 [App ] Camera model loaded: platform 0; camera 0; f 0.896x0.896; poses 7
18:48:32 [App ] Image loaded 0: P1000965.JPG
18:48:32 [App ] Image loaded 1: P1000966.JPG
18:48:32 [App ] Image loaded 2: P1000967.JPG
18:48:32 [App ] Image loaded 3: P1000968.JPG
18:48:32 [App ] Image loaded 4: P1000969.JPG
18:48:32 [App ] Image loaded 5: P1000970.JPG
18:48:32 [App ] Image loaded 6: P1000971.JPG
18:48:32 [App ] Scene loaded from interface format (11ms):
7 images (7 calibrated) with a total of 5.25 MPixels (0.75 MPixels/image)
1557 points, 0 vertices, 0 faces
18:48:32 [App ] Preparing images for dense reconstruction completed: 7 images (125ms)
18:48:32 [App ] Selecting images for dense reconstruction completed: 7 images (5ms)
Estimated depth-maps 7 (100%, 1m44s705ms)
Filtered depth-maps 7 (100%, 1s671ms)
Fused depth-maps 7 (100%, 421ms)
18:50:20 [App ] Depth-maps fused and filtered: 7 depth-maps, 1653963 depths, 263027 points (16%) (1s684ms)
18:50:20 [App ] Densifying point-cloud completed: 263027 points (1m48s263ms)
18:50:21 [App ] Scene saved (489ms):
7 images (7 calibrated)
263027 points, 0 vertices, 0 faces
18:50:21 [App ] Point-cloud saved: 263027 points (46ms)
This process might take a few minutes to complete. However, once it's done, the results are very impressive. The dense point cloud has a whopping 263,027 3D points, compared to just 1,557 in the sparse cloud. We can visualize the dense OpenMVS project using the Viewer app bundled with OpenMVS.
OpenMVS has several more functions to complete the reconstruction, such as extracting a triangular mesh from the dense point cloud.
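For instance, mesh extraction can be run with the ReconstructMesh tool on the densified scene. The `_dense` suffix shown here is the output naming we would expect from DensifyPointCloud; verify the exact filename produced by your build:

```shell
$ ${openMVS}/build/bin/ReconstructMesh -i crazyhorse_dense.mvs
```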