Exercises

There are sample code routines in the …/opencv/samples/c/ directory that demonstrate many of the algorithms discussed in this chapter:

  1. The covariance Hessian matrix used in cvGoodFeaturesToTrack() is computed over a square region of the image whose size is set by the block_size parameter of that function.

  2. Refer to Figure 10-2 and consider the function that implements subpixel corner finding, cvFindCornerSubPix().

    1. What would happen if, in Figure 10-2, the checkerboard were twisted so that the straight dark-light lines formed curves that met in a point? Would subpixel corner finding still work? Explain.

    2. If you expand the window size around the twisted checkerboard's corner point (by increasing the win and zero_zone parameters), does subpixel corner finding become more accurate, or does it begin to diverge? Explain your answer.

  3. Optical flow

    1. Describe an object that would be better tracked by block matching than by Lucas-Kanade optical flow.

    2. Describe an object that would be better tracked by Lucas-Kanade optical flow than by block matching.

  4. Compile lkdemo.c. Attach a web camera (or use a previously captured sequence of a textured moving object). When running the program, note that "r" autoinitializes tracking, "c" clears tracking, and a mouse click will enter a new point or turn off an old point. Run lkdemo.c, initialize point tracking by typing "r", and observe the effects.

    1. Now go into the code and remove the subpixel point placement function cvFindCornerSubPix(). Does this hurt the results? In what way?

    2. Go into the code again and, in place of cvGoodFeaturesToTrack(), just put down a grid of points in an ROI around the object. Describe what happens to the points and why.

      Hint: Part of what happens is a consequence of the aperture problem—given a fixed window size and a line, we can't tell how the line is moving.

  5. Modify the lkdemo.c program to create a program that performs simple image stabilization for moderately moving cameras. Display the stabilized results in the center of a much larger window than the one output by your camera (so that the frame may wander while the first points remain stable).
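One piece of this exercise, drawing the stabilized frame inside a larger window, can be sketched in plain C (grayscale buffers and the paste_frame name are assumptions, not lkdemo.c code). The offset (dx, dy) would be the canvas-centering offset minus the frame translation estimated from the tracked points:

```c
/* Copy a small grayscale frame (fw x fh) into a larger canvas (cw x ch)
   so that the frame's top-left corner lands at (dx, dy).  Pixels that
   would fall outside the canvas are clipped, so a wandering frame never
   writes out of bounds. */
static void paste_frame(unsigned char *canvas, int cw, int ch,
                        const unsigned char *frame, int fw, int fh,
                        int dx, int dy) {
    for (int y = 0; y < fh; y++) {
        int cy = dy + y;
        if (cy < 0 || cy >= ch) continue;
        for (int x = 0; x < fw; x++) {
            int cx = dx + x;
            if (cx < 0 || cx >= cw) continue;
            canvas[cy * cw + cx] = frame[y * fw + x];
        }
    }
}
```

With this in place, stabilization reduces to estimating a per-frame translation (e.g., the median displacement of the lkdemo.c points) and subtracting it from the centering offset before each paste.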

  6. Compile and run camshiftdemo.c using a web camera or color video of a moving colored object. Use the mouse to draw a (tight) box around the moving object; the routine will track it.

    1. In camshiftdemo.c, replace the cvCamShift() routine with cvMeanShift(). Describe situations in which one tracker will work better than the other.

    2. Write a function that will put down a grid of points in the initial cvMeanShift() box. Run both trackers at once.

    3. How can these two trackers be used together to make tracking more robust? Explain and/or experiment.

  7. Compile and run the motion template code motempl.c with a web camera or using a previously stored movie file.

    1. Modify motempl.c so that it can do simple gesture recognition.

    2. If the camera is moving, explain how to use your motion stabilization code from exercise 5 to enable motion templates to work for moderately moving cameras as well.

  8. Describe how you can track circular (nonlinear) motion using a Kalman filter with a linear state model (i.e., an ordinary, not extended, Kalman filter).

    Hint: How could you preprocess this to get back to linear dynamics?

  9. Use a motion model that posits that the current state depends on the previous state's location and velocity. Combine the lkdemo.c (using only a few click points) with the Kalman filter to track Lucas-Kanade points better. Display the uncertainty around each point. Where does this tracking fail?

    Hint: Use Lucas-Kanade as the observation model for the Kalman filter, and adjust noise so that it tracks. Keep motions reasonable.

  10. A Kalman filter depends on linear dynamics and on Markov independence (i.e., it assumes the current state depends only on the immediate past state, not on all past states). Suppose you want to track an object whose movement is related to its previous location and its previous velocity but that you mistakenly include a dynamics term only for state dependence on the previous location—in other words, forgetting the previous velocity term.

    1. Do the Kalman assumptions still hold? If so, explain why; if not, explain how the assumptions were violated.

    2. How can a Kalman filter be made to still track when you forget some terms of the dynamics?

      Hint: Think of the noise model.
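The hint can be tested numerically in plain C (an illustrative sketch, not the book's code): a filter whose model wrongly says "position stays put" still tracks a constant-velocity target if the process noise q is inflated. The larger q raises the Kalman gain, so the filter leans on the measurements and its steady-state lag shrinks:

```c
/* Scalar Kalman filter with the (wrong) model "position is constant",
   tracking a target that actually moves at constant velocity v with
   exact position measurements.  Returns the lag behind the true
   position after the given number of steps. */
static double lag_after(double v, double q, double r, int steps) {
    double x = 0.0, p = 1.0;
    for (int t = 1; t <= steps; t++) {
        double z = v * t;            /* true position, measured */
        p += q;                      /* predict: x unchanged, P grows */
        double k = p / (p + r);      /* correct */
        x += k * (z - x);
        p *= (1 - k);
    }
    return v * steps - x;
}
```

With small q the filter trusts its broken dynamics and lags far behind; with q comparable to the per-step motion it stays close, which is exactly the "dump the forgotten terms into the noise model" trick the hint is after.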

  11. Use a web camera or a movie of a person waving two brightly colored objects, one in each hand. Use the condensation algorithm to track both hands.