- Define the image plane coordinates and pixel coordinates associated with camera measurements of robotic system location and orientation.
- Derive the interaction matrix relating the derivatives of image plane coordinates to the velocity of the origin of the camera frame and the angular velocity of the camera frame in the inertial frame.
- State the image based visual servo control problem and discuss its stability.
- Derive computed torque controllers in terms of task space coordinates.
- Derive task space controllers for visual servo problems.
Modern robotic control systems utilize a wide variety of sensors to measure the configuration and motion of a robotic system during its operation. These sensors may include accelerometers, rate gyros, magnetometers, angle encoders, or global positioning system (GPS) sensors, to name a few. This section will discuss a basic camera model that can be used to represent many of the cameras that are used in robotics for various control tasks. The simplest camera model that is applicable to many of the commercially available cameras is based on the pinhole camera or perspective projection camera.
Figure 7.2 Perspective pinhole camera, coordinate calculation.
Figure 7.3 Perspective pinhole camera, front projected.
As shown in Figure 7.1, when a classical pinhole camera creates an image of a point $p$, a ray is traced from the point $p$ through the pinhole located at the origin of the camera frame onto the focal plane. The focal length $f$ is the distance from the origin of the camera frame to the focal plane. This chapter denotes the basis for the camera frame as $\{\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3\}$. By convention the line-of-sight of the camera is along the $\mathbf{c}_3$ axis. The camera coordinates $(x_c, y_c, z_c)$ of a feature point $p$ viewed from a camera are defined in terms of the position vector of the point $p$ in the camera frame,
$$ r_{p/c} = x_c \mathbf{c}_1 + y_c \mathbf{c}_2 + z_c \mathbf{c}_3. $$
The coordinates $(u, v)$ of the image point in the focal plane are known as the canonical image plane coordinates, retinal coordinates, or calibrated coordinates. As shown in Figure 7.1, the location of the focal plane on the opposite side of the pinhole from the point $p$ being viewed implies that the image in the focal plane is inverted. Figure 7.2 shows a full view of one of the coordinate planes that contains the line-of-sight in this case. The relationship between the camera coordinates $(x_c, y_c, z_c)$ and the focal plane coordinates $(u, v)$ can be derived by considering similar triangles in Figure 7.2. Knowing the focal length $f$ between the focal plane and camera frame, the relationship is
$$ u = -f\,\frac{x_c}{z_c}, \qquad v = -f\,\frac{y_c}{z_c}. $$
It is common practice to replace the physically motivated geometry in Figure 7.1 with a mathematical model that places the focal plane between the origin of the camera frame and the point $p$ being viewed, as shown in Figure 7.3. While this arrangement is not physically realizable, it is used so that the image is not inverted in the focal plane. In this mathematical model the negative signs in Equation (7.2) do not appear, and the relationship between the focal plane coordinates $(u, v)$ and the camera coordinates $(x_c, y_c, z_c)$ is simply $u = f\,x_c/z_c$ and $v = f\,y_c/z_c$. These equations can be rewritten in terms of the homogeneous coordinates of the image point in the focal plane and the camera frame coordinates as
$$ z_c \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_c \\ y_c \\ z_c \\ 1 \end{pmatrix}. $$
Equation (7.2) relates the focal plane coordinates $(u, v)$ to the camera coordinates $(x_c, y_c, z_c)$, but it is often desired to understand how the coordinates of the point $p$ in the inertial 0 frame are related to the focal plane coordinates. The tools derived in Chapters 2 and 3 can be used to obtain the desired expression. Recall that the homogeneous coordinates of the point with respect to the camera frame are defined as $(x_c, y_c, z_c, 1)^T$. These coordinates are related to the inertial frame 0 using the homogeneous transform $H^c_0$,
$$ \begin{pmatrix} x_c \\ y_c \\ z_c \\ 1 \end{pmatrix} = H^c_0 \begin{pmatrix} x_0 \\ y_0 \\ z_0 \\ 1 \end{pmatrix}. $$
The final homogeneous transformation that relates the inertial frame 0 coordinates and the focal plane coordinates is achieved by combining Equations (7.2) and (7.3),
$$ z_c \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} H^c_0 \begin{pmatrix} x_0 \\ y_0 \\ z_0 \\ 1 \end{pmatrix}. $$
Most modern cameras are composed of sensors that make measurements using charge coupled device, or CCD, arrays. Cameras based on CCD arrays return a two dimensional matrix of intensity values, in contrast to the mathematical abstraction used in the last section that treated the image as a continuously varying intensity over the focal plane. The pixel coordinates are used to represent the locations of the entries in the CCD array of intensity values. The pixel coordinates are introduced in a two step process. First, because the individual pixel elements in the CCD array can have different dimensions in the horizontal and vertical directions, the scaled coordinates $(\bar{u}, \bar{v})$ are defined in terms of the focal plane coordinates $(u, v)$ via the simple diagonal matrix equation
$$ \begin{pmatrix} \bar{u} \\ \bar{v} \end{pmatrix} = \begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix}, $$
where the parameters $s_x$ and $s_y$ are scale factors in the horizontal and vertical directions, respectively.
In addition to scaling, it is also common that the pixels in a CCD array are numbered from left to right, starting at the upper left corner of the pixel array. We account for this offset and define the pixel coordinates $(u_p, v_p)$ by shifting the origin with respect to the scaled coordinates,
$$ \begin{pmatrix} u_p \\ v_p \end{pmatrix} = \begin{pmatrix} \bar{u} \\ \bar{v} \end{pmatrix} + \begin{pmatrix} o_u \\ o_v \end{pmatrix}, $$
where $(o_u, o_v)$ is the location of the principal point, or the image of the line-of-sight of the camera in the CCD array. The process of scaling and translating to obtain the pixel coordinates in terms of the focal plane coordinates can be written by combining Equations (7.5) and (7.6) in terms of a single transformation
$$ \begin{pmatrix} u_p \\ v_p \\ 1 \end{pmatrix} = \begin{pmatrix} s_x & 0 & o_u \\ 0 & s_y & o_v \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}. $$
Before closing this section, several variants of the equations that relate the camera coordinates, inertial coordinates, focal plane coordinates, and pixel coordinates are derived. These equations introduce the intrinsic or calibration parameter matrix. With Equation (7.7), Equation (7.4) can be used to relate the pixel coordinates and inertial coordinates of the image point $p$,
$$ z_c \begin{pmatrix} u_p \\ v_p \\ 1 \end{pmatrix} = \begin{pmatrix} s_x & s_\theta & o_u \\ 0 & s_y & o_v \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} H^c_0 \begin{pmatrix} x_0 \\ y_0 \\ z_0 \\ 1 \end{pmatrix}. $$
Note that these equations have introduced another scalar parameter, $s_\theta$, that measures how pixels in the CCD array are sheared relative to the vertical and horizontal directions. Combining the focal length with the calibration constants results in
$$ z_c \begin{pmatrix} u_p \\ v_p \\ 1 \end{pmatrix} = K \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} H^c_0 \begin{pmatrix} x_0 \\ y_0 \\ z_0 \\ 1 \end{pmatrix}, $$
in which the camera intrinsic parameter matrix or calibration matrix may be defined as
$$ K = \begin{pmatrix} f s_x & f s_\theta & o_u \\ 0 & f s_y & o_v \\ 0 & 0 & 1 \end{pmatrix}. $$
By defining the projection matrix
$$ \Pi = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}, $$
which extracts the first three coordinates from any $4 \times 1$ vector of homogeneous coordinates, a succinct rule that defines the pixel coordinates in terms of the camera coordinates is written as
$$ z_c \begin{pmatrix} u_p \\ v_p \\ 1 \end{pmatrix} = K\,\Pi \begin{pmatrix} x_c \\ y_c \\ z_c \\ 1 \end{pmatrix}. $$
For completeness, the relationships that use the intrinsic parameter matrix $K$ to express the pixel coordinates in terms of the inertial coordinates are summarized as
$$ z_c \begin{pmatrix} u_p \\ v_p \\ 1 \end{pmatrix} = K\,\Pi\, H^c_0 \begin{pmatrix} x_0 \\ y_0 \\ z_0 \\ 1 \end{pmatrix}. $$
This section derives the interaction matrix or image Jacobian matrix that plays a critical role in the development of control strategies that use camera measurements for feedback. The interaction matrix relates the derivatives of the focal plane coordinates to the velocity and angular velocity of the camera frame in the inertial frame.
The following theorem specifies an explicit representation of the interaction matrix when the focal length is equal to 1.
Equation (7.9) shows that the matrix is a $2 \times 6$ matrix for a single feature point. In general, Theorem 7.1 will be employed for several feature points. Suppose there are $N$ feature points $p_1, \ldots, p_N$. Introduce the $2N \times 1$ vector $s$ and the $3N \times 1$ vector $r$ that are obtained by stacking the image plane coordinates $(u_i, v_i)$ and by stacking the camera coordinates $(x_{c,i}, y_{c,i}, z_{c,i})$ for all the feature points $i = 1, \ldots, N$ as
$$ s = \begin{pmatrix} u_1 \\ v_1 \\ \vdots \\ u_N \\ v_N \end{pmatrix}, \qquad r = \begin{pmatrix} x_{c,1} \\ y_{c,1} \\ z_{c,1} \\ \vdots \\ x_{c,N} \\ y_{c,N} \\ z_{c,N} \end{pmatrix}. $$
The system interaction matrix is obtained by applying Theorem 7.1 for each feature point $p_i$,
$$ \frac{d}{dt} \begin{pmatrix} u_i \\ v_i \end{pmatrix} = L_i \begin{pmatrix} \mathbf{v}_c \\ \boldsymbol{\omega}_c \end{pmatrix}, \qquad i = 1, \ldots, N, $$
or more concisely as,
$$ \dot{s}_i = L_i \begin{pmatrix} \mathbf{v}_c \\ \boldsymbol{\omega}_c \end{pmatrix}, $$
where $\mathbf{v}_c$ and $\boldsymbol{\omega}_c$ denote the velocity of the origin of the camera frame and the angular velocity of the camera frame in the inertial frame. When we stack these equations, we obtain the system interaction matrix
$$ L = \begin{pmatrix} L_1 \\ \vdots \\ L_N \end{pmatrix}, $$
or
$$ \dot{s} = L \begin{pmatrix} \mathbf{v}_c \\ \boldsymbol{\omega}_c \end{pmatrix}. $$
Just as the system interaction matrix $L$ is defined, an equation is also needed for the system of $N$ feature points that is analogous to Equation (7.12). For the feature point $p_i$, fixed in the inertial frame and observed from the moving camera, the camera coordinates satisfy
$$ \dot{r}_i = \begin{pmatrix} -I_{3 \times 3} & \widehat{r}_i \end{pmatrix} \begin{pmatrix} \mathbf{v}_c \\ \boldsymbol{\omega}_c \end{pmatrix}, $$
where $\widehat{r}_i$ is the skew-symmetric matrix associated with the vector $r_i$. Stacking these equations for $i = 1, \ldots, N$ results in
$$ \dot{r} = \begin{pmatrix} -I_{3 \times 3} & \widehat{r}_1 \\ \vdots & \vdots \\ -I_{3 \times 3} & \widehat{r}_N \end{pmatrix} \begin{pmatrix} \mathbf{v}_c \\ \boldsymbol{\omega}_c \end{pmatrix}, $$
or more succinctly,
$$ \dot{r} = D(r) \begin{pmatrix} \mathbf{v}_c \\ \boldsymbol{\omega}_c \end{pmatrix}. $$
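The per point interaction matrix used in the sketch below is the standard unit focal length form from the visual servoing literature (the Chaumette–Hutchinson convention); the sign conventions and symbol names are assumptions and may differ from the exact statement of Theorem 7.1, so the sketch should be read as an illustration of the stacking construction rather than a restatement of the theorem.

```python
import numpy as np

def interaction_matrix_point(u, v, z_c):
    """Standard 2x6 interaction matrix of one feature point for unit focal length.

    Relates (du/dt, dv/dt) to the camera velocity (v_c, omega_c); this is the
    Chaumette-Hutchinson convention, which may differ in sign from other texts.
    """
    return np.array([
        [-1.0 / z_c, 0.0,        u / z_c, u * v,       -(1.0 + u**2),  v],
        [0.0,        -1.0 / z_c, v / z_c, 1.0 + v**2,  -u * v,        -u],
    ])

def system_interaction_matrix(s, r):
    """Stack the 2x6 blocks for N feature points into a 2N x 6 matrix.

    s: length-2N array of image plane coordinates (u_1, v_1, ..., u_N, v_N).
    r: length-3N array of camera coordinates (x_1, y_1, z_1, ..., z_N).
    """
    N = len(s) // 2
    blocks = [interaction_matrix_point(s[2 * i], s[2 * i + 1], r[3 * i + 2])
              for i in range(N)]
    return np.vstack(blocks)
```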
With the construction of the matrices $L$ and $D$ in Equations (7.17) and (7.19) for a system that includes feature points $p_1, \ldots, p_N$ that are fixed in the inertial 0 frame, it is straightforward to pose and solve a number of standard problems in the control of robotic systems using camera based measurements. This section will discuss one such control problem, the image based visual servo (IBVS) control problem.
The IBVS control problem is a specific example of a tracking control problem. First, the IBVS control problem will be defined to specify the goals of the strategy and the measurements that will be used for feedback control.
The derivation of the visual servo control strategy that solves the problem statement in Definition 7.2 is not difficult given the derivation of the matrices $L$ and $D$. Since the tracking error should approach zero asymptotically, the control law can be defined so that the closed loop system has a tracking error $e(t)$ that satisfies the equation
$$ \dot{e}(t) = -\lambda\, e(t), $$
where $\lambda$ is some positive scalar. The solution of Equation (7.23) is given in terms of the exponential function,
$$ e(t) = e^{-\lambda (t - t_0)}\, e(t_0). $$
Consequently, if the closed loop tracking error satisfies Equation (7.23), it will approach zero at an exponential rate. The definition of the tracking error, $e(t) = s(t) - s_d$, can be combined with Equation (7.23) to obtain
$$ L \begin{pmatrix} \mathbf{v}_c \\ \boldsymbol{\omega}_c \end{pmatrix} = -\lambda\,\bigl(s - s_d\bigr). $$
Ideally, Equation (7.24) would be uniquely solvable for the velocity $\mathbf{v}_c$ and angular velocity $\boldsymbol{\omega}_c$. However, the system interaction matrix is not square in general, since it has dimension $2N \times 6$ and it may be that $2N \neq 6$. Depending on the number of system feature points, the matrix equation can be underdetermined, overdetermined, or exactly determined. If the matrix $L$ has fewer rows than columns, $2N < 6$, the system is said to be underdetermined. If the matrix $L$ has more rows than columns, $2N > 6$, the system is said to be overdetermined. If the matrix $L$ has equal numbers of rows and columns, $2N = 6$, the system is said to be exactly determined.
Even if the system is not exactly determined, it is possible to obtain an expression for the control input by using the pseudo inverse $L^{+}$ of the system interaction matrix. When both sides of Equation (7.25) are multiplied by $L^{+}$, an expression for the control input vector is obtained as
$$ \begin{pmatrix} \mathbf{v}_c \\ \boldsymbol{\omega}_c \end{pmatrix} = -\lambda\, L^{+}\bigl(s - s_d\bigr). $$
Before proceeding to the discussion of the closed loop system and its stability, it is important to make several observations about the construction thus far. The entries of the system interaction matrix depend on the image plane coordinates and the depths of the feature points, both of which vary as the camera moves. This implies that the expression in Equation (7.25) is a nonlinear equation.
The following theorem derives the set of coupled, nonlinear ordinary differential equations that characterizes the dynamics of the closed loop system associated with the feedback control law embodied in Equation (7.25).
The actual simulation of the collection of nonlinear ordinary differential equations, written in the form $\dot{x} = f(x)$ where the vector of state variables $x$ collects the image plane coordinates $s$ and the camera coordinates $r$ of the feature points, can use any of a number of standard numerical integration methods for systems of ordinary differential equations. These numerical methods include the family of linear multistep methods. The linear multistep methods include many popular predictor corrector methods, such as the Adams–Bashforth–Moulton method. Another popular family of numerical integration methods includes the self‐starting Runge–Kutta methods. The reader is referred to [2] for a discussion of these methods, as well as other popular alternatives.
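As one way such a simulation might be organized, the sketch below advances the closed loop state with the classical fourth order Runge–Kutta method. It reuses the `system_interaction_matrix` and `ibvs_control` sketches above, and it assumes, as in those sketches, that the state stacks the image plane coordinates $s$ and the camera coordinates $r$ and that the feature points are fixed in the inertial frame; these conventions are assumptions, not statements from the text.

```python
import numpy as np

def skew(a):
    """3x3 skew-symmetric matrix such that skew(a) @ b equals the cross product a x b."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def closed_loop_rhs(x, s_d, gain, N):
    """Right-hand side f(x) of the closed loop IBVS dynamics with state x = (s, r)."""
    s, r = x[:2 * N], x[2 * N:]
    L = system_interaction_matrix(s, r)   # from the earlier sketch
    u = ibvs_control(s, s_d, L, gain)     # commanded camera velocity (v_c, omega_c)
    s_dot = L @ u
    r_dot = np.concatenate([
        np.hstack([-np.eye(3), skew(r[3 * i:3 * i + 3])]) @ u for i in range(N)
    ])
    return np.concatenate([s_dot, r_dot])

def rk4_step(x, dt, s_d, gain, N):
    """One step of the classical fourth order Runge-Kutta method."""
    k1 = closed_loop_rhs(x, s_d, gain, N)
    k2 = closed_loop_rhs(x + 0.5 * dt * k1, s_d, gain, N)
    k3 = closed_loop_rhs(x + 0.5 * dt * k2, s_d, gain, N)
    k4 = closed_loop_rhs(x + dt * k3, s_d, gain, N)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```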
To use such a numerical algorithm to obtain an approximate solution of these equations, it is necessary to specify the initial condition $x(t_0)$. To describe a general procedure to solve for the initial condition $x(t_0)$, the initial and final orientation of the camera frame will need to be described. The basis for the initial camera frame will be denoted by $\{\mathbf{c}_1(t_0), \mathbf{c}_2(t_0), \mathbf{c}_3(t_0)\}$, and the basis for the final camera frame will be denoted by $\{\mathbf{c}_1(t_f), \mathbf{c}_2(t_f), \mathbf{c}_3(t_f)\}$. The position of a point $p_i$ relative to the camera frame in its initial configuration is then
$$ \begin{pmatrix} r_i(t_0) \\ 1 \end{pmatrix} = H^{c}_{0}(t_0) \begin{pmatrix} x_{0,i} \\ y_{0,i} \\ z_{0,i} \\ 1 \end{pmatrix}, $$
and the position of the point $p_i$ relative to the camera frame in its final configuration is
$$ \begin{pmatrix} r_i(t_f) \\ 1 \end{pmatrix} = H^{c}_{0}(t_f) \begin{pmatrix} x_{0,i} \\ y_{0,i} \\ z_{0,i} \\ 1 \end{pmatrix}, $$
where $(x_{0,i}, y_{0,i}, z_{0,i})$ are the coordinates of $p_i$ in the inertial 0 frame.
The following steps can be used to calculate the initial condition $x(t_0)$. First, the camera coordinates $r_i(t_0)$ of each feature point are computed from the homogeneous transform that defines the initial configuration of the camera frame. Next, the corresponding image plane coordinates $s_i(t_0)$ are computed from the projection equations; this calculation can also be written more compactly in terms of the homogeneous coordinates of the image points. Once these steps have been completed, the initial condition $x(t_0)$ is constructed by stacking the vectors $s_i(t_0)$ and $r_i(t_0)$ for $i = 1, \ldots, N$ to form the vectors $s(t_0)$ and $r(t_0)$, and subsequently assembling
$$ x(t_0) = \begin{pmatrix} s(t_0) \\ r(t_0) \end{pmatrix}. $$
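A sketch of this assembly is shown below. It assumes that the inertial coordinates of the feature points and the homogeneous transform of the initial camera configuration are available, and that the image plane coordinates follow the unit focal length convention used earlier; the names are illustrative.

```python
import numpy as np

def initial_condition(points_0, H_c0_initial):
    """Assemble x(t0) = (s(t0), r(t0)) for the closed loop IBVS simulation.

    points_0: list of 3-vectors, inertial (0 frame) coordinates of the feature points.
    H_c0_initial: 4x4 homogeneous transform from 0-frame coordinates to camera-frame
    coordinates at the initial time.
    """
    s0, r0 = [], []
    for p_0 in points_0:
        r_i = (H_c0_initial @ np.append(p_0, 1.0))[:3]  # camera coordinates at t0
        s0.extend([r_i[0] / r_i[2], r_i[1] / r_i[2]])   # unit focal length projection
        r0.extend(r_i)
    return np.concatenate([np.array(s0), np.array(r0)])
```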
Chapter 6 describes several techniques for deriving feedback controllers for fully actuated robotic systems. All of these control strategies are derived so that the generalized coordinates $q(t)$ and their derivatives $\dot{q}(t)$ approach a desired trajectory as $t \to \infty$, that is, $q(t) \to q_d(t)$ and $\dot{q}(t) \to \dot{q}_d(t)$. For setpoint controllers the desired trajectory is a constant trajectory for which $\dot{q}_d(t) \equiv 0$, while trajectory tracking controllers consider time varying desired trajectories. Since all of the formulations in this book have selected the generalized coordinates to be either joint angles or joint displacements, the techniques derived in Chapter 6 are sometimes referred to as methods of joint space control. That is, the performance criterion or objective of the controllers discussed in Chapter 6 is the minimization of an error expressed explicitly in terms of the joint degrees of freedom.
Frequently the goal or objective to be achieved via feedback control is naturally expressed in terms of variables associated with the problem at hand, but these variables are not easily written in terms of the joint variables. One example is the design of a controller that locates the tool or end effector at some position in the workspace. In this case, suppose that the position of the tip of the tool is given by the vector
$$ x(q) = \begin{pmatrix} x_1(q) \\ x_2(q) \\ x_3(q) \end{pmatrix}. $$
The control strategy should guarantee that $x_1 \to x_{1,d}$, $x_2 \to x_{2,d}$, and $x_3 \to x_{3,d}$ as $t \to \infty$. However, the coordinates $x_1$, $x_2$, $x_3$ are not the joint variables for the robotic manipulator, and the strategies discussed in Chapter 6 are not directly applicable. In principle, it is usually possible to reformulate the equations of motion in terms of the task space variables, but this can be a tedious process for a realistic robotic system. The set of variables $x_1$, $x_2$, $x_3$ is an example of a collection of task space variables for the problem at hand. Fortunately, there are many techniques that can be used to derive task space controllers. One approach employs the task space Jacobian matrix and is summarized in the following theorem.
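As a preview of the idea, the following is a minimal sketch, not the theorem itself, of one common realization of the Jacobian based approach: an outer loop computes a task space acceleration command, an inner loop resolves it to joint accelerations through the task space Jacobian (assumed square and nonsingular here), and a computed torque law cancels the manipulator dynamics. The symbols M, C, g, J, and Jdot denote model quantities supplied by the user and are assumptions of the sketch.

```python
import numpy as np

def task_space_computed_torque(q, qdot, x, xdot, x_d, xdot_d, xddot_d,
                               M, C, g, J, Jdot, Kp, Kd):
    """Sketch of a task space computed torque law.

    Outer loop: PD plus feedforward acceleration command in task space coordinates.
    Inner loop: resolve the command to joint accelerations through the task space
    Jacobian, using x_ddot = J(q) q_ddot + Jdot(q, qdot) qdot, then cancel the
    manipulator dynamics (feedback linearization). Assumes J(q) is square and
    invertible at the current configuration.
    """
    a_x = xddot_d + Kd @ (xdot_d - xdot) + Kp @ (x_d - x)    # task space command
    a_q = np.linalg.solve(J(q), a_x - Jdot(q, qdot) @ qdot)  # joint acceleration command
    return M(q) @ a_q + C(q, qdot) @ qdot + g(q)             # applied joint torques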
The previous section discussed how the inner loop–outer loop architecture of the computed torque control strategy introduced in Section 6.6 can be used to derive tracking or setpoint controllers in task space variables. This section will illustrate that this approach can be used to design controllers based on typical camera observations. The design of controllers based on task space coordinates associated with camera measurements yields stability and convergence results that are more realistic than those described in Section 7.3 of this chapter. Recall that Section 7.3 discusses the visual servo control problem, where it is assumed that the input to the controller is the vector of velocities and angular velocities of the camera frame. The value of this input vector is selected so that
$$ \begin{pmatrix} \mathbf{v}_c \\ \boldsymbol{\omega}_c \end{pmatrix} = -\lambda\, L^{+}\, e, $$
where $e$ is the tracking error in the image plane of the feature points. However, the velocities and angular velocities of the camera frame are not the output of an actuator; they are response variables. Their values depend on the forces and moments delivered by the actuators. If the input torques and forces can be chosen such that the velocities and angular velocities satisfy Equation (7.40) exactly, then the discussion of stability and convergence in Section 7.3 applies. In practice, it is not possible to achieve the exact specification of the velocities and angular velocities in Equation (7.40).
Figure 7.50 PUMA robot with attached camera.
For the following two problems let the PUMA robot be configured with
(a) Suppose that
is the rotation matrix in which the
frame is rotated by
about the
axis, and that
Which points in the world are mapped to the point
in the canonical focal plane?
(b) Suppose that
is the rotation matrix in which the
frame is rotated by
about the
axis, and that
To what point
in the canonical image plane is the point
having inertial coordinates
mapped?
Figure 7.51 Spherical wrist with mounted camera.
Suppose that the location and orientation of the camera relative to the 3 frame of the spherical wrist is given by the constant homogeneous transformation
,
(a) Suppose that
is the rotation matrix parametrized by the 3‐2‐1 Euler angles that is used with
to map frame 0 into the
frame, where the yaw
, pitch
, and roll
are
Suppose further that
m. Which points in the workspace are mapped to the point
in the canonical retinal plane?
(b) Suppose that
is the rotation matrix parametrized by the 3‐2‐1 Euler angles used to map frame 0 into the
frame, where the yaw
, pitch
and roll
are
Suppose further that
. To what point
in the canonical retinal plane is the point
having inertial coordinates
mapped?
Figure 7.55 SCARA robot and task space trajectory tracking.