Chapter 6
Control of Robotic Systems

Chapters 2, 3, 4, and 5 have described a large collection of tools that can be used to study the kinematics and derive the equations of motion of robotic systems. This chapter poses and solves several classical problems that arise in the control of robotic systems. This is a vast topic, and one with a long history. An overview of some of the most commonly encountered robotic control problems is presented. This chapter derives joint space, full state feedback control strategies, while Chapter discusses task space feedback control methods. The latter class is well suited to applications in vision-based control of robots. Upon completion of this chapter, the student should be able to:

  • Define the essential ingredients of a control problem for a robotic system.
  • State various definitions of stability and apply them to robotic systems.
  • State Lyapunov's direct method and apply it to study the stability of robots.
  • Formulate computed torque or inverse dynamics controllers for robotic systems.
  • Discuss the structure of inner and outer loop controllers for robotic systems.
  • Formulate controllers based on passivity principles for robotic systems.

6.1 The Structure of Control Problems

Many of the common robotic systems encountered in this book have been shown to be governed by equations that have the form

(6.1) equation

where images is an images ‐vector of generalized coordinates, images is the images generalized mass or inertia matrix, images is the images ‐vector of nonlinear functions of the generalized coordinates and their derivatives, and images is an images ‐vector of actuation torques or forces. If images is invertible, it is always possible to rewrite these second order governing equations as a system of first order ordinary differential equations. First define

equation

The resultant governing equations can then be written as

(6.2) equation

This system is the desired set of first order nonlinear ordinary differential equations subject to the initial condition images at the time images .
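The conversion to first order form is mechanical and is how these models are usually prepared for numerical simulation. The sketch below (Python) illustrates the idea; the single link pendulum used as the plant, and all parameter values, are illustrative assumptions rather than a model taken from this chapter.

```python
# Minimal sketch: rewriting M(q) qdd + n(q, qd) = tau as the first order
# system x = [q, qd], xdot = f(t, x), and integrating it numerically.
# The single link pendulum is only a stand-in for the general robot model.
import numpy as np
from scipy.integrate import solve_ivp

m, l, g = 1.0, 0.5, 9.81           # assumed link mass, length, gravity

def M(q):                          # generalized inertia matrix (1 x 1 here)
    return np.array([[m * l**2]])

def n(q, qd):                      # nonlinear terms (gravity only here)
    return np.array([m * g * l * np.sin(q[0])])

def tau(t, q, qd):                 # open loop input: zero torque
    return np.zeros(1)

def f(t, x):                       # first order form of Equation (6.2)
    q, qd = x[:1], x[1:]
    qdd = np.linalg.solve(M(q), tau(t, q, qd) - n(q, qd))
    return np.concatenate([qd, qdd])

x0 = np.array([0.3, 0.0])          # initial condition at the initial time
sol = solve_ivp(f, (0.0, 5.0), x0, max_step=0.01)
print(sol.y[0, -1])                # joint angle at the final time
```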

Control methods for robotic systems will be studied using both the second order form in Equation (6.1) and the first order form in Equation (6.2). The first order form is particularly useful in the analysis of stability, a topic covered in Section 6.2. The expression of the equations of motion in second order form is convenient in deriving some specific control laws. The discussions of computed torque controllers in Section 6.6 and of controllers based on passivity principles in Section 6.8 are based on Equation (6.1).

The ultimate goal in Chapters and was to derive the governing equations of motion in the general form shown in Equations (6.1) or (6.2). In applications, numerical or analytical methods are used to solve for the trajectory of the generalized coordinates images for images , given a prescribed set of input functions images for images . Completion of this task solves the classical forward dynamics problem of robotics. Problems of control theory seek the solution to a different problem: given some desired goal, can the input images be chosen for images so that the system achieves that goal? Many different control problems have been studied over the years. Control strategies are usually categorized depending on (1) the goal the control strategy seeks to attain, and (2) the method used to reach that goal. Common goals include disturbance rejection, error minimization, trajectory tracking, or system stabilization.

6.1.1 Setpoint and Tracking Feedback Control Problems

Two types of goals will be considered in this chapter. The first type is position control or setpoint control, which seeks to drive the robotic system to a desired state. The goal in setpoint control is to find the actuation input images for images such that the state approaches some fixed, desired state

(6.3) equation

as images . A typical problem in setpoint control might seek to find the controls images that position and orient the end effector of a robotic arm in some prescribed configuration in the workspace. The second type of control problem studied in this chapter is that of tracking control. The goal of a trajectory tracking controller is to find a control input images such that

equation

as images . The mapping images is a vector of desired, time varying trajectories. As the name suggests, a problem in tracking control might seek to find the control inputs that steer a radar antenna or camera so that it always points at some moving target. It is possible to view a setpoint control law as a special case of a tracking control law. However, it can be easier to state conditions that guarantee that the setpoint control objective is achieved, and for this reason these two problems are studied independently.

6.1.2 Open Loop and Closed Loop Control

In addition to the goal that defines a particular control strategy, the means for achieving that goal differentiates control techniques. One of the most fundamental differences among control strategies distinguishes between open loop control and closed loop control methods. This distinction is based on the structure of the control input images . An open loop control method is one that chooses the control input images to be some explicit function of the time images alone. If, on the other hand, the actuation input images is given as some function of the states images and perhaps time images ,

equation

the vector images defines a (full state) closed loop control or feedback control strategy. Feedback controllers have many desirable properties. Two important reasons that they are attractive are that they are amenable to real time implementation using measurements of the output, and that they reduce the sensitivity of the system to disturbances. This book will only study full state feedback controllers. Sections 6.6, 6.7, and 6.8 discuss several approaches for deriving setpoint or tracking feedback controllers.
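As a concrete, purely illustrative contrast, an open loop input is a function of time alone, while a feedback law maps the measured state (and possibly time) to the input. The PD form used below is only one possible choice of feedback and is not a control law prescribed by this chapter.

```python
import numpy as np

def u_open_loop(t):
    # Open loop: the input depends on time only, e.g. a prescribed torque profile.
    return 2.0 * np.sin(t)

def u_feedback(x, t, q_des=1.0, qd_des=0.0, kp=20.0, kd=5.0):
    # Closed loop: the input is a function of the measured state (and possibly time).
    q, qd = x
    return kp * (q_des - q) + kd * (qd_des - qd)
```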

6.1.3 Linear and Nonlinear Control

It has been emphasized in this book that the governing equations for most robotic systems are nonlinear: it is an unusual case when they happen to be linear. Chapter showed that Newton–Euler formulations can yield systems of nonlinear ordinary differential equations (ODEs) or differential‐algebraic equations (DAEs). Chapter demonstrated that Hamilton's principle or Lagrange's equations also yield systems of nonlinear ODEs or DAEs. In most undergraduate curricula, the first, and often only, discussion of control theory is restricted to linear systems. A powerful and comprehensive theory of linear control has been developed over the past several decades. The focus on linear control theory in undergraduate programs is justified: it enables the study of linear ODEs that arise in numerous problems from applications in mechanical design, heat transfer, electrical circuits, and fluid flow.

The development of control strategies for nonlinear systems, such as those studied in robotics, is significantly more difficult than that for linear systems. One source of trouble is the fact that the study of stability is much more complicated for nonlinear systems than for linear systems. In addition, the underlying structure of linear control systems is easier to describe than that of nonlinear systems. Each of these issues will be discussed briefly.

The concepts of stability and asymptotic stability (introduced in Definitions 6.2 and 6.3) for general nonlinear systems are local definitions. This means that the assurance that trajectories starting close to an equilibrium remain nearby for all time is guaranteed to hold only when the initial conditions reside in some neighborhood of the equilibrium under consideration. It can be the case that the neighborhood in which the stability guarantees hold is a very small set. If the initial conditions are too far from the equilibrium, and lie outside this neighborhood, the guarantee of stability does not hold. In contrast, for linear systems, the neighborhood of the equilibrium is always the whole space. This fact means that for linear systems, local stability implies global stability. It can be a formidable task to prove that a nonlinear system satisfies conditions of global stability. When designing controllers, assurances of global stability and convergence are the most desirable.

Discussions of stability, including the considerations above, are central in the synthesis of control strategies. The question of whether a system can be rendered stable through the introduction of feedback control is made rigorous via the definition of stabilizability of dynamic systems. In addition to stabilizability, there are other qualitative properties of dynamic systems that have been defined that are essential to understanding the feasibility of certain control design tasks. For example, the definition of controllability makes clear when it is possible to drive or steer a system to certain configurations. The definition of observability describes the ability to reconstruct the state from a specific set of system observations or measurements. There is a rich theory that has been developed for linear systems that provides practical means for determining stabilizability, controllability, and observability. These techniques can be applied to many realistic problems and are now standard tools available in control synthesis software. For certain smooth nonlinear systems the corresponding notions of stabilizability, controllability, and observability have been defined, but the application of these principles to a specific nonlinear system can be exceptionally difficult. The interested student can consult [21] for a detailed discussion.

For the reasons above, the derivation of a control strategy for a nonlinear system, such as a typical robotic system, can be significantly more challenging than that for a linear system.

Fortunately, the structure of the governing equations for many robotic systems is such that it is often possible to define a feedback control law that transforms the set of nonlinear ODEs into a system of linear ODEs. This is possible, as will be shown, for robotic manipulators that constitute a ground-based kinematic or serial chain. It is important to realize that this transformation, which chooses a feedback control to change or modify a set of governing nonlinear ODEs into a system of linear ODEs, cannot be carried out for an arbitrary nonlinear system. It is the special form of certain robotic systems that makes this approach feasible. The question of when this strategy is possible for general systems is studied systematically in the control theory community as the problem of feedback linearization. A good treatment of the problem can be found in [21]. This approach is also known as the method of dynamic inversion or as computed torque control in the robotics literature. Overviews of these approaches for robotic systems are found in [15] and [30]. Most descriptions of feedback linearization cast the theory in terms of systems of first order ODEs, while descriptions of computed torque control or dynamic inversion retain the structure of second order ODEs that appear directly in either Newton–Euler or analytical mechanics formulations of dynamics. This observation makes it possible to view the approaches derived within the context of dynamic inversion or computed torque control as special cases of the theory of feedback linearization.

6.2 Fundamentals of Stability Theory

This chapter discusses the fundamentals of how to construct and analyze methods of feedback control for robotic systems. The single most important requirement of any control strategy is that the dynamic system that results from the use of the control law is stable. While the relative merit of different control strategies can be quantified via different measures of performance, any viable control technique must yield a stable system.

Stability theory can be framed using different levels of abstraction, as well as under various operating hypotheses. Stability in metric spaces is studied in [40], for example, whereas a stochastic framework is employed in [32]. The stability of infinite dimensional systems is considered in [13], and finite dimensional systems are studied in [19]. Popular treatments that develop stability theory as it is applied to the control of systems of ordinary differential equations can be found in [28, 42], or [44]. These last three references provide a good background for the material in this section, as well as additional advanced material of general interest to robotics control. Finally, the textbooks [3,30] discuss how the general techniques in stability theory can be tailored to specific classes of robotic systems. The discussion of stability here begins by introducing a few background definitions.

It is important to note in this definition that a motion or trajectory associated with the non‐autonomous system in Equation (6.2) depends parametrically on the initial time images and initial condition images . Sometimes this dependency is emphasized by writing

equation

An equilibrium images of the system, being a constant trajectory that does not depend on time, must satisfy the equations images and images . That is, the trajectory associated with an equilibrium starts at images and remains at images for all time images . The following definition makes precise the notion of stability of an equilibrium; it takes the form of a standard ε–δ statement.

Definition 6.2 imposes requirements only in some neighborhood of the equilibrium under consideration, and for this reason it is sometimes said to define the local stability of an equilibrium. If the radius images in the definition can be selected to be arbitrarily large, the equilibrium is said to be globally stable. In this book, any discussion of stability may be assumed to be a discussion of local stability, and any discussion of global stability will be explicitly labeled as such.

Figure 6.1 illustrates a graphical interpretation of the definition of stability. As shown in the figure, the equilibrium images can be visualized as a trajectory that starts at the fixed point images in the space of initial conditions and is extended as a constant function for all time images . The parameter images defines a tube of radius images centered about the constant trajectory images that is extruded along the time axis. The system is stable if for any images it is possible to find a disk of radius images about the initial condition images such that any trajectory starting in the disk remains inside the tube of radius images for all time. It follows that if the neighborhood of the equilibrium is chosen to be small enough, all trajectories starting in the neighborhood remain bounded for all time.
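The tube picture can be probed numerically. The sketch below simulates an undamped single link pendulum (an assumed model, not one of the chapter's examples) from several initial conditions on a small disk about the equilibrium at the origin and checks that the resulting trajectories remain inside a larger tube. This is a plausibility check of the definition, not a proof.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, l, g = 1.0, 0.5, 9.81

def f(t, x):                       # undamped, unforced single link pendulum
    q, qd = x
    return [qd, -(g / l) * np.sin(q)]

delta, eps = 0.05, 0.3             # disk of initial conditions, radius of the tube
rng = np.random.default_rng(0)
for _ in range(20):
    v = rng.standard_normal(2)
    x0 = delta * v / np.linalg.norm(v)     # start on the boundary of the delta disk
    sol = solve_ivp(f, (0.0, 20.0), x0, max_step=0.01)
    assert np.all(np.linalg.norm(sol.y, axis=0) < eps)   # stays inside the tube
```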


Figure 6.1 Graphic representation of stability.

The application of this definition becomes more clear in the following example.

The analysis carried out in Example 6.1 is typical of the reasoning employed in studying the stability of a nonlinear system. In particular, the example shows that the concept of stability is associated with a specific trajectory or equilibrium. The ODE representing the robot in Example 6.1 has an infinite number of stable equilibria, as well as an infinite number of unstable equilibria. If the state space of the dynamical system representing the robot is selected to be a manifold, then there are a finite number of equilibria; see [10]. The following definition introduces two stronger forms of stability, asymptotic stability and exponential stability. These notions of stability are used in the design of both setpoint and tracking controllers and play an important role in this discussion.

Note that the definitions of asymptotic and exponential stability of equilibria require that the equilibria be stable. Figure 6.3 depicts the trajectories of a system that satisfies images for any initial condition. That is, all trajectories are attracted to the equilibrium at the origin. However, the equilibrium at the origin is not stable, and hence it is not asymptotically stable. It should be emphasized that Definition 6.3 requires that an asymptotically stable equilibrium be both stable and attractive: attractivity alone is not enough.


Figure 6.3 A dynamical system whose trajectories are attracted to the origin but that is not stable at the origin.

A stable equilibrium is asymptotically stable if a disk of radius images can be found for which trajectories that start at any initial condition in that disk converge to the equilibrium as images . An equilibrium is exponentially stable if it is asymptotically stable and converges to the equilibrium at an exponential rate. Again, it is emphasized that Definition 6.2 imposes conditions only in a neighborhood of radius images about the equilibrium, and for this reason it is sometimes said that the equilibrium is locally asymptotically stable. If the radius images can be chosen to be arbitrarily large, the equilibrium is said to be globally asymptotically stable. In this book an asymptotically stable equilibrium is understood to mean a locally asymptotically stable one, and globally asymptotically stable equilibria are explicitly labeled as such.

The next example shows that these two types of stability appear naturally in typical control problems.

6.3 Advanced Techniques of Stability Theory

The previous section introduced the definitions of stability, asymptotic stability, and exponential stability. These definitions were applied directly in Examples 6.1 and 6.2 to study a simple robotic system. In Example 6.2, the stability of the equilibrium at the origin was studied by explicitly solving the closed loop governing equations. In Example 6.1, the equation of motion was multiplied by images and integrated in time to find a conserved quantity. The critical step wrote the time derivative of the total mechanical energy images in the form

equation

which was integrated in time to yield

(6.8) equation

The study of the stability of different equilibria is straightforward using this conserved or invariant quantity.
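In the same spirit, conservation of the total mechanical energy can be verified numerically along a simulated trajectory: the computed energy stays at its initial value up to integration error. The single link model and parameter values below are assumptions used only for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, l, g = 1.0, 0.5, 9.81

def f(t, x):                       # undamped, unforced single link pendulum
    q, qd = x
    return [qd, -(g / l) * np.sin(q)]

def energy(x):                     # total mechanical energy: kinetic + potential
    q, qd = x
    return 0.5 * m * l**2 * qd**2 + m * g * l * (1.0 - np.cos(q))

sol = solve_ivp(f, (0.0, 10.0), [1.0, 0.0], rtol=1e-9, atol=1e-9)
E = np.array([energy(x) for x in sol.y.T])
print(np.max(np.abs(E - E[0])))    # nearly zero: E(t) = E(t0) along the trajectory
```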

The study of the stability of control methods for realistic robotic systems can be sufficiently difficult that it is not feasible to solve the closed loop governing equations analytically. It is therefore not practical, and seldom even possible, to use an explicit analytical solution to design and study a feedback control law.

Fortunately, the strategy in which conserved quantities such as energy are identified to study stability can be generalized and applied to many practical robotic systems. These generalizations of energy principles are applied by invoking Lyapunov's direct method. It is now a standard practice in the study of robotic systems to use variants of Lyapunov's direct method to analyze their stability. The study of the finer points of this approach to stability theory extends beyond the scope of this book. This book will introduce only those definitions and theorems that find the most frequent use in applications; no proofs of the underlying theorems are given. The texts [44] or [28] can be consulted for the proofs and for an expanded discussion of Lyapunov theory as it is applied to systems of ODEs. It is also worth observing that this framework has been extended to broader classes of abstract dynamic systems. A good overview can be found in [40].

6.4 Lyapunov's Direct Method

The theorems of Lyapunov's direct method introduce Lyapunov functions, which constitute the principal tools for the study of stability. The discussion begins by defining useful ways of describing the growth, or decay, of functions.

Examples of class images functions are readily available. They are positive functions that pass through the origin and are non‐decreasing. The functions images or images are class images functions, as are images for images . Some of these functions are depicted in Figure 6.4.

Figure 6.4 Examples of class K functions, f(x) = x^p for 0 < p < ∞.

The collection of class images functions is used to define notions of positivity and negativity that are suitable for the study of stability. It will be shown that stability and asymptotic stability are guaranteed if a Lyapunov function images can be identified that is (locally) positive definite and whose time derivative images is (locally) negative definite. The definitions below establish what is meant when a Lyapunov function is said to be positive and its derivative negative.

With this definition, one of the most common forms of Lyapunov stability theorems may be stated. This theorem will be the principal tool used in this book for the study of stability of robotic systems.

It should be noted that the above theorem is stated for an equilibrium located at the origin. This is not a restriction in practice. Analysis of a non‐zero equilibrium begins with a change of variables to shift the equilibrium to the origin and define a new set of equations as required in Theorem 6.1.

In order to guarantee global asymptotic stability (i.e. images ), an additional condition of radial unboundedness must be imposed on the Lyapunov function: as the norm of the state approaches infinity, the Lyapunov function must also approach infinity. Additional details may be found in [28].
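As a minimal illustration of how these conditions are checked, consider the assumed linear system below (not a robot model): the candidate function is positive definite and radially unbounded, and its derivative along trajectories is negative definite, so the origin is globally asymptotically stable.

```python
# For xdot = A x with A = [[-1, 1], [-1, -1]], the candidate V(x) = x.T x is
# positive definite and radially unbounded, and its derivative along
# trajectories, Vdot = x.T (A + A.T) x = -2 x.T x, is negative definite.
import numpy as np

A = np.array([[-1.0, 1.0], [-1.0, -1.0]])

def V(x):
    return float(x @ x)

def Vdot(x):                       # d/dt V along solutions of xdot = A x
    return float(x @ (A + A.T) @ x)

rng = np.random.default_rng(1)
samples = rng.standard_normal((1000, 2))
assert all(V(x) > 0 and Vdot(x) < 0 for x in samples if np.linalg.norm(x) > 1e-9)
```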

A function images that has the properties noted in Theorem 6.1 is a Lyapunov function. The most difficult task in using Lyapunov's direct method can be determining a candidate Lyapunov function. Fortunately, for many robotic systems, there are often good candidates. Researchers have derived, categorized, and documented examples of Lyapunov functions for many classes of robotic systems. The reader can see [15] or [30] for examples. Many Lyapunov functions can be derived from, or related to, conserved or energy‐like quantities. The following example is typical in that the Lyapunov function is chosen to be the total mechanical energy.

6.5 The Invariance Principle

The definitions and theorems introduced in the last section that constitute Lyapunov's direct method can be applied to many robotic systems. Several examples throughout this chapter will show that they can be employed directly for the derivation and study of control methods. For example, when seeking to design a controller for a robotic system, it is often desirable to establish some form of asymptotic stability. In the example of setpoint control, a controller is desired such that the joint variables and their derivatives approach some desired constant value as images ,

equation

This condition might correspond to the task of positioning the end effector of a kinematic chain at a prescribed location and orientation. The problem of tracking control seeks a feedback function that causes the joint variables or their derivatives to track some desired trajectories as images ,

equation

In both of these problems, the goal of the control strategy can be cast as a requirement in terms of the asymptotic stability of the error images and its derivative images ; namely, that

equation

as images in both cases.

The technique for establishing asymptotic stability via Lyapunov's direct method requires that the Lyapunov function images is positive, and that the derivative images is negative, for all states in some neighborhood of the equilibrium under consideration. Moreover, the most useful control designs would reach such conclusions for all states, and not just in some neighborhood of the equilibrium. It is always preferable to obtain global, as opposed to local, guarantees of stability in control design. It is usually not difficult to guarantee that images is locally positive definite. Energy or energy‐like quantities often exist for robotic systems that are positive and non‐decreasing, and these are often used in constructing Lyapunov functions for the system. However, it is frequently the case that images is negative semi‐definite, with

equation

for all images , but not locally negative definite. In such cases Lyapunov's direct method only guarantees stability of an equilibrium and makes no claim regarding asymptotic stability. The following example is typical of this situation and is a common occurrence in realistic applications.

There are various techniques to overcome the difficulties associated with a Lyapunov function whose derivative is negative semi‐definite but not negative definite. The most popular method is based on LaSalle's invariance principle, which requires the definitions of a positive invariant set, a weakly invariant set, and an invariant set.

Definition 6.6 stipulates that a set images is positive invariant if every trajectory that starts in images stays in images for all time images . The set images where images is an equilibrium is one example of a positive invariant set. A set images is weakly invariant if for each images there is a trajectory images (defined over all time images ) that passes through images . A set images is invariant with respect to the system in Equation (6.9) if both images and the complement of images are positive invariant. See Figure 6.6 for a graphical interpretation of positive invariance, weak invariance, and invariance.


Figure 6.6 Graphical interpretation of: (a) positive invariance: images for all images and images . (b) Weak invariance: images is positive invariant and for each images , there exists a trajectory defined for images that passes through images . (c) Invariance: images and images are positive invariant.

It is important to observe that LaSalle's theorem holds for systems of autonomous ODEs, that is, systems that do not depend explicitly on time. The conclusion of LaSalle's principle is that trajectories are attracted to the largest weakly invariant subset contained in

equation

Suppose that images is the largest weakly invariant subset contained in images . A trajectory is attracted to images provided

equation

as images where images is the distance from images to the set images

equation

While the conclusion of LaSalle's principle is of interest in its own right, it is usually employed in control applications by showing that the largest invariant subset images contains a single element images . Then the invariance principle implies that

equation

The conclusion is precisely what is required to show that a stable equilibrium is in fact asymptotically stable. Example 6.5 applies this strategy in a typical robotics problem.

6.6 Dynamic Inversion or Computed Torque Control

Chapters and showed that the governing equations for a natural system can be written in the form

(6.11) equation

where images is an images ‐vector of generalized coordinates, images is an images generalized mass or inertia matrix, images is an images ‐vector of nonlinear contributions to the equations of motion, and images is an images ‐vector of actuation torques or forces. The vector images contains contributions from the potential energy images of the system as well as the Coriolis and centripetal terms. The vector images has been shown in Chapter to have the specific structure

(6.12) equation

where images is an images generalized damping matrix. Since the generalized inertia matrix images is symmetric and positive definite, it is always invertible. Therefore, it is possible to solve for the vector of second derivatives

equation

The computed torque control law selects the actuation torques to be

(6.13) equation

where images is a new, as of yet undetermined, control input vector. With this choice of the actuation torques, the governing equations become

(6.14) equation

Equation (6.14) is a linear system that has been obtained from the nonlinear Equations (6.11). A nonlinear control law is defined in Equation (6.13) that transforms the system of governing nonlinear ODEs in Equation (6.11) into a system of linear ODEs in Equation (6.14). All of the rich theory that has been developed in linear control theory can now be brought to bear on the system in Equation (6.14). There is a wide collection of control functions images that can be selected in Equation (6.14) that yield specific desirable behavior in the unknown generalized coordinates images . The next theorem discusses one popular feedback strategy that achieves tracking control. The control images is selected so that the generalized coordinates and their derivatives asymptotically track some desired variables images and their derivatives images .
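A compact sketch of this structure is given below: the inner loop cancels the nonlinear dynamics as in Equation (6.13), and the outer loop supplies a PD plus feedforward input. The specific gain form, the placeholder model functions, and the single link example are assumptions used for illustration and are not a reproduction of the chapter's Equation (6.15). With exact cancellation, this choice reduces the tracking error dynamics to a linear, exponentially stable form for suitable positive definite gain matrices.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M_fn, n_fn, Kp, Kd):
    # Outer loop: a new input v built from PD feedback on the tracking error
    # plus the desired acceleration as a feedforward term.
    e, ed = q_des - q, qd_des - qd
    v = qdd_des + Kd @ ed + Kp @ e
    # Inner loop (dynamic inversion): cancel the nonlinear dynamics exactly.
    return M_fn(q) @ v + n_fn(q, qd)

# Illustrative single link model standing in for M(q) and n(q, qd).
m, l, g = 1.0, 0.5, 9.81
M_fn = lambda q: np.array([[m * l**2]])
n_fn = lambda q, qd: np.array([m * g * l * np.sin(q[0])])
Kp, Kd = 100.0 * np.eye(1), 20.0 * np.eye(1)

tau = computed_torque(np.array([0.2]), np.zeros(1),
                      np.array([1.0]), np.zeros(1), np.zeros(1),
                      M_fn, n_fn, Kp, Kd)
```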

If the physical system is instrumented so that images and images can be measured, it is possible to calculate the actuator forces and moments in images via Equation (6.13) in the implementation of the closed loop feedback control law. It is for this reason that this feedback law is called the computed torque control law. The choice of control in Equation (6.13) is also said to be derived from dynamic inversion, since the control input images can be interpreted as the solution of a classical inverse dynamics problem. This interpretation is explained next.


Figure 6.9 Spherical robot center of mass offsets.

The forward dynamics problem associated with the governing equations in Equation (6.11) is the same problem studied in Chapters and . In this problem the actuation forces images are given, and Equation (6.11) is solved for the second derivatives of the generalized coordinates images . In other words, the problem of forward dynamics uses Equation (6.11) to define a mapping from inputs to outputs

equation

If the control in Equation (6.13) is chosen, images is obtained. When the result is substituted into Equation (6.11),

(6.19) equation

Now Equation (6.19) is solved for the actuation torques from the vector images . The inverse mapping is defined consequently as

equation

The computed torque control is also known as the control determined by dynamic inversion.

This methodology admits an implementation of the control strategy in terms of a well-known architecture that combines a nonlinear compensator and an outer loop controller in a natural way. The structure of the overall system is depicted in Figure 6.7. In this figure the governing equations of the robot are embodied in the block labeled “Robotics EOM”. The inputs to this block are the actuation inputs images , and the outputs of the block are the generalized coordinates images and their derivatives images . The nonlinear compensator in the block labeled “dynamic inversion” calculates the actuator inputs from the input images using Equation (6.19). This nonlinear transformation is achieved via the solution of the inverse dynamics problem. Finally, the outer loop controller takes as feedback the desired trajectories images and their derivatives images and the generalized coordinates images and their derivatives images , and from these calculates the control input images . In Theorem 6.4 the outer loop controller calculates images from Equation (6.15). This calculation is carried out in terms of a linear gain matrix acting on the tracking and tracking rate error, images . In general, this matrix multiplication can be replaced by a suitable transfer function to implement more general classes of outer loop control.


Figure 6.7 Architecture of computed torque control.

6.7 Approximate Dynamic Inversion and Uncertainty

Section 6.6 presented the fundamentals of dynamic inversion or computed torque control. This approach to the control of robotic systems is one of the most popular starting points for control design. The methodology is applicable to a reasonably large collection of systems, and can also be used to motivate and understand alternative control schemes. Still, one notable drawback of this approach is that the exact cancellation of unwanted terms requires knowledge of the explicit form of the nonlinearities that appear in the governing equations. Since these nonlinear terms depend parametrically on the link masses, moments of inertia, and products of inertia, the approach also requires exact knowledge of these constants. In practice, the exact values of these constants are not known, and the introduction of a computed control torque never achieves the desired exact cancellation. Just as importantly, some applied forces to which the robotic system is subject are very difficult to determine in principle or in practice. Friction, dissipative forces and moments, and nonlinear effects like backlash and hysteresis fall into this category.

The lack of knowledge of the exact form of the nonlinearities is one reason that the analysis described in Section 6.6 for the computed control torques is at best an idealization. There is always some mismatch in practice among the terms to be canceled using a computed torque control law. This section will extend the analysis and allow for the possibility that the actuation vector in Equation (6.13) is only approximately equal to the computed torque control. Suppose now that the control input is given by

(6.23) equation

where images and images are approximations to the generalized mass or inertia matrix images and nonlinear term images , respectively, in Equation (6.13). The form of the governing equations will also be generalized; suppose that the equations of motion for the robot are

(6.24) equation

and that images denotes an unknown disturbance torque. The governing equations can be rewritten using the controller (6.23) derived from approximate dynamic inversion as

(6.25) equation

Equation (6.25) can be written in the compact form images , where images is a measure of the mismatch in exact cancellation

equation

and images and images measure the approximation error for the generalized inertia matrix and nonlinear vector images and images . Equations (6.25) reduce to the form achieved via exact cancellation in Equation (6.14) when images , images and images , as expected.

Many controllers can be derived using the framework described above. One frequently used controller cancels the gravity terms only, approximates the generalized inertia matrix as the identity, and uses proportional‐derivative (PD) control for the outer loop. This controller is able to drive the robot so that it approaches a desired final constant pose as images . This is another simple example of setpoint control.
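A sketch of this particular controller, with the gravity vector function and the gain matrices treated as assumptions, is simply the following.

```python
import numpy as np

def pd_gravity_compensation(q, qd, q_des, g_fn, Kp, Kd):
    # tau = g(q) + Kp (q_des - q) - Kd qd: cancel gravity only and add PD action.
    return g_fn(q) + Kp @ (q_des - q) - Kd @ qd
```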

Theorem 6.3 shows that the method of approximate dynamic inversion can be used to achieve setpoint control when only the nonlinear term due to images is canceled and a PD controller is employed in the outer loop. Many other controllers based on approximate dynamic inversion appear in the literature. The next example provides one technique for achieving a stabilizing controller in the presence of disturbances and uncertainty using a discontinuous control law. This control law is written in a simple form, one that emphasizes that an asymptotically stable response is achieved if the unknown uncertainty images is small enough. Practical versions of the theorem, that establish a priori bounds on the uncertainty images , can be found in the literature. See for example [15].

Theorem 6.5 shows that one means of constructing a tracking controller that accommodates uncertainty in the system dynamics is to introduce the switching feedback signal images defined in Equation (6.28). Such a controller achieves good performance, in principle. If the disturbance or unknown dynamics that gives rise to the mismatch images satisfies images for all images , then the tracking error and its derivative converge to zero asymptotically. Still, there are a couple of troublesome issues with such a “hard” switching controller. One difficulty is theoretical in nature. As mentioned earlier, when the right hand side of the governing system of ordinary differential equations is discontinuous, the rigorous justification for the Lyapunov stability argument must be expressed in terms of generalized solutions. Such an analysis is beyond the scope of this text. See [3,13,30] for a discussion of the details in this case. In addition to such theoretical considerations, there are practical reasons why the “hard” switching controller can be problematic in applications. It is common that this controller exhibits high frequency oscillation as the actuation varies. Moreover, it is not simple to predict when a particular system will exhibit such a pathological response regime because the closed loop system is nonlinear.

The following theorem illustrates that it is possible to address some of these problems by using a “smoothed” switching control. The smoothed switching controller introduces another parameter images that defines a region over which the control input varies in a smooth way between the large output amplitudes of the “hard” switching controller. The right hand side of the resulting closed loop system is continuous, and the system can be described by a conventional, continuous Lyapunov function. As a result, more esoteric notions of generalized solutions are not required for the analysis or interpretation of this control law. In practical terms, the introduction of the parameter images gives an explicit bound on the size of the set over which the switching control varies in amplitude. It is possible as a consequence to eliminate the high frequency oscillation associated with chattering control input signals.
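The difference between the two switching laws can be made concrete with a small sketch. The signal s, the gain rho, and the boundary layer width eps below are placeholders, since Equation (6.28) is not reproduced here; the smoothed version replaces the discontinuous sign function with a saturation over a layer of width eps.

```python
import numpy as np

def hard_switch(s, rho):
    # "Hard" switching term: discontinuous at s = 0 and prone to chattering.
    return -rho * np.sign(s)

def smooth_switch(s, rho, eps):
    # "Smoothed" switching term: varies linearly inside the boundary layer
    # |s| < eps and matches the hard law outside of it.
    return -rho * np.clip(s / eps, -1.0, 1.0)
```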


Figure 6.15 Visualization of uniform ultimate boundedness.

The next two examples study controllers based on approximate dynamic inversion for typical robotic systems.

6.8 Controllers Based on Passivity

The derivation of controllers based on approximate dynamic inversion leads to a family of practical control strategies, ones that are designed for a number of different performance metrics. Another popular class of controllers has been derived that relies on the skew symmetry of the matrix images and associated passivity properties of the governing equations. It has been shown in Chapter that the governing equations for a natural system can be written in the form

(6.35) equation

For this equation of motion the matrix images is skew symmetric. That is, the identity

equation

holds for any vector images . As before, define the tracking error to be images .

The construction of controllers based on passivity can be carried out by introducing the filtered tracking error images and the auxiliary variables images and images as

(6.36) equation
(6.37) equation
(6.38) equation

where images is a positive diagonal matrix. The controller based on passivity chooses the control input

(6.39) equation

where images is a positive, diagonal gain matrix. With this choice of control input images , the closed loop system dynamics are governed by the equations

equation

or

equation
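A sketch of this class of controller, written in the style of the Slotine and Li passivity based tracking law, is given below. The model functions, the sign convention for the tracking error, and the gain matrices are assumptions, since the chapter's Equations (6.36) through (6.39) are not reproduced here.

```python
import numpy as np

def passivity_controller(q, qd, q_des, qd_des, qdd_des,
                         M_fn, C_fn, g_fn, Lam, K):
    # Tracking error e = q - q_des (assumed sign convention).
    e, ed = q - q_des, qd - qd_des
    v = qd_des - Lam @ e           # auxiliary velocity
    a = qdd_des - Lam @ ed         # auxiliary acceleration
    r = qd - v                     # filtered tracking error, r = ed + Lam e
    return M_fn(q) @ a + C_fn(q, qd) @ v + g_fn(q) - K @ r
```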

6.9 Actuator Models

Modern robotic systems, including the diverse examples described in Chapter , utilize a wide array of actuators. The actuators may include conventional electric motors, hydraulic cylinders, or pneumatic pistons. These common actuation systems have numerous variants and individual designs, each having specific advantages and drawbacks. They may require models with tailored governing equations to represent the physics of their operation. Each may exhibit its own characteristic nonlinearities, which must be accounted for in modeling and control synthesis. Moreover, ever increasing numbers of novel and unconventional actuators appear in robotics applications each year. These include actuators based on shape memory alloys, biomaterials, electrochemical materials, electrostrictive materials, and magnetostrictive materials. The research into alternative systems is driven by the need for more compact, lightweight, high authority, and high bandwidth actuation systems.

6.9.1 Electric Motors

Of all the possible actuation devices, the electric motor is the most common in robotic systems. Nearly all electric motors operate based on the principle of electromagnetic induction, whereby a current-carrying wire immersed in a magnetic field experiences a force. Electric motors operate by passing a current through loops of wire that are aligned relative to an external magnetic field so that the force turns the rotor. Electric motors are popular owing to their relative simplicity, fast response, and large startup torque. There are many different types of electric motors, including direct current (DC), induction, synchronous, brushless, and stepper motors. This section focuses on the fundamentals of electric motors and stresses those aspects of electric motor architecture that are common to many electric motors.

This section presents the physical foundations and derives the governing equations for a permanent magnet DC motor, shown schematically in Figure 6.21. A DC motor works by virtue of the Lorentz force law. This law can be used to show that the force images acting on a conductor of length images that carries current images in the magnetic field having magnetic flux images is given by images , where images is a unit vector along the length of the wire. Consider a loop of wire that rotates in the magnetic field having magnetic flux images along the unit vector images as shown in Figure 6.22. In this case the force on the wire from images to images is given by

equation

A similar calculation shows that the force acting on the wire from images to images is images . The net torque applied on the wire loop in the configuration shown is therefore

equation

This torque will cause the loop to rotate counterclockwise about the images axis until the loop passes the images images plane.


Figure 6.21 Permanent magnet DC motor.


Figure 6.22 Loop of wire carrying a current in the magnetic field having flux images .

If the loop rotates and passes the vertical plane in Figure 6.22, the sign of the moment about the images axis changes. A DC motor as depicted in Figure 6.21 utilizes a commutator to avoid the reversal in sign of the torque generated in this example of a rotating loop of wire. The primary components of the DC motor depicted in Figure 6.21 include a stator that contains the north and south magnetic poles, the armature that rotates relative to the stator, and the commutator that is fixed to the armature. The brushes maintain a sliding contact between the commutator and the power source that drives the motor. The commutator is made of segments that are electrically isolated from each other and are fixed to the rotating armature. The ends of the wire loops are connected to the commutator segments. The commutator segments rotate relative to the stator and maintain contact with the external power supply through the brushes. Practical motors have windings that contain many loops, instead of the single loop shown in Figure 6.21. The resultant torque generated by a winding having images loops is

equation

This expression may be recast in terms of a single torque constant images that collects the electromechanical properties of the motor, such that

(6.40) equation

Equation (6.40) provides a characterization of how the applied torque varies with the input current, but it does not describe how the current varies as a function of time. Whenever a conductor moves in a magnetic field, a voltage develops across that conductor. This induced potential difference is the back electromotive force (EMF) voltage. Faraday's law states that the back EMF voltage is equal to the time derivative of the magnetic flux linkage images in the winding

equation

The magnetic flux linkage images in the winding that contains images turns is

equation

where the magnetic flux linkage images in a single loop is defined as

equation

In this equation images is the magnetic flux, images is a unit vector perpendicular to the loop of wire, and the integral is carried out over the area enclosed by the wire. The linkage images is computed to be

equation

where images is the angle between the images axis and the loop of wire (images for the example under consideration). The voltage that develops across the ends of the winding is consequently

equation

The constant images is the back EMF constant of the electric motor. Kirchhoff's voltage law can now be applied around the circuit formed by the power supply, brushes, commutator, and armature windings to show that

equation

where images is the armature inductance and images is the armature resistance. These results are summarized in the following theorem.
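The relations assembled in this subsection, the torque constant, the back EMF constant, and Kirchhoff's voltage law, can be collected into a small simulation model. The load model (a single rotor inertia with viscous friction) and all numerical values below are illustrative assumptions.

```python
# tau = Kt * i, back EMF = Kb * thetadot, and L di/dt + R i + Kb thetadot = v,
# coupled to a rotor with inertia J and viscous friction b.
import numpy as np
from scipy.integrate import solve_ivp

J, b = 1e-4, 1e-5                  # rotor inertia and viscous friction (assumed)
Kt, Kb = 0.05, 0.05                # torque and back EMF constants (assumed)
L, R = 1e-3, 1.0                   # armature inductance and resistance (assumed)

def motor(t, x, v_in):
    theta, thetadot, i = x
    didt = (v_in - R * i - Kb * thetadot) / L
    thetaddot = (Kt * i - b * thetadot) / J
    return [thetadot, thetaddot, didt]

sol = solve_ivp(motor, (0.0, 0.5), [0.0, 0.0, 0.0], args=(12.0,), max_step=1e-4)
print(sol.y[1, -1])                # steady state speed, roughly 12 / Kb for small b
```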

The analysis in Example 6.11 is based on the application of the Newton–Euler equations for the individual bodies that make up the robotic system and the application of Kirchhoff's voltage law for the electrical circuit. This approach can be used to study any robotic system. Often it can be advantageous to deduce the form of the equations of motion that include actuator physics using principles of analytical mechanics. This strategy simply adds to the kinetic and potential energy the appropriate additional terms introduced by the physics of the actuator. In such a strategy it is often possible to derive the equations of motion without an actuator model, and then to add terms subsequently that account for actuator physics. The following example illustrates the utility of this method.

6.9.2 Linear Actuators

This book focuses on constructing robotic systems primarily through the introduction of either revolute or prismatic joints, or a superposition of these joints. The previous section introduced the fundamental underlying principles by which a DC motor converts electrical energy into rotational motion. These systems can be applied directly to drive revolute joints in robotic systems. Actuators that are used commonly to drive prismatic joints include hydraulic cylinders, pneumatic pistons, and electromechanical linear motors. Hydraulic and pneumatic actuation can be attractive in applications that require large loads and stroke. Earth moving machinery, such as an excavator or bulldozer, makes use of hydraulic cylinders. Electrical linear motors are used in applications that require rapid response and portability, and they are common components used for actuation of robotic systems. This section focuses on the class of electromechanical linear motors.

An electromechanical linear motor is an actuator that combines a conventional electric motor and a mechanical subsystem to convert the rotational motion of the motor into translational motion. The mechanical subsystem may consist of a screw mechanism or gears, for example. Figure 6.25 illustrates the primary components of a typical electromechanical linear motor. The electric motor rotates the lead screw which translates the linear stage along the guide rails. The drive nut embedded in the linear stage is prevented from rotating by the guide rails of the drive casing as it travels along the lead screw.
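For an ideal, lossless lead screw the conversion between the rotational and translational variables follows directly from the screw lead. The relations below ignore friction and screw efficiency, and the numerical lead value is only an example.

```python
import numpy as np

p = 0.005                                  # screw lead [m per revolution] (assumed)

def stage_position(theta):                 # motor angle [rad] -> stage translation [m]
    return (p / (2.0 * np.pi)) * theta

def stage_force(tau):                      # motor torque [N m] -> axial force [N]
    return (2.0 * np.pi / p) * tau

def reflected_inertia(m_stage):            # stage mass seen as inertia at the motor shaft
    return m_stage * (p / (2.0 * np.pi))**2
```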


Figure 6.25 Schematic of an electromechanical linear motor.

6.10 Backstepping Control and Actuator Dynamics

This chapter has discussed several methods for deriving feedback controllers having the structure images for robotic systems that take the form

(6.45) equation

Stability and convergence of the controlled system are based on analysis of a Lyapunov function that is tailored to the form of the governing Equation (6.45) and the specific feedback input images . As noted in the previous section, it is rare for actuation torques or forces to be directly controlled in practice. It is much more common that commanded inputs take the form of voltages or currents that drive motors, which in turn generate forces or moments that act on the robot. It has been shown that one of the general models that includes actuator dynamics arising from DC motors or electromechanical linear actuators consists of a set of coupled mechanical and electrical subsystems

(6.46) equation
(6.47) equation

where images is the diagonal images inductance matrix, images is the diagonal images resistance matrix, images is the diagonal images back EMF constant matrix, and images is the diagonal images torque constant matrix. Equations (6.46) and (6.47) can also be rewritten in first order form as a pair of equations

(6.48) equation
(6.49) equation

Suppose that an ideal feedback law images could be imposed directly in Equation (6.48). Also suppose that the stability of the motion of images that would result when this ideal feedback is substituted into Equation (6.48) is guaranteed by a Lyapunov function that satisfies

(6.50) equation

However, when the coupled pair of Equations (6.48) and (6.49) are considered together, it is the input images for images that is imposed, and it cannot be guaranteed that the desired control law images holds for each images . Define a new state images as

equation

that measures how closely the evolution of the coupled system comes to satisfying the ideal control law. With the introduction of the new state images , the governing equations in first order form can now be expressed as

equation

By using the assumption that the control law images corresponds to the Lyapunov function images that satisfies the conditions in Equation (6.50), it is possible to define a feedback controller for the coupled pair of equations. Choose the Lyapunov function

equation

When the derivative of the Lyapunov function images is calculated along the trajectories of the coupled pair of equations,

equation

Now suppose images is chosen to be

(6.51) equation

For this case, it follows that

equation

Therefore, the equilibrium at the origin of the coupled dynamics governing images is asymptotically stable. The use of backstepping control is shown in the following example.
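The following toy example (an assumed scalar system, not the chapter's robot and actuator model) shows the same backstepping construction end to end: a virtual control stabilizes the first state, the deviation of the second state from that virtual control is added to the Lyapunov function, and the true input drives that deviation to zero.

```python
# Plant: x1dot = x1**3 + x2, x2dot = u. The virtual control phi(x1) = -x1**3 - x1
# would give x1dot = -x1; with eta = x2 - phi(x1), the composite Lyapunov
# function Vc = 0.5*x1**2 + 0.5*eta**2 satisfies Vcdot = -x1**2 - k*eta**2
# under the input chosen below, so the origin is globally asymptotically stable.
import numpy as np
from scipy.integrate import solve_ivp

k = 5.0

def phi(x1):
    return -x1**3 - x1

def dphi(x1):
    return -3.0 * x1**2 - 1.0

def closed_loop(t, x):
    x1, x2 = x
    eta = x2 - phi(x1)
    x1dot = x1**3 + x2
    u = dphi(x1) * x1dot - x1 - k * eta   # cancels the cross term and damps eta
    return [x1dot, u]

sol = solve_ivp(closed_loop, (0.0, 10.0), [1.0, 0.0], max_step=0.01)
print(sol.y[:, -1])                # both states converge toward zero
```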

6.11 Problems for Chapter 6, Control of Robotic Systems

6.11.1 Problems on Gravity Compensation and PD Setpoint Control

  1. Problem 6.1 A two degree of freedom PUMA model was derived in Problem 5.21 for the robot depicted in Figures 5.25 and 5.26. The generalized coordinates for this robot are selected to be
    equation

    where images and images are the angles for revolute joints 1 and 2, respectively. The generalized forces are

    equation

    where images and images are the actuation torques that act about the revolute joints 1 and 2, respectively. Let the system parameters for this robot be images kg and images m. Derive the controller that uses PD feedback with gravity compensation as described in Theorem 6.4 for this robot to achieve setpoint control. Write a program to simulate the performance of the controller. Plot the state trajectories, setpoint error, and control inputs as a function of time for various choices of the initial conditions, the target state and the choice of feedback gains.

  2. Problem 6.2 A three degree of freedom model was derived in Problem 5.22 for the PUMA robot depicted in Figures 5.25 and 5.26. The generalized coordinates for this robot are selected to be
    equation

    where images and images are the angles for revolute joints images and 3, respectively. The generalized forces are

    equation

    where images , and images are the actuation torques that act at revolute joints images , and 3, respectively. Let the system parameters for this robot be images m, images m, images m, images m, and images kg. Derive the controller that uses PD feedback with gravity compensation as described in Theorem 6.4 for this robot to achieve setpoint control. Write a program to simulate the performance of the controller. Plot the state trajectories, setpoint error, and control inputs as a function of time for various choices of the initial conditions, the target state and the choice of feedback gains.

  3. Problem 6.3 A two degree of freedom model was derived in Problem 5.23 for the PUMA robot depicted in Figures 5.25 and 5.26. The generalized coordinates for this robot are selected to be
    equation

    where images and images are the angles for revolute joints 1 and 2, respectively. The generalized forces are

    equation

    where images and images are the actuation torques that act at the revolute joints 1 and 2, respectively. Let the system parameters for this robot be images kg, images kg images , images kg images , images kg images , images m, and images m. Derive the controller that uses PD feedback with gravity compensation as described in Theorem 6.4 for this robot to achieve setpoint control. Write a program to simulate the performance of the controller. Plot the state trajectories, setpoint error, and control inputs as a function of time for various choices of the initial conditions, the target state and the choice of feedback gains.

  4. Problem 6.4 A three degree of freedom model was derived in Problem 5.24 for the Cartesian robot depicted in Figure 5.27. The generalized coordinates for this robot are selected to be
    equation

    where images , and images are the translations in the inertial images directions, respectively. The generalized forces are

    equation

    where images , and images are the actuation forces acting along the prismatic joints along the inertial images , and images directions, respectively. Let the system parameters for this robot be images kg, images kg, and images kg. Derive the controller that uses PD feedback with gravity compensation as described in Theorem 6.4 for this robot to achieve setpoint control. Write a program to simulate the performance of the controller. Plot the state trajectories, setpoint error, and control inputs as a function of time for various choices of the initial conditions, the target state and the choice of feedback gains.

  5. Problem 6.5 A three degree of freedom model was derived in Problem 5.25 for the spherical wrist depicted in Figures 5.28 and 5.29. The generalized coordinates for this robot are selected to be
    equation

    where images and images are the angles for joints images , and 3, respectively. The generalized forces are

    equation

    where images , and images are the actuation torques that act at joints images , and 3, respectively. Let the system parameters for this robot be images kg, images kg, images kg, images kg images , images kg images , images kg images , images kg images , images kg images , images m, and images m. Derive the controller that uses PD feedback with gravity compensation as described in Theorem 6.4 for this robot to achieve setpoint control. Write a program to simulate the performance of the controller. Plot the state trajectories, setpoint error, and control inputs as a function of time for various choices of the initial conditions, the target state and the choice of feedback gains.

  6. Problem 6.6 A three degree of freedom model was derived in Problem 5.26 for the SCARA robot depicted in Figures 5.30 and 5.31. The generalized coordinates for this robot are selected to be
    equation

    where images and images are the joint variables for revolute joints 1 and 2, respectively, and images is the joint variable for prismatic joint 3. The generalized forces are

    equation

    where images and images are the actuation moments that drive revolute joints 1 and 2, respectively, and images is the actuation force that drives prismatic joint 3. Let the system parameters for this robot be images kg, images kg, images kg, images m, images m, images m, and images m. Derive the controller that uses PD feedback with gravity compensation as described in Theorem 6.4 for this robot to achieve setpoint control. Write a program to simulate the performance of the controller. Plot the state trajectories, setpoint error, and control inputs as a function of time for various choices of the initial conditions, the target state and the choice of feedback gains.

  7. Problem 6.7 A three degree of freedom model was derived in Problem 5.27 for the robot depicted in Figures 5.30 and 5.31. The generalized coordinates for this robot are selected to be
    equation

    where images and images are the joint variables for revolute joints 1 and 2, respectively, and images is the displacement along prismatic joint 3. The generalized forces are

    equation

    where images and images are the joint torques for revolute joints 1 and 2, respectively, and images is the joint force for prismatic joint 3. Let the system parameters for this robot be images kg, images kg, images kg, images m, images m, images kg images , images kg images , images kg images . Derive the controller that uses PD feedback with gravity compensation as described in Theorem 6.4 for this robot to achieve setpoint control. Write a program to simulate the performance of the controller. Plot the state trajectories, setpoint error, and control inputs as a function of time for various choices of the initial conditions, the target state and the choice of feedback gains.

  8. Problem 6.8 A three degree of freedom model was derived in Problem 5.28 for the robot depicted in Figures 5.32 and 5.33. The generalized coordinates for this robot are selected to be
    equation

    where images is the joint angle for revolute joint 1, and images and images are the displacements for prismatic joints 2 and 3, respectively. The generalized forces are

    equation

    where images is the actuation torque that acts at revolute joint 1, and images and images are the actuation forces that act along prismatic joints 2 and 3, respectively. Let the system parameters for this robot be images kg, images kg, images kg, images m, images m, images m, images m, and images m. Derive the controller that uses PD feedback with gravity compensation as described in Theorem 6.4 for this robot to achieve setpoint control. Write a program to simulate the performance of the controller. Plot the state trajectories, setpoint error, and control inputs as a function of time for various choices of the initial conditions, the target state and the choice of feedback gains.

  9. Problem 6.9 A three degree of freedom model was derived in Problem 5.29 for the robot depicted in Figures 5.32 and 5.33. The generalized coordinates for this robot are selected to be
    equation

    where images is the angle of revolute joint 1, and images and images are the displacements for prismatic joints 2 and 3, respectively. The generalized forces are

    equation

    where images is the actuation torque for revolute joint 1, and images and images are the actuation forces that act along prismatic joints 2 and 3, respectively. Let the system parameters for this robot be images kg, images kg, images kg, images kg images, images kg images, images m, images m, images m, images m, and images m. Derive the controller that uses PD feedback with gravity compensation as described in Theorem 6.4 for this robot to achieve setpoint control. Write a program to simulate the performance of the controller. Plot the state trajectories, setpoint error, and control inputs as a function of time for various choices of the initial conditions, the target state and the choice of feedback gains. A sketch of one possible simulation program for this family of setpoint control problems is given after this list.
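
The following is a minimal simulation sketch for the setpoint control problems above, written in Python. It uses a simple three axis Cartesian (PPP) robot as a stand-in model rather than any of the specific robots in these problems; the masses, gains, setpoint, and solver settings are illustrative assumptions. For a given problem, the constant inertia matrix and gravity vector below should be replaced by the inertia matrix and gravity terms derived for that robot, while the control law keeps the PD plus gravity compensation structure of Theorem 6.4.

```python
# A minimal simulation sketch of setpoint control with PD feedback plus
# gravity compensation, in the spirit of Theorem 6.4.  The three axis
# Cartesian (PPP) robot below is only a stand-in: the masses, gains,
# setpoint, and initial state are illustrative assumptions, not the
# parameters prescribed in the problems above.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

m1, m2, m3 = 10.0, 5.0, 2.0                      # assumed link masses [kg]
grav = 9.81                                      # gravitational acceleration [m/s^2]
M = np.diag([m1 + m2 + m3, m2 + m3, m3])         # constant inertia matrix of the gantry

def g_vec(q):
    # Gravity vector: only the vertical (third) prismatic joint works against gravity.
    return np.array([0.0, 0.0, m3 * grav])

Kp = np.diag([50.0, 50.0, 50.0])                 # proportional gains (assumed)
Kd = np.diag([20.0, 20.0, 20.0])                 # derivative gains (assumed)
q_des = np.array([0.5, 0.3, 0.2])                # desired setpoint [m] (assumed)

def control(q, qdot):
    # tau = g(q) + Kp (q_des - q) - Kd qdot : PD feedback with gravity compensation
    return g_vec(q) + Kp @ (q_des - q) - Kd @ qdot

def dynamics(t, x):
    q, qdot = x[:3], x[3:]
    tau = control(q, qdot)
    qddot = np.linalg.solve(M, tau - g_vec(q))   # M qddot + g(q) = tau for this model
    return np.concatenate([qdot, qddot])

sol = solve_ivp(dynamics, (0.0, 5.0), np.zeros(6), max_step=0.01)

err = q_des[:, None] - sol.y[:3]                 # setpoint error for each joint
plt.plot(sol.t, err.T)
plt.xlabel("time [s]"); plt.ylabel("setpoint error [m]")
plt.legend(["joint 1", "joint 2", "joint 3"])
plt.show()
```

The same program structure produces the state trajectories and control inputs by plotting sol.y and the values returned by control along the solution.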

6.11.2 Problems on Computed Torque Tracking Control

  1. Problem 6.10 Consider the robot studied in Problem 6.1. Derive a tracking controller that uses the exact computed torque control with the outer loop selected to be PD feedback as in Theorem 6.3. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  2. Problem 6.11 Consider the robot studied in Problem 6.2. Derive a tracking controller that uses the exact computed torque control with the outer loop selected to be PD feedback as in Theorem 6.3. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  3. Problem 6.12 Consider the robot studied in Problem 6.3. Derive a tracking controller that uses the exact computed torque control with the outer loop selected to be PD feedback as in Theorem 6.3. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  4. Problem 6.13 Consider the robot studied in Problem 6.4. Derive a tracking controller that uses the exact computed torque control with the outer loop selected to be PD feedback as in Theorem 6.3. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  5. Problem 6.14 Consider the robot studied in Problem 6.5. Derive a tracking controller that uses the exact computed torque control with the outer loop selected to be PD feedback as in Theorem 6.3. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  6. Problem 6.15 Consider the robot studied in Problem 6.6. Derive a tracking controller that uses the exact computed torque control with the outer loop selected to be PD feedback as in Theorem 6.3. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  7. Problem 6.16 Consider the robot studied in Problem 6.7. Derive a tracking controller that uses the exact computed torque control with the outer loop selected to be PD feedback as in Theorem 6.3. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  8. Problem 6.17 Consider the robot studied in Problem 6.8. Derive a tracking controller that uses the exact computed torque control with the outer loop selected to be PD feedback as in Theorem 6.3. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  9. Problem 6.18 Consider the robot studied in Problem 6.9. Derive a tracking controller that uses the exact computed torque control with the outer loop selected to be PD feedback as in Theorem 6.3. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains. A sketch of one possible simulation program for these computed torque tracking problems is given after this list.
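
The following minimal sketch illustrates the structure of a computed torque tracking simulation. A single link pendulum stands in for the robot dynamics; it is not a solution to any specific problem above. The inner loop computes tau = M(q) a + n(q, qdot), and the outer loop selects a as the desired acceleration plus PD feedback on the tracking error, as in Theorem 6.3. The model parameters, gains, and desired trajectory are illustrative assumptions; for a given problem, M(q) and n(q, qdot) should be replaced by the terms derived in Chapter 5.

```python
# A minimal sketch of exact computed torque (inverse dynamics) tracking control
# with a PD outer loop, in the spirit of Theorem 6.3.  A single link pendulum
# stands in for the robot dynamics; all numerical values are assumptions.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

m, lc, J, grav = 1.0, 0.5, 0.05, 9.81            # assumed pendulum parameters

def M(q):
    # generalized inertia matrix (1x1 for this single degree of freedom model)
    return np.array([[J + m * lc**2]])

def n(q, qdot):
    # nonlinear terms; only gravity appears for this model
    return np.array([m * grav * lc * np.sin(q[0])])

Kp = np.array([[100.0]])                         # outer loop proportional gain (assumed)
Kd = np.array([[20.0]])                          # outer loop derivative gain (assumed)

def desired(t):
    # desired trajectory, velocity, and acceleration (assumed)
    return np.array([np.sin(t)]), np.array([np.cos(t)]), np.array([-np.sin(t)])

def control(t, q, qdot):
    qd, qd_dot, qd_ddot = desired(t)
    # outer loop: a = qd_ddot + Kd (qd_dot - qdot) + Kp (qd - q)
    a = qd_ddot + Kd @ (qd_dot - qdot) + Kp @ (qd - q)
    # inner loop: tau = M(q) a + n(q, qdot) cancels the nonlinear dynamics exactly
    return M(q) @ a + n(q, qdot)

def dynamics(t, x):
    q, qdot = x[:1], x[1:]
    tau = control(t, q, qdot)
    qddot = np.linalg.solve(M(q), tau - n(q, qdot))
    return np.concatenate([qdot, qddot])

sol = solve_ivp(dynamics, (0.0, 10.0), np.array([0.5, 0.0]), max_step=0.01)

plt.plot(sol.t, np.sin(sol.t) - sol.y[0])        # tracking error
plt.xlabel("time [s]"); plt.ylabel("tracking error [rad]")
plt.show()
```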

6.11.3 Problems on Dissipativity Based Tracking Control

  1. Problem 6.19 Consider the robot studied in Problem 6.1. Derive a tracking controller that uses the controller based on dissipativity principles in Theorem 6.7. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  2. Problem 6.20 Consider the robot studied in Problem 6.2. Derive a tracking controller that uses the controller based on dissipativity principles in Theorem 6.7. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  3. Problem 6.21 Consider the robot studied in Problem 6.3. Derive a tracking controller that uses the controller based on dissipativity principles in Theorem 6.7. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  4. Problem 6.22 Consider the robot studied in Problem 6.4. Derive a tracking controller that uses the controller based on dissipativity principles in Theorem 6.7. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  5. Problem 6.23 Consider the robot studied in Problem 6.5. Derive a tracking controller that uses the controller based on dissipativity principles in Theorem 6.7. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  6. Problem 6.24 Consider the robot studied in Problem 6.6. Derive a tracking controller that uses the controller based on dissipativity principles in Theorem 6.7. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  7. Problem 6.25 Consider the robot studied in Problem 6.7. Derive a tracking controller that uses the controller based on dissipativity principles in Theorem 6.7. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  8. Problem 6.26 Consider the robot studied in Problem 6.8. Derive a tracking controller that uses the controller based on dissipativity principles in Theorem 6.7. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains.
  9. Problem 6.27 Consider the robot studied in Problem 6.9. Derive a tracking controller that uses the controller based on dissipativity principles in Theorem 6.7. Write a program to simulate the performance of the controller. Plot the state trajectories, tracking error, and control inputs as a function of time for various choices of the initial conditions, desired trajectories, and the choice of feedback gains. A sketch of one possible simulation program for these dissipativity based tracking problems is given after this list.
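
The following minimal sketch illustrates one way to simulate a dissipativity based tracking controller. It implements a passivity based law of the Slotine-Li type, built from a reference velocity and a feedback term on the combined tracking error, as a stand-in for the specific control law of Theorem 6.7, which should be substituted together with the inertia, Coriolis, and gravity terms of the robot under study. The single link pendulum model, gains, and desired trajectory are illustrative assumptions.

```python
# A minimal sketch of a dissipativity (passivity) based tracking controller of
# the Slotine-Li type; substitute the exact control law of Theorem 6.7 and the
# M(q), C(q, qdot), g(q) terms of the robot under study.  The single link
# pendulum model, gains, and desired trajectory are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

m, lc, J, grav = 1.0, 0.5, 0.05, 9.81            # assumed model parameters

def M(q):
    return np.array([[J + m * lc**2]])           # inertia matrix

def C(q, qdot):
    return np.array([[0.0]])                     # no Coriolis term for this 1 DOF model

def g(q):
    return np.array([m * grav * lc * np.sin(q[0])])  # gravity vector

Lam = np.array([[5.0]])                          # error weighting matrix (assumed)
Kd = np.array([[10.0]])                          # gain on the combined error (assumed)

def desired(t):
    # desired trajectory, velocity, and acceleration (assumed)
    return np.array([np.sin(t)]), np.array([np.cos(t)]), np.array([-np.sin(t)])

def control(t, q, qdot):
    qd, qd_dot, qd_ddot = desired(t)
    e, e_dot = qd - q, qd_dot - qdot
    qr_dot = qd_dot + Lam @ e                    # reference velocity
    qr_ddot = qd_ddot + Lam @ e_dot              # reference acceleration
    s = qr_dot - qdot                            # combined tracking error
    return M(q) @ qr_ddot + C(q, qdot) @ qr_dot + g(q) + Kd @ s

def dynamics(t, x):
    q, qdot = x[:1], x[1:]
    tau = control(t, q, qdot)
    qddot = np.linalg.solve(M(q), tau - C(q, qdot) @ qdot - g(q))
    return np.concatenate([qdot, qddot])

sol = solve_ivp(dynamics, (0.0, 10.0), np.array([0.5, 0.0]), max_step=0.01)

plt.plot(sol.t, np.sin(sol.t) - sol.y[0])        # tracking error
plt.xlabel("time [s]"); plt.ylabel("tracking error [rad]")
plt.show()
```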