Chapter 11. Laplace Transform and Continuous Time System Design

11.1. Laplace Transform*

Introduction

The Laplace transform is a generalization of the Continuous-Time Fourier Transform (CTFT). It is used because the CTFT does not converge or exist for many important signals for which the Laplace transform does (e.g., signals with infinite L2 norm), and because it is often notationally cleaner than the CTFT. However, instead of using complex exponentials of the form e^(ⅈωt), with purely imaginary parameters, the Laplace transform uses the more general e^(st), where s = σ + ⅈω is complex, to analyze signals in terms of exponentially weighted sinusoids.

The Laplace Transform

Bilateral Laplace Transform Pair

Although Laplace transforms are rarely solved in practice using integration (tables and computer tools such as Matlab are much more common), we will provide the bilateral Laplace transform pair here for purposes of discussion and derivation. These define the forward and inverse Laplace transformations. Notice the similarities between the forward and inverse transforms. This will give rise to many of the same symmetries found in Fourier analysis.

(11.1)
Laplace Transform
F(s) = ∫_{−∞}^{∞} f(t) e^(−st) dt

(11.2)
Inverse Laplace Transform
f(t) = (1/(2πⅈ)) ∫_{σ−ⅈ∞}^{σ+ⅈ∞} F(s) e^(st) ds

Note

We have defined the bilateral Laplace transform. There is also a unilateral Laplace transform,

(11.3)
F(s) = ∫_{0}^{∞} f(t) e^(−st) dt

which is useful for solving differential equations with nonzero initial conditions. This is similar to the unilateral Z-transform in discrete time.
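Since the forward transform is just an integral, its table entries can be spot-checked numerically. The sketch below (plain Python; the truncation point T is an assumed value chosen so the integrand has decayed) approximates the unilateral transform of e^(−2t)u(t) at a real test point and compares it with the known result 1/(s + 2):

```python
import math

def laplace_numeric(f, s, T=30.0, n=150000):
    # Approximate the unilateral Laplace integral F(s) = ∫_0^∞ f(t) e^(-st) dt
    # with a trapezoidal sum truncated at t = T (assumes the integrand has
    # decayed to ~0 by then).
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

# f(t) = e^(-2t) u(t) has transform 1/(s + 2); test at the real point s = 1.
approx = laplace_numeric(lambda t: math.exp(-2.0 * t), s=1.0)
exact = 1.0 / (1.0 + 2.0)
print(approx, exact)
```

The agreement is to several decimal places; tightening T and n shrinks the remaining error.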

Relation between Laplace and CTFT

Taking a look at the equations describing the Laplace transform and the Continuous-Time Fourier Transform:

(11.4)
Continuous-Time Fourier Transform
ℱ(Ω) = ∫_{−∞}^{∞} f(t) e^(−ⅈΩt) dt

(11.5)
Laplace Transform
F(s) = ∫_{−∞}^{∞} f(t) e^(−st) dt

We can see many similarities; first, that ℱ(Ω) = F(s) when s = ⅈΩ; that is, the CTFT is the Laplace transform evaluated along the imaginary axis.

Note

The CTFT is a complex-valued function of a real-valued variable ω. The Laplace transform is a complex-valued function of a complex-valued variable s.

Figure 11.1. Plots

Plots (splanefigure1.png)


Visualizing the Laplace Transform

With the Fourier transform, we had a complex-valued function of a purely imaginary variable, F(ⅈω) . This was something we could envision with two 2-dimensional plots (real and imaginary parts or magnitude and phase). However, with Laplace, we have a complex-valued function of a complex variable. In order to examine the magnitude and phase or real and imaginary parts of this function, we must examine 3-dimensional surface plots of each component.

Figure 11.2. real and imaginary sample plots

Subfigure (a) (laplace1.png)
(a) The Real part of H(s)
Subfigure (b) (laplace2.png)
(b) The Imaginary part of H(s)
Real and imaginary parts of H(s) are now each 3-dimensional surfaces.

Figure 11.3. magnitude and phase sample plots

Subfigure (a) (laplace3.png)
(a) The Magnitude of H(s)
Subfigure (b) (laplace4.png)
(b) The Phase of H(s)
Magnitude and phase of H(s) are also each 3-dimensional surfaces. This representation is more common than real and imaginary parts.

While these are legitimate ways of looking at a signal in the Laplace domain, they are quite difficult to draw and analyze. For this reason, a simpler method has been developed. Although it will not be discussed in detail here, the method of Poles and Zeros is much easier to understand and is the way both the Laplace transform and its discrete-time counterpart, the Z-transform, are represented graphically.

Using a Computer to find the Laplace Transform

Using a computer to find Laplace transforms is relatively painless. Matlab has two functions, laplace and ilaplace, that are both part of the symbolic toolbox, and will find the Laplace and inverse Laplace transforms respectively. This method is generally preferred for more complicated functions. Simpler functions are usually found easily enough by using tables.
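For readers working in Python rather than Matlab, SymPy offers the analogous laplace_transform and inverse_laplace_transform functions. A minimal sketch, assuming SymPy is installed:

```python
from sympy import exp, laplace_transform, inverse_laplace_transform, symbols

t, s = symbols('t s', positive=True)

# Forward transform of e^(-2t) u(t); noconds=True drops the convergence condition.
F = laplace_transform(exp(-2 * t), t, s, noconds=True)
print(F)   # expect 1/(s + 2)

# And back again to the time domain.
f = inverse_laplace_transform(1 / (s + 2), s, t)
print(f)
```

As with Matlab's symbolic toolbox, this is most valuable for transforms too messy for the tables.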

Laplace Transform Definition Demonstration

Figure 11.4. 

LaplaceTransformDemo
Interact (when online) with a Mathematica CDF demonstrating the Laplace Transform. To Download, right-click and save target as .cdf.


Interactive Demonstrations

Figure 11.5. Khan Lecture on Laplace

See the attached video on the basics of the Unilateral Laplace Transform from Khan Academy


Conclusion

The Laplace transform proves a useful, more general form of the Continuous-Time Fourier Transform. It applies equally well to describing systems as well as signals using the eigenfunction method, and to describing a larger class of signals better analyzed using the pole-zero method.

11.2. Common Laplace Transforms*

Table 11.1.
Signal | Laplace Transform | Region of Convergence
δ(t) | 1 | All s
δ(t − T) | e^(−sT) | All s
u(t) | 1/s | Re(s) > 0
−u(−t) | 1/s | Re(s) < 0
t u(t) | 1/s² | Re(s) > 0
tⁿ u(t) | n!/s^(n+1) | Re(s) > 0
−tⁿ u(−t) | n!/s^(n+1) | Re(s) < 0
e^(−λt) u(t) | 1/(s + λ) | Re(s) > −λ
−e^(−λt) u(−t) | 1/(s + λ) | Re(s) < −λ
t e^(−λt) u(t) | 1/(s + λ)² | Re(s) > −λ
tⁿ e^(−λt) u(t) | n!/(s + λ)^(n+1) | Re(s) > −λ
−tⁿ e^(−λt) u(−t) | n!/(s + λ)^(n+1) | Re(s) < −λ
cos(bt) u(t) | s/(s² + b²) | Re(s) > 0
sin(bt) u(t) | b/(s² + b²) | Re(s) > 0
e^(−at) cos(bt) u(t) | (s + a)/((s + a)² + b²) | Re(s) > −a
e^(−at) sin(bt) u(t) | b/((s + a)² + b²) | Re(s) > −a
dⁿδ(t)/dtⁿ | sⁿ | All s

11.3. Properties of the Laplace Transform*

Table 11.2. Table of Laplace Transform Properties
Property | Signal | Laplace Transform | Region of Convergence
Linearity | α x1(t) + β x2(t) | α X1(s) + β X2(s) | At least ROC1 ⋂ ROC2
Time Shifting | x(t − τ) | e^(−sτ) X(s) | ROC
Frequency Shifting (modulation) | e^(ηt) x(t) | X(s − η) | Shifted ROC (s − η must be in the region of convergence)
Time Scaling | x(αt) | (1/|α|) X(s/α) | Scaled ROC (s/α must be in the region of convergence)
Conjugation | x*(t) | X*(s*) | ROC
Convolution | x1(t) * x2(t) | X1(s) X2(s) | At least ROC1 ⋂ ROC2
Time Differentiation | dx(t)/dt | s X(s) | At least ROC
Frequency Differentiation | (−t) x(t) | dX(s)/ds | ROC
Integration in Time | ∫_{−∞}^{t} x(τ) dτ | (1/s) X(s) | At least ROC ⋂ Re(s) > 0
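Any row of the table can be spot-checked numerically. The sketch below (plain Python; the truncation point and step count are assumed values) verifies the time-shifting property for x(t) = e^(−t)u(t) with τ = 1 at the real test point s = 1:

```python
import math

def causal_laplace(f, s, T=40.0, n=200000):
    # ∫_0^∞ f(t) e^(-st) dt via a truncated trapezoidal sum (f assumed causal).
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s_val, tau = 1.0, 1.0
x_shift = lambda t: math.exp(-(t - tau)) if t >= tau else 0.0  # x(t - tau)
lhs = causal_laplace(x_shift, s_val)                 # L{x(t - tau)} at s = 1
rhs = math.exp(-s_val * tau) / (s_val + 1.0)         # e^(-s tau) X(s), X(s) = 1/(s+1)
print(lhs, rhs)
```

Both sides come out near e^(−1)/2, as the property predicts.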

11.4. Inverse Laplace Transform*

Introduction

When using the Laplace transform it is often useful to be able to find h(t) given H(s). There are at least four different methods to do this:

Inspection Method

This "method" is to basically become familiar with the Laplace-transform pair tables and then "reverse engineer".

Example 11.1. 

When given H(s) = 1/(s − α) with an ROC of Re(s) > α, we could determine "by inspection" that h(t) = e^(αt) u(t).


Partial-Fraction Expansion Method

When dealing with linear time-invariant systems the Laplace transform is often a rational function of the form H(s) = B(s)/A(s), a ratio of two polynomials in s. This can also be expressed in factored form, H(s) = A ∏_{k=1}^{M}(s − c_k) / ∏_{k=1}^{N}(s − d_k), where the c_k are the zeros of H(s) and the d_k are the poles.

If M < N and the poles are distinct, then H(s) can be represented as a sum of first-order terms, H(s) = Σ_{k=1}^{N} A_k/(s − d_k). This form allows for easy inversion of each term of the sum using the inspection method and the transform table. If H(s) is not already in this form, partial-fraction expansion is used to put it there. If M ≥ N, then long division must first be performed to express H(s) as a polynomial in s plus a strictly proper rational remainder.

Example 11.2. 

Find the inverse Laplace transform of H(s), where the ROC is Re(s) > 2. In this case M = N = 2, so we first have to use long division to write H(s) as a constant plus a strictly proper rational function. Next, factor the denominator and perform a partial-fraction expansion. Each term of the result can then be inverted using the inspection method and the Laplace transform table; since the ROC is Re(s) > 2, the right-sided (causal) inverse is chosen for each term.
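For simple poles, the partial-fraction coefficients can be computed with the "cover-up" rule r_k = N(d_k)/∏_{j≠k}(d_k − d_j). A small sketch, using a hypothetical H(s) = (s + 3)/((s + 1)(s + 2)) rather than the transfer function of the example above:

```python
def residues(num_coeffs, poles):
    # Partial-fraction residues for H(s) = N(s) / prod(s - p_k), simple poles only:
    #   r_k = N(p_k) / prod_{j != k} (p_k - p_j)   (the "cover-up" rule)
    def N(s):
        acc = 0
        for c in num_coeffs:      # coefficients from the highest power down
            acc = acc * s + c
        return acc
    res = []
    for k, pk in enumerate(poles):
        denom = 1
        for j, pj in enumerate(poles):
            if j != k:
                denom *= pk - pj
        res.append(N(pk) / denom)
    return res

# H(s) = (s + 3)/((s + 1)(s + 2)) = 2/(s + 1) - 1/(s + 2),
# so h(t) = (2 e^(-t) - e^(-2t)) u(t) for the ROC Re(s) > -1.
print(residues([1, 3], [-1, -2]))   # -> [2.0, -1.0]
```

Each residue then pairs with a table entry 1/(s − d_k) ↔ e^(d_k t)u(t) for a right-sided ROC.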


Demonstration of Partial Fraction Expansion

Figure 11.6. 

Interactive experiment illustrating how the Partial Fraction Expansion method is used to solve a variety of numerator and denominator problems. (To view and interact with the simulation, download the free Mathematica player at http://www.wolfram.com/products/player/download.cgi)


Figure 11.7. Khan Lecture on Partial Fraction Expansion


Power Series Expansion Method

When the Laplace transform can be written as a power series, the terms of h(t) can be determined by matching the coefficient of each power of s (or of s⁻¹) against the corresponding transform-table pair.

Example 11.3. 

Now look at the Laplace transform of a finite-length signal. In this case, since there were no poles, we multiplied the factors of H(s) out. Then, by inspection, the corresponding time-domain terms can be read off term by term.


One of the advantages of the power series expansion method is that many functions encountered in engineering problems have tabulated power series. Thus functions such as log, sin, exp, sinh, etc., can be easily inverted.

Example 11.4. 

Suppose H(s) = log(1 + α s⁻¹). Noting the power series log(1 + x) = Σ_{n=1}^{∞} ((−1)^(n+1)/n) x^n, the transform can be expanded term by term in powers of s⁻¹, and each term inverted from the table.


Contour Integration Method

Without going into much detail, the inverse transform is given by

h(t) = (1/(2πⅈ)) ∫_{σ−ⅈ∞}^{σ+ⅈ∞} H(s) e^(st) ds

where the integral is taken along a vertical line Re(s) = σ lying within the ROC of H(s); in practice the contour is closed and evaluated using residues. To further expand on this method of finding the inverse requires knowledge of complex variable theory and thus will not be addressed in this module.

Demonstration of Contour Integration

Figure 11.8. 

Interactive experiment illustrating how the contour integral is applied on a simple example. For a more in-depth discussion of this method, some background in complex analysis is required. (To view and interact with the simulation, download the free Mathematica player at http://www.wolfram.com/products/player/download.cgi)


Conclusion

The inverse Laplace transform is very useful to know for the purposes of designing a filter, and there are many ways in which to calculate it, drawing from many disparate areas of mathematics. All of them nevertheless assist the user in reaching the desired time-domain signal that can then be synthesized in hardware (or software) for implementation in a real-world filter.

11.5. Poles and Zeros in the S-Plane*

Introduction to Poles and Zeros of the Laplace-Transform

It is quite difficult to analyze the Laplace and Z-transforms qualitatively from their magnitude and phase (or real and imaginary parts), since each is a pair of 2-dimensional surfaces in 3-dimensional space. For this reason, it is very common to examine a plot of a transfer function's poles and zeros to try to gain a qualitative idea of what a system does.

Once the Laplace transform of a system has been determined, one can use the information contained in the function's polynomials to graphically represent the function and easily observe many defining characteristics. The Laplace transform will have the below structure, based on rational functions:

H(s) = P(s)/Q(s)

The two polynomials, P(s) and Q(s), allow us to find the poles and zeros of the Laplace transform.

Definition: zeros

1. The value(s) for s where P(s) = 0.

2. The complex frequencies that make the overall gain of the filter transfer function zero.

Definition: poles

1. The value(s) for s where Q(s) = 0.

2. The complex frequencies that make the overall gain of the filter transfer function infinite.
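Finding poles and zeros therefore reduces to polynomial root finding. Where no root-finding library is at hand, a simultaneous (Durand-Kerner/Weierstrass) iteration is a compact option; the sketch below applies it to a hypothetical transfer function H(s) = (s + 1)/(s² + 3s + 2):

```python
def poly_roots(coeffs):
    # Simultaneous (Durand-Kerner / Weierstrass) iteration: finds all complex
    # roots of a polynomial given coefficients from the highest power down,
    # e.g. [1, 3, 2] means s^2 + 3s + 2.
    n = len(coeffs) - 1
    monic = [c / coeffs[0] for c in coeffs]
    def p(s):
        acc = 0j
        for a in monic:
            acc = acc * s + a
        return acc
    roots = [(0.4 + 0.9j) ** k for k in range(n)]   # conventional starting guesses
    for _ in range(200):
        updated = []
        for i, r in enumerate(roots):
            denom = 1 + 0j
            for j, q in enumerate(roots):
                if j != i:
                    denom *= r - q
            updated.append(r - p(r) / denom)
        roots = updated
    return roots

# Hypothetical H(s) = P(s)/Q(s) with P(s) = s + 1 and Q(s) = s^2 + 3s + 2:
zeros = poly_roots([1, 1])        # zero at s = -1
poles = poly_roots([1, 3, 2])     # poles at s = -1 and s = -2
print(zeros, poles)
```

The returned complex values are then plotted as "o" (zeros) and "x" (poles) on the s-plane.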

Example 11.5. 

Below is a simple transfer function with the poles and zeros shown below it.

The zeros are: { – 1}

The poles are:


The S-Plane

Once the poles and zeros have been found for a given Laplace transform, they can be plotted onto the S-plane. The S-plane is a complex plane with an imaginary and a real axis referring to the complex-valued variable s. A point on the plane has coordinates s = σ + ⅈω, with the real part σ along the horizontal axis and the imaginary part ω along the vertical axis. When mapping poles and zeros onto the plane, poles are denoted by an "x" and zeros by an "o". The below figure shows the S-Plane, and examples of plotting zeros and poles onto the plane can be found in the following section.

Figure 11.9. S-Plane

S-Plane (splane.png)

Examples of Pole/Zero Plots

This section lists several examples of finding the poles and zeros of a transfer function and then plotting them onto the S-Plane.

Example 11.6. Simple Pole/Zero Plot

The zeros are: {0}

The poles are:

Figure 11.10. Pole/Zero Plot

Pole/Zero Plot (sp_eg1.png)
Using the zeros and poles found from the transfer function, the one zero is mapped to zero and the two poles are placed at and


Example 11.7. Complex Pole/Zero Plot

The zeros are: {ⅈ, −ⅈ}

The poles are: {−1, (1/2) + (1/2)ⅈ, (1/2) − (1/2)ⅈ}

Figure 11.11. Pole/Zero Plot

Pole/Zero Plot (sp_eg2.png)
Using the zeros and poles found from the transfer function, the zeros are mapped to ±ⅈ, and the poles are placed at −1, (1/2) + (1/2)ⅈ and (1/2) − (1/2)ⅈ.


Example 11.8. Pole-Zero Cancellation

An easy mistake to make with regards to poles and zeros is to think that a function like ((s + 3)(s − 1))/(s − 1) is the same as s + 3. In theory they are equivalent, as the pole and zero at s = 1 cancel each other out in what is known as pole-zero cancellation. However, think about what may happen if this were a transfer function of a system that was created with physical circuits. In this case, it is very unlikely that the pole and zero would remain in exactly the same place. A minor temperature change, for instance, could cause one of them to move just slightly. If this were to occur, a tremendous amount of volatility is created in that area, since there is a change from infinity at the pole to zero at the zero within a very small region of frequencies. This is generally a very bad way to try to eliminate a pole. A much better way is to use control theory to move the pole to a better place.


Repeated Poles and Zeros

It is possible to have more than one pole or zero at any given point. For instance, the discrete-time transfer function H(z) = z 2 will have two zeros at the origin and the continuous-time function will have 25 poles at the origin.

MATLAB - If access to MATLAB is readily available, then you can use its functions to easily create pole/zero plots. Below is a short program that plots the poles and zeros from the Complex Pole/Zero Plot example above onto the S-plane. (The zplane function was written for the z-plane, so it also draws the unit circle, but the pole and zero markers land in the right places.)

	
	% Set up vector for zeros
	z = [j ; -j];

	% Set up vector for poles
	p = [-1 ; .5+.5j ; .5-.5j];

	figure(1);
	zplane(z,p);
	title('Pole/Zero Plot for Complex Pole/Zero Plot Example');
	
      

Interactive Demonstration of Poles and Zeros

Figure 11.12. 

Pole-ZeroDrillDemo
Interact (when online) with a Mathematica CDF demonstrating Pole/Zero Plots. To Download, right-click and save target as .cdf.


Applications for pole-zero plots

Stability and Control theory

Now that we have found and plotted the poles and zeros, we must ask what it is that this plot gives us. Basically what we can gather from this is that the magnitude of the transfer function will be larger when it is closer to the poles and smaller when it is closer to the zeros. This provides us with a qualitative understanding of what the system does at various frequencies and is crucial to the discussion of stability.

Pole/Zero Plots and the Region of Convergence

The region of convergence (ROC) for X(s) in the complex s-plane can be determined from the pole/zero plot. Although several regions of convergence may be possible, where each one corresponds to a different impulse response, there are some choices that are more practical. A ROC can be chosen to make the transfer function causal and/or stable depending on the pole/zero plot.

Filter Properties from ROC

  • If the ROC is the half-plane to the right of the rightmost pole, then the system is causal.

  • If the ROC includes the ⅈω-axis (the imaginary axis), then the system is stable.
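For a continuous-time system with a right-sided impulse response, the ROC is the half-plane to the right of the rightmost pole, and that half-plane contains the ⅈω-axis exactly when every pole has negative real part. A small sketch encoding this check (the pole values are the ones from the Complex Pole/Zero Plot example; the function name is illustrative):

```python
def classify_right_sided(poles):
    # Right-sided impulse response: ROC is Re(s) > max Re(pole).
    # Stable iff that ROC contains the imaginary axis, i.e. iff every
    # pole satisfies Re(pole) < 0.
    boundary = max(p.real for p in poles)
    return {"roc": f"Re(s) > {boundary}", "stable": boundary < 0}

print(classify_right_sided([-1, -0.5 + 0.5j, -0.5 - 0.5j]))
# -> {'roc': 'Re(s) > -0.5', 'stable': True}
```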

Below is a pole/zero plot with a possible ROC of the Laplace transform in the Simple Pole/Zero Plot discussed earlier. The shaded region indicates the ROC chosen for the filter. From this figure, we can see that the filter will be both causal and stable since the above listed conditions are both met.

Example 11.9. 

Figure 11.13. Region of Convergence for the Pole/Zero Plot

Region of Convergence for the Pole/Zero Plot (sp_roc.png)
The shaded area represents the chosen ROC for the transfer function.


Frequency Response and Pole/Zero Plots

The reason it is helpful to understand and create these pole/zero plots is due to their ability to help us easily design a filter. Based on the location of the poles and zeros, the magnitude response of the filter can be quickly understood. Also, by starting with the pole/zero plot, one can design a filter and obtain its transfer function very easily.

Conclusion

Pole-Zero Plots are clearly quite useful in the study of the Laplace and Z transform, affording us a method of visualizing the at times confusing mathematical functions.

11.6. Region of Convergence for the Laplace Transform*

Introduction

With the Laplace transform, the s-plane represents a set of signals (complex exponentials). For any given LTI system, some of these signals may cause the output of the system to converge, while others cause the output to diverge ("blow up"). The set of signals that cause the system's output to converge lie in the region of convergence (ROC). This module will discuss how to find this region of convergence for any continuous-time, LTI system.

The Region of Convergence

The region of convergence, known as the ROC, is important to understand because it defines the region where the Laplace transform exists. The Laplace transform of a signal is defined as

H(s) = ∫_{−∞}^{∞} h(t) e^(−st) dt

The ROC for a given h(t) is defined as the range of s for which the Laplace transform converges. If we consider a causal, complex exponential, h(t) = e^(−at) u(t), we get the equation

H(s) = ∫_{0}^{∞} e^(−at) e^(−st) dt = ∫_{0}^{∞} e^(−(a + s)t) dt

Evaluating this, we get

H(s) = (−1/(a + s)) (lim_{t→∞} e^(−(a + s)t) − 1)

Notice that this expression will tend to infinity when lim_{t→∞} e^(−(a + s)t) tends to infinity. To understand when this happens, we take one more step by using s = σ + ⅈω to write the limit term as lim_{t→∞} e^(−ⅈωt) e^(−(a + σ)t). Recognizing that e^(−ⅈωt) is sinusoidal, it becomes apparent that e^(−(σ + a)t) is going to determine whether this blows up or not. What we find is that if σ + a is positive, the exponential will be to a negative power, which will cause it to go to zero as t tends to infinity. On the other hand, if σ + a is negative or zero, the exponential will not be to a negative power, which will prevent it from tending to zero and the system will not converge. What all of this tells us is that for a causal signal, we have convergence when

(11.6)
Condition for Convergence
Re(s) > – a

Alternatively, we can note that the Laplace transform converges when h(t) e^(−st) is absolutely integrable. Therefore,

∫_{−∞}^{∞} |h(t) e^(−st)| dt < ∞

must be satisfied for convergence.
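The convergence condition can also be seen numerically: the truncated integral ∫_0^T e^(−(σ + a)t) dt settles toward 1/(σ + a) when σ + a > 0 and grows without bound otherwise. A short sketch with a = 2, using the closed-form value of the truncated integral:

```python
import math

def truncated_integral(sigma, a, T):
    # ∫_0^T e^(-(sigma + a) t) dt, evaluated in closed form.
    c = sigma + a
    if c == 0:
        return T
    return (1.0 - math.exp(-c * T)) / c

a = 2.0
for sigma in (1.0, -3.0):   # Re(s) = 1 lies in the ROC Re(s) > -2; Re(s) = -3 does not
    print(sigma, [truncated_integral(sigma, a, T) for T in (10.0, 20.0, 40.0)])
```

The first row converges toward 1/3; the second blows up as T grows.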

Although we will not go through the process again for anticausal signals, we could. In doing so, we would find that the necessary condition for convergence is when

(11.7)
Necessary Condition for Anti-Causal Convergence
Re(s) < – a

Properties of the Region of Convergence

The Region of Convergence has a number of properties that are dependent on the characteristics of the signal, h(t) .

  • The ROC cannot contain any poles. By definition a pole is a value of s where H(s) is infinite. Since H(s) must be finite for all s in the ROC, there cannot be a pole in the ROC.

  • If h(t) is a finite-duration signal, then the ROC is the entire s-plane. A finite-duration signal is a signal that is nonzero only in a finite interval t1 ≤ t ≤ t2. As long as h(t) is bounded on that interval, the integral ∫_{t1}^{t2} h(t) e^(−st) dt is finite for every finite s, so the transform converges everywhere; in particular, h(t) = c δ(t) converges for all s.

Figure 11.14. 

Figure (finite.png)
An example of a finite-duration signal.

The next properties apply to infinite-duration signals. As noted above, the Laplace transform converges when |H(s)| < ∞. So we can write

|H(s)| = |∫_{−∞}^{∞} h(t) e^(−st) dt| ≤ ∫_{−∞}^{∞} |h(t)| e^(−σt) dt

We can then split the integral into positive-time and negative-time portions, so |H(s)| ≤ N(s) + P(s), where

N(s) = ∫_{−∞}^{0} |h(t)| e^(−σt) dt and P(s) = ∫_{0}^{∞} |h(t)| e^(−σt) dt

In order for |H(s)| to be finite, |h(t)| must be bounded by exponentials. Let us then set |h(t)| ≤ C1 e^(r1 t) for t < 0 and |h(t)| ≤ C2 e^(r2 t) for t ≥ 0. From this some further properties can be derived:

  • If h(t) is a right-sided signal, then the ROC extends to the right of the rightmost pole in H(s). A right-sided signal is a signal where h(t) = 0 for t < t1. Looking at the positive-time portion from the above derivation and applying the bound |h(t)| ≤ C2 e^(r2 t), it follows that P(s) ≤ C2 ∫_{0}^{∞} e^(r2 t) e^(−σt) dt. Thus, in order for this integral to converge, σ = Re(s) > r2, and therefore the ROC of a right-sided signal is of the form Re(s) > r2.

Figure 11.15. 

Figure (rtsided1.png)
A right-sided signal.

Figure 11.16. 

Figure (rtsided2.png)
The ROC of a right-sided signal.

  • If h(t) is a left-sided signal, then the ROC extends to the left of the leftmost pole in H(s). A left-sided signal is a signal where h(t) = 0 for t > t1. Looking at the negative-time portion from the above derivation and applying the bound |h(t)| ≤ C1 e^(r1 t), it follows that N(s) ≤ C1 ∫_{−∞}^{0} e^(r1 t) e^(−σt) dt. Thus, in order for this integral to converge, σ = Re(s) < r1, and therefore the ROC of a left-sided signal is of the form Re(s) < r1.

Figure 11.17. 

Figure (lefsided1.png)
A left-sided signal.

Figure 11.18. 

Figure (lefsided2.png)
The ROC of a left-sided signal.

  • If h(t) is a two-sided signal, the ROC will be a vertical strip in the s-plane, bounded on the left and right by poles. A two-sided signal is a signal with infinite duration in both the positive and negative directions. From the derivation of the above two properties, it follows that if r2 < Re(s) < r1, then both the positive-time and negative-time portions converge and thus H(s) converges as well. Therefore the ROC of a two-sided signal is of the form r2 < Re(s) < r1.

Figure 11.19. 

Figure (twosided1.png)
A two-sided signal.

Figure 11.20. 

Figure (twosided2.png)
The ROC of a two-sided signal.

Examples

To gain further insight it is good to look at a couple of examples.

Example 11.10. 

Lets take The Laplace-transform of is with an ROC at .

Figure 11.21. 

Figure (ex1roc1a.png)
The ROC of

The Laplace transform of is with an ROC at .

Figure 11.22. 

Figure (ex1roc1b.png)
The ROC of

Due to linearity, By observation it is clear that there are two zeros, at 0 and , and two poles, at , and . Following the above properties, the ROC is .

Figure 11.23. 

Figure (ex1roc2.png)
The ROC of


Example 11.11. 

Now take The Laplace transform and ROC of was shown in the example above. The Laplace transform of is with an ROC at .

Figure 11.24. 

Figure (ex2roc1.png)
The ROC of

Once again, by linearity, By observation it is again clear that there are two zeros, at 0 and , and two poles, at , and . In this case, though, the ROC is .

Figure 11.25. 

Figure (ex2roc2.png)
The ROC of .


Graphical Understanding of ROC

Using the demonstration, learn about the region of convergence for the Laplace Transform.

Conclusion

Clearly, in order to craft a system that is actually useful by virtue of being causal and BIBO stable, we must ensure that the ⅈω-axis lies within the region of convergence, which can be ascertained by looking at the pole/zero plot. The region of convergence is the area in the pole/zero plot of the transfer function in which the function exists. For purposes of useful filter design, we prefer to work with rational functions, which can be described by two polynomials, one each for determining the poles and the zeros, respectively.

11.7. Rational Functions and the Laplace Transform*

Introduction

When dealing with operations on polynomials, the term rational function is a simple way to describe a particular relationship between two polynomials.

Definition: rational function

For any two polynomials, A and B, their quotient is called a rational function.

Example 11.12. 

Below is a simple example of a basic rational function, f(x). Note that the numerator and denominator can be polynomials of any order, but the rational function is undefined when the denominator equals zero.

f(x) = (x² − 4)/(x² + 2x − 3)


Properties of Rational Functions

In order to see what makes rational functions special, let us look at some of their basic properties and characteristics. If you are familiar with rational functions and basic algebraic properties, skip to the next section to see how rational functions are useful when dealing with the Laplace transform.

Roots

To understand many of the following characteristics of a rational function, one must begin by finding the roots of the rational function. In order to do this, let us factor both of the polynomials so that the roots can be easily determined. Like all polynomials, the roots will provide us with information on many key properties. The function below shows the results of factoring the above rational function, Equation.

(11.8)
f(x) = ((x + 2)(x − 2)) / ((x + 3)(x − 1))
Thus, the roots of the rational function are as follows:

Roots of the numerator are: {-2, 2}

Roots of the denominator are: {-3, 1}
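These roots are just the solutions of the two quadratics, which a few lines of code can confirm (the function name and layout here are illustrative):

```python
import math

def quadratic_roots(a, b, c):
    # Real roots of a*x^2 + b*x + c = 0 (assumes a non-negative discriminant).
    d = math.sqrt(b * b - 4 * a * c)
    return sorted([(-b - d) / (2 * a), (-b + d) / (2 * a)])

print(quadratic_roots(1, 0, -4))   # numerator   x^2 - 4:       [-2.0, 2.0]
print(quadratic_roots(1, 2, -3))   # denominator x^2 + 2x - 3:  [-3.0, 1.0]
```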

Note

In order to understand rational functions, it is essential to know and understand the roots that make up the rational function.

Discontinuities

Because we are dealing with division of two polynomials, we must be aware of the values of the variable that will cause the denominator of our fraction to be zero. When this happens, the rational function becomes undefined, i.e. we have a discontinuity in the function. Because we have already solved for our roots, it is very easy to see when this occurs. When the variable in the denominator equals any of the roots of the denominator, the function becomes undefined.

Example 11.13. 

Continuing to look at our rational function above, we can see that the function will have discontinuities at the following points:

x = −3 and x = 1
In respect to the Cartesian plane, we say that the discontinuities are the values along the x-axis where the function is undefined. These discontinuities often appear as vertical asymptotes on the graph to represent the values where the function is undefined.

Domain

Using the roots that we found above, the domain of the rational function can be easily defined.

Definition: domain

The group, or set, of values that are defined by a given function.

Example 11.14. 

Using the rational function above, the domain can be defined as any real number x where x does not equal 1 or negative 3. Written out mathematically, we get the following:

{ x ∈ ℝ : x ≠ −3, x ≠ 1 }
Intercepts

The x-intercept is defined as the point(s) where f(x) , i.e. the output of the rational functions, equals zero. Because we have already found the roots of the equation this process is very simple. From algebra, we know that the output will be zero whenever the numerator of the rational function is equal to zero. Therefore, the function will have an x-intercept wherever x equals one of the roots of the numerator.

The y-intercept occurs whenever x equals zero. This can be found by setting all the values of x equal to zero and solving the rational function.
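Both intercepts of the example function can be checked directly; assuming the factored form implied by the roots listed above, f(0) should come out to (−4)/(−3) = 4/3, and the numerator roots should give outputs of zero:

```python
def f(x):
    # The example rational function written in code: (x^2 - 4) / (x^2 + 2x - 3).
    return (x * x - 4) / (x * x + 2 * x - 3)

print(f(-2), f(2))   # numerator roots -> outputs of zero (the x-intercepts)
print(f(0))          # y-intercept: (-4)/(-3) = 4/3
```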

Rational Functions and the Laplace Transform

Rational functions often result when the Laplace transform is used to compute transfer functions for LTI systems. When using the Laplace transform to solve linear constant coefficient ordinary differential equations, partial fraction expansions of rational functions prove particularly useful. The roots of the polynomials in the numerator and denominator of the transfer function play an important role in describing system behavior. The roots of the polynomial in the numerator produce zeros of the transfer function where the system produces no output for an input of that complex frequency. The roots of the polynomial in the denominator produce poles of the transfer function where the system has natural frequencies of oscillation.

Summary

Once we have used our knowledge of rational functions to find its roots, we can manipulate a Laplace transform in a number of useful ways. We can apply this knowledge by representing an LTI system graphically through a pole-zero plot for analysis or design.

11.8. Differential Equations*

Differential Equations

It is often useful to describe systems using equations involving the rate of change in some quantity through differential equations. Recall that one important subclass of differential equations, linear constant coefficient ordinary differential equations, takes the form

(11.9) A y ( t ) = x ( t )

where A is a differential operator of the form

(11.10)
A = a_N (d^N/dt^N) + a_{N−1} (d^{N−1}/dt^{N−1}) + … + a_1 (d/dt) + a_0

The differential equation in Equation 11.9 would describe some system modeled by A with an input forcing function x(t) that produces an output solution signal y(t). However, the unilateral Laplace transform permits a solution for initial value problems to be found in what is usually a much simpler method. Specifically, it greatly simplifies the procedure for nonhomogeneous differential equations.

General Formulas for the Differential Equation

As stated briefly in the definition above, a differential equation is a very useful tool in describing and calculating the change in an output of a system described by the formula for a given input. The key property of the differential equation is its ability to help easily find the transform, H(s) , of a system. In the following two subsections, we will look at the general form of the differential equation and the general conversion to a Laplace-transform directly from the differential equation.

Conversion to Laplace-Transform

Using the definition of the Laplace transform, we can easily generalize the transfer function, H(s), for any differential equation of the general form Σ_{k=0}^{N} a_k (d^k y(t)/dt^k) = Σ_{k=0}^{M} b_k (d^k x(t)/dt^k). Below are the steps taken to convert any such differential equation into its transfer function, i.e. Laplace transform. The first step involves taking the Laplace transform of all the terms in the equation. Then we use the linearity property to pull the transform inside each summation and the time-differentiation property of the Laplace transform to change the derivatives into powers of s. Once this is done (and with the normalization a_0 = 1), we arrive at the following equation:

H(s) = Y(s)/X(s) = (Σ_{k=0}^{M} b_k s^k) / (Σ_{k=0}^{N} a_k s^k)

Conversion to Frequency Response

Once the Laplace-transform has been calculated from the differential equation, we can go one step further to define the frequency response of the system, or filter, that is being represented by the differential equation.

Note

Remember that the reason we are dealing with these formulas is to be able to aid us in filter design. An LCCDE is one of the easiest ways to represent a filter. By being able to find the frequency response, we will be able to look at the basic properties of any filter represented by a simple LCCDE.

Below is the general formula for the frequency response of a Laplace transform. The conversion is simply a matter of taking the Laplace transform formula, H(s), and replacing every instance of s with ⅈω:

H(ω) = H(s)|_{s = ⅈω}

Once you understand the derivation of this formula, look at the module concerning Filter Design from the Laplace Transform for a look into how all of these ideas of the Laplace transform, differential equation, and pole/zero plots play a role in filter design.

Solving a LCCDE

In order for a linear constant-coefficient differential equation to be useful in analyzing a LTI system, we must be able to find the system's output based upon a known input, x(t), and a set of initial conditions. Two common methods exist for solving a LCCDE: the direct method and the indirect method, the latter being based on the Laplace transform. Below we will briefly discuss the formulas for solving a LCCDE using each of these methods.

Direct Method

The final solution to the output based on the direct method is the sum of two parts, expressed in the following equation: y(t) = y_h(t) + y_p(t). The first part, y_h(t), is referred to as the homogeneous solution and the second part, y_p(t), is referred to as the particular solution. The following method is very similar to that used to solve many differential equations, so if you have taken a differential calculus course or used differential equations before then this should seem very familiar.

Homogeneous Solution

We begin by assuming that the input is zero, x(t) = 0. Now we simply need to solve the homogeneous differential equation:

Σ_{k=0}^{N} a_k (d^k y(t)/dt^k) = 0

In order to solve this, we will make the assumption that the solution is an exponential, e^(λt). Substituting y(t) = e^(λt) and factoring out the common exponential leaves a polynomial in λ, which is referred to as the characteristic polynomial. The roots of this polynomial will be the key to solving the homogeneous equation. If the roots are all distinct, then the general solution to the equation will be as follows:

y_h(t) = C_1 e^(λ_1 t) + C_2 e^(λ_2 t) + … + C_N e^(λ_N t)

However, if the characteristic equation contains repeated roots then the above general solution will be slightly different. Below we have the modified version for an equation where λ_1 is repeated K times:

y_h(t) = C_1 e^(λ_1 t) + C_2 t e^(λ_1 t) + … + C_K t^(K−1) e^(λ_1 t) + C_{K+1} e^(λ_2 t) + … + C_N e^(λ_N t)
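As a concrete, hypothetical instance (not an example from the text): for y'' + 3y' + 2y = 0 the characteristic polynomial is λ² + 3λ + 2 = (λ + 1)(λ + 2), with roots −1 and −2, so y_h(t) = C1 e^(−t) + C2 e^(−2t). The sketch below checks such a candidate solution by finite differences:

```python
import math

# Hypothetical LCCDE (not from the text): y'' + 3y' + 2y = 0.
# Characteristic polynomial roots: -1 and -2, so
# y_h(t) = C1 e^(-t) + C2 e^(-2t) for any constants C1, C2.

def y(t, C1=1.0, C2=-1.0):
    return C1 * math.exp(-t) + C2 * math.exp(-2.0 * t)

# Verify by central finite differences that y'' + 3y' + 2y is ~0.
h = 1e-4
residuals = []
for t in (0.0, 0.5, 1.0):
    y1 = (y(t + h) - y(t - h)) / (2 * h)              # y'(t)
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)   # y''(t)
    residuals.append(y2 + 3 * y1 + 2 * y(t))
print(residuals)   # all residuals near zero
```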

Particular Solution

The particular solution, y p (t) , will be any solution that solves the general differential equation. In order to solve it, our guess for the solution y p (t) will take on the form of the input, x(t) . After guessing at a solution of this form, one only needs to plug it into the differential equation and solve for the unknown coefficients.

Indirect Method

The indirect method utilizes the relationship between the differential equation and the Laplace transform, discussed earlier, to find a solution. The basic idea is to transform the differential equation into the Laplace domain, as described above, to get the resulting output, Y(s) . Then by inverse transforming this and using partial-fraction expansion, we can arrive at the solution.

Recall the differentiation property of the unilateral Laplace transform:

(11.11)
$$\mathcal{L}\left\{\frac{d}{dt} y(t)\right\} = sY(s) - y(0)$$

This can be iteratively extended to an arbitrary order derivative as in Equation 11.12.

(11.12)
$$\mathcal{L}\left\{\frac{d^n}{dt^n} y(t)\right\} = s^n Y(s) - \sum_{m=0}^{n-1} s^{n-1-m} y^{(m)}(0)$$

Now, the Laplace transform of each side of the differential equation can be taken. For a differential equation in the general form $\sum_{k=0}^{N} a_k y^{(k)}(t) = \sum_{k=0}^{M} b_k x^{(k)}(t)$ , this gives

(11.13)
$$\mathcal{L}\left\{\sum_{k=0}^{N} a_k y^{(k)}(t)\right\} = \mathcal{L}\left\{\sum_{k=0}^{M} b_k x^{(k)}(t)\right\}$$

which by linearity results in

(11.14)
$$\sum_{k=0}^{N} a_k \mathcal{L}\left\{y^{(k)}(t)\right\} = \sum_{k=0}^{M} b_k \mathcal{L}\left\{x^{(k)}(t)\right\}$$

and by the differentiation properties (assuming the input and its derivatives are zero at t = 0 ) in

(11.15)
$$\sum_{k=0}^{N} a_k \left( s^k Y(s) - \sum_{m=0}^{k-1} s^{k-1-m} y^{(m)}(0) \right) = \sum_{k=0}^{M} b_k s^k X(s)$$

Rearranging terms to isolate the Laplace transform of the output,

(11.16)
$$Y(s) \sum_{k=0}^{N} a_k s^k = X(s) \sum_{k=0}^{M} b_k s^k + \sum_{k=0}^{N} a_k \sum_{m=0}^{k-1} s^{k-1-m} y^{(m)}(0)$$

Thus, it is found that

(11.17)
$$Y(s) = \frac{X(s) \sum_{k=0}^{M} b_k s^k + \sum_{k=0}^{N} a_k \sum_{m=0}^{k-1} s^{k-1-m} y^{(m)}(0)}{\sum_{k=0}^{N} a_k s^k}$$

In order to find the output, it only remains to find the Laplace transform X(s) of the input, substitute the initial conditions, and compute the inverse Laplace transform of the result. Partial fraction expansions are often required for this last step. This may sound daunting while looking at Equation 11.17, but it is often easy in practice, especially for low order differential equations. Equation 11.17 can also be used to determine the transfer function and frequency response.
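The partial-fraction step can be carried out numerically with `scipy.signal.residue`. A minimal sketch, using an assumed illustrative function Y(s) = 1 / ((s + 1)(s + 2)) rather than the example from the text:

```python
from scipy.signal import residue

# Assumed illustrative Laplace-domain output: Y(s) = 1 / ((s+1)(s+2))
b = [1]          # numerator coefficients of Y(s)
a = [1, 3, 2]    # denominator: (s+1)(s+2) = s^2 + 3s + 2

# Partial-fraction expansion: Y(s) = r[0]/(s - p[0]) + r[1]/(s - p[1]) + k(s)
r, p, k = residue(b, a)

# Each term r_i/(s - p_i) inverse-transforms to r_i * exp(p_i * t), t >= 0
print(r, p, k)
```

Here the expansion gives residues of -1 at the pole -2 and +1 at the pole -1, so y(t) = e^{-t} - e^{-2t} for t >= 0.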

As an example, consider the differential equation

(11.18)

with the initial conditions y ' (0) = 1 and y(0) = 0 . Using the method described above, the Laplace transform of the solution y(t) is given by

(11.19)

Performing a partial fraction decomposition, this also equals

(11.20)

Computing the inverse Laplace transform,

(11.21)

One can check that this satisfies both the differential equation and the initial conditions.

Summary

One of the most important concepts of signal processing is to be able to properly represent the input/output relationship of a given LTI system. A linear constant-coefficient differential equation (LCCDE) serves as a way to express just this relationship in a continuous-time system. Writing the relationship between inputs and outputs, which represents the characteristics of the LTI system, as a differential equation helps in understanding and manipulating a system.

11.9. Continuous Time Filter Design*

Introduction

Analog (Continuous-Time) filters are useful for a wide variety of applications, and are especially useful in that they are very simple to build using standard, passive R,L,C components. Having a grounding in basic filter design theory can assist one in solving a wide variety of signal processing problems.

Estimating Frequency Response from Z-Plane

One of the motivating factors for analyzing pole/zero plots is their relationship to the frequency response of the system. Based on the positions of the poles and zeros, one can quickly determine the frequency response. This is a result of the correspondence between the frequency response and the transfer function evaluated on the unit circle in the pole/zero plot. The frequency response, or DTFT, of the system is the transfer function evaluated at z = e^{ⅈw} .

By factoring the transfer function into poles and zeros, the magnitude of the frequency response can be written as a ratio of products of terms of the form | e^{ⅈw} − h | , where h is either a zero, denoted by c k , or a pole, denoted by d k . Vectors are commonly used to represent each term and its parts on the complex plane. The pole or zero, h , is a vector from the origin to its location anywhere on the complex plane, and e^{ⅈw} is a vector from the origin to its location on the unit circle. The vector difference, e^{ⅈw} − h , connects the pole or zero location to a point on the unit circle that depends on the value of w . From this, we can see that the magnitude of the frequency response is a ratio of the distances to the zeros and the distances to the poles as w goes from zero to π .

In conclusion, using the distances from the unit circle to the poles and zeros, we can plot the frequency response of the system. As w goes from 0 to 2π , the following two properties specify how one should draw |H(w)| .

While moving around the unit circle...

  1. if close to a zero, then the magnitude is small. If a zero is on the unit circle, then the frequency response is zero at that point.

  2. if close to a pole, then the magnitude is large. If a pole is on the unit circle, then the frequency response goes to infinity at that point.

Drawing Frequency Response from Pole/Zero Plot

Let us now look at several examples of determining the magnitude of the frequency response from the pole/zero plot of a z-transform. If you have forgotten or are unfamiliar with pole/zero plots, please refer back to the Pole/Zero Plots module.

Example 11.15. 

In this first example we will take a look at the very simple z-transform shown below:

H(z) = 1 + z^{−1}
H(w) = 1 + e^{−ⅈw}

For this example, some of the vectors represented by | e^{ⅈw} − h | , for random values of w , are explicitly drawn onto the complex plane shown in the figure below. These vectors show how the amplitude of the frequency response changes as w goes from 0 to 2π , and also show the physical meaning of the terms discussed above. One can see that when w = 0 , the vector is the longest and thus the frequency response will have its largest amplitude here. As w approaches π , the length of the vector decreases, as does the amplitude of |H(w)| . Since the only pole lies at the origin, a constant distance of 1 from the unit circle, there is effectively only this one vector term rather than a ratio of distances.

Figure 11.26. Pole/Zero Plot

Pole/Zero Plot (filt_eg1_pz.jpg)
(a)
Frequency Response: |H(w)| (filt_eg1_fig.jpg)
(b)
The first figure represents the pole/zero plot with a few representative vectors graphed while the second shows the frequency response with a peak at +2 and graphed between plus and minus π .
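The geometric picture in Example 11.15 is easy to confirm numerically by evaluating H(w) = 1 + e^{−ⅈw} at a few frequencies:

```python
import numpy as np

# Evaluate the frequency response H(w) = 1 + e^{-jw} from Example 11.15
w = np.array([0.0, np.pi / 2, np.pi])
H = 1 + np.exp(-1j * w)
mag = np.abs(H)

# |H(0)| = 2: the vector from the zero at -1 to z = 1 is at its longest.
# |H(pi)| = 0: e^{jw} sits exactly on the zero, so the response vanishes.
print(mag)
```

This matches the plotted peak of +2 at w = 0 and the null at w = π described in the figure caption.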


Example 11.16. 

For this example, a more complex transfer function is analyzed in order to represent the system's frequency response.

Below we can see the two figures described by the above equations. Figure 11.27(a) represents the basic pole/zero plot of the z-transform, H(w) , while Figure 11.27(b) shows the magnitude of the frequency response. From the formulas and statements in the previous section, we can see that when w = 0 the frequency response will peak, since it is at this value of w that the pole is closest to the unit circle. The ratio of distances from the unit circle to the zeros and poles explains the mathematics behind this conclusion. As w moves from 0 to π , we see how the zero begins to mask the effects of the pole and thus forces the frequency response closer to 0.

Figure 11.27. Pole/Zero Plot

Pole/Zero Plot (filt_eg2_pz.jpg)
(a)
Frequency Response: |H(w)| (filt_eg2_freq.jpg)
(b)
The first figure represents the pole/zero plot while the second shows the frequency response with a peak at +2 and graphed between plus and minus π .


Types of Filters

Butterworth Filters

The Butterworth filter is the simplest filter. It can be constructed out of passive R, L, C circuits. The magnitude of the transfer function for this filter is

(11.22)
$$|H(\mathrm{i}\omega)| = \frac{1}{\sqrt{1 + \left(\frac{\omega}{\omega_c}\right)^{2n}}}$$
Magnitude of Butterworth Filter Transfer Function

where n is the order of the filter and ω c is the cutoff frequency. The cutoff frequency is the frequency where the magnitude experiences a 3 dB dropoff (where |H(ⅈω c )| = 1/√2 ).

Figure 11.28. 

Figure (bwFreq2a.jpg)
Three different orders of lowpass Butterworth analog filters: n = {1, 4, 10} . As n increases, the filter more closely approximates an ideal brickwall lowpass response.

The important aspects of Figure 11.28 are that it does not ripple in the passband or stopband as other filters tend to, and that the larger n , the sharper the cutoff (the smaller the transition band).
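The 3 dB property at the cutoff can be verified with an analog Butterworth design from `scipy.signal`. A brief sketch, using an assumed order n = 4 and a normalized cutoff of 1 rad/s:

```python
import numpy as np
from scipy.signal import butter, freqs

# Analog (continuous-time) Butterworth lowpass: assumed order 4, wc = 1 rad/s
b, a = butter(4, 1.0, btype='low', analog=True)

# Evaluate H(jw) exactly at the cutoff frequency
w, H = freqs(b, a, worN=[1.0])
gain_db = 20 * np.log10(np.abs(H[0]))

# Magnitude at wc is 1/sqrt(2), i.e. about -3.01 dB, regardless of order n
print(round(gain_db, 2))
```

Repeating this with other orders shows the same -3 dB value at ω c, while the stopband rolloff steepens as n grows.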

Butterworth filters give transfer functions ( H(ⅈω) and H(s) ) that are rational functions. They also have only poles, resulting in an all-pole transfer function of the form

$$H(s) = \frac{A}{(s - s_1)(s - s_2) \cdots (s - s_n)}$$

and a pole-zero plot of

Figure 11.29. 

Figure (bwSPlane2a.jpg)
Poles of the magnitude-squared function of a fifth-order ( n = 5 ) lowpass Butterworth filter; the 2n = 10 poles of H(s)H(−s) are shown.

Note that the poles lie along a circle in the s-plane.

Chebyshev Filters

The Butterworth filter does not give a sufficiently good approximation across the complete passband in many cases. The Taylor's series approximation is often not suited to the way specifications are given for filters. An alternate error measure is the maximum of the absolute value of the difference between the actual filter response and the ideal. This is considered over the total passband. This is the Chebyshev error measure and was defined and applied to the FIR filter design problem. For the IIR filter, the Chebyshev error is minimized over the passband and a Taylor's series approximation at ω = ∞ is used to determine the stopband performance. This mixture of methods in the IIR case is called the Chebyshev filter, and simple design formulas result, just as for the Butterworth filter.

The design of Chebyshev filters is particularly interesting, because the results of a very elegant theory ensure that constructing a frequency-response function with the proper form of equal ripple in the error will result in a minimum Chebyshev error without explicitly minimizing anything. This allows a straightforward set of design formulas to be derived, which can be viewed as a generalization of the Butterworth formulas.

The form for the magnitude squared of the frequency-response function for the Chebyshev filter is

(11.23)
$$|H(\mathrm{i}\omega)|^2 = \frac{1}{1 + \epsilon^2 C_N^2(\omega)}$$

where C N (ω) is an Nth-order Chebyshev polynomial and ϵ is a parameter that controls the ripple size. This polynomial in ω has very special characteristics that result in the optimality of the response function (Equation 11.23).
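The equal-ripple behavior comes from the Chebyshev recursion C_{N+1}(ω) = 2ω C_N(ω) − C_{N−1}(ω), with C_0 = 1 and C_1 = ω. A short numerical check (the normalized passband edge at ω = 1 is an assumption):

```python
import numpy as np

def cheb_poly(N, w):
    """Evaluate the Nth-order Chebyshev polynomial C_N(w) by recursion."""
    c_prev, c = np.ones_like(w), w.copy()
    if N == 0:
        return c_prev
    for _ in range(N - 1):
        # Recursion: C_{k+1}(w) = 2w C_k(w) - C_{k-1}(w)
        c_prev, c = c, 2 * w * c - c_prev
    return c

# Sample the assumed normalized passband 0 <= w <= 1
w = np.linspace(0, 1, 1001)
C5 = cheb_poly(5, w)

# Equal ripple: C_N oscillates between -1 and +1 across the entire passband,
# which is what produces the equal-ripple passband error in Equation 11.23
print(C5.max(), C5.min())
```

Because |C_N(ω)| ≤ 1 in the passband, the magnitude squared in Equation 11.23 ripples between 1 and 1/(1 + ε²) there.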

Figure 11.30. 

Fifth Order Chebyshev Filter Frequency Response


Bessel filters

The Bessel filter is designed for a maximally flat group delay (i.e., a maximally linear phase response) across the passband, at the cost of a slower magnitude rolloff than the other classical designs.

Elliptic Filters

There is yet another method that has been developed that uses a Chebyshev error criterion in both the passband and the stopband. This is the fourth possible combination of Chebyshev and Taylor's series approximations in the passband and stopband. The resulting filter is called an elliptic-function filter, because elliptic functions are normally used to calculate the pole and zero locations. It is also sometimes called a Cauer filter or a rational Chebyshev filter, and it has equal ripple approximation error in both pass and stopbands.

The error criteria of the elliptic-function filter are particularly well suited to the way specifications for filters are often given. For that reason, use of the elliptic-function filter design usually gives the lowest order filter of the four classical filter design methods for a given set of specifications. Unfortunately, the design of this filter is the most complicated of the four. However, because of the efficiency of this class of filters, it is worthwhile gaining some understanding of the mathematics behind the design procedure.
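The claim that the elliptic-function filter usually needs the lowest order can be checked with the order-estimation helpers in `scipy.signal`. A sketch, where the specific specification (1 dB passband ripple to 1 rad/s, 40 dB stopband attenuation from 1.5 rad/s) is an illustrative assumption:

```python
from scipy.signal import buttord, cheb1ord, ellipord

# Assumed analog lowpass specification (normalized frequencies, rad/s)
wp, ws = 1.0, 1.5        # passband and stopband edges
gpass, gstop = 1, 40     # max passband ripple (dB), min stopband attenuation (dB)

n_butter, _ = buttord(wp, ws, gpass, gstop, analog=True)
n_cheby, _ = cheb1ord(wp, ws, gpass, gstop, analog=True)
n_ellip, _ = ellipord(wp, ws, gpass, gstop, analog=True)

# The elliptic design typically meets the same spec with the lowest order
print(n_butter, n_cheby, n_ellip)
```

For a tight transition band like this one, the Butterworth order is substantially larger than the Chebyshev order, which in turn exceeds the elliptic order.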

This section sketches an outline of the theory of elliptic-function filter design. The details and properties of the elliptic functions themselves should simply be accepted, and attention put on understanding the overall picture. A more complete development is available in the filter design literature.

Because both the passband and stopband approximations are over the entire bands, a transition band between the two must be defined. Using a normalized passband edge, the bands are defined by

(11.24)
0 ≤ ω ≤ 1 (passband)

(11.25)
1 < ω < ω s (transition band)

(11.26)
ω s ≤ ω < ∞ (stopband)

This is illustrated in Figure 11.31.

Figure 11.31. 

Third Order Analog Elliptic Function Lowpass Filter showing the Ripples and Band Edges

The characteristics of the elliptic function filter are best described in terms of the four parameters that specify the frequency response:

  1. The maximum variation or ripple in the passband δ 1 ,

  2. The width of the transition band ,

  3. The maximum response or ripple in the stopband δ 2 , and

  4. The order of the filter N .

The result of the design is that for any three of the parameters given, the fourth is minimum. This is a very flexible and powerful description of a filter frequency response.

The form of the frequency-response function is a generalization of that for the Chebyshev filter

(11.27)
$$|F(\mathrm{i}\omega)|^2 = \frac{1}{1 + \epsilon^2 G^2(\omega)}$$

where

(11.28)
$$FF(s) = F(s)F(-s)$$

with F(s) being the prototype analog filter transfer function similar to that for the Chebyshev filter. G(ω) is a rational function that approximates zero in the passband and infinity in the stopband. The definition of this function is a generalization of the definition of the Chebyshev polynomial.


Conclusion

As can be seen, there is a large amount of information available in filter design, more than an introductory module can cover. Even when designing discrete-time IIR filters, it is important to remember that there is a far larger body of literature for design methods in the analog signal processing world than there is for the digital. Therefore, it is often easier and more practical to design a filter using standard analog methods, and then discretize it using methods such as the bilinear transform.
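That analog-first workflow can be sketched with `scipy.signal.bilinear`, which maps an analog transfer function to a digital one. The sample rate and cutoff below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, bilinear, freqz

# Assumed sample rate and analog cutoff frequency
fs = 100.0                   # Hz
wc = 2 * np.pi * 10          # 10 Hz cutoff, in rad/s

# Step 1: design an analog Butterworth prototype (order 2, for illustration)
b_a, a_a = butter(2, wc, btype='low', analog=True)

# Step 2: discretize via the bilinear transform s = 2*fs*(z-1)/(z+1)
b_d, a_d = bilinear(b_a, a_a, fs)

# The bilinear transform maps s = 0 to z = 1, so the digital filter's
# DC gain matches the analog prototype's unity DC gain
w, H = freqz(b_d, a_d, worN=[0.0])
print(round(abs(H[0]), 6))
```

Note that the bilinear transform warps the frequency axis, so in practice the analog cutoff is usually prewarped before discretization; the DC point shown here is unaffected by warping.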