The Laplace transform is a generalization of the Continuous-Time Fourier Transform (CTFT). It is used because the CTFT does not converge or exist for many important signals for which the Laplace transform does (e.g., signals with infinite L² norm), and because it is often notationally cleaner than the CTFT. Instead of using complex exponentials of the form ⅇ^(ⅈωt), with purely imaginary parameters, the Laplace transform uses the more general ⅇ^(st), where s = σ + ⅈω is complex, to analyze signals in terms of exponentially weighted sinusoids.
Although Laplace transforms are rarely solved in practice using integration (tables and computers (e.g. Matlab) are much more common), we will provide the bilateral Laplace transform pair here for purposes of discussion and derivation. These define the forward and inverse Laplace transformations. Notice the similarities between the forward and inverse transforms. This will give rise to many of the same symmetries found in Fourier analysis.
We have defined the bilateral Laplace transform. There is also a unilateral Laplace transform, which is useful for solving differential equations with nonzero initial conditions. This is similar to the unilateral Z-transform in discrete time.
Taking a look at the equations describing the Laplace transform and the Continuous-Time Fourier Transform, we can see many similarities; first, that F(ω) = F(s) whenever s = ⅈω. The CTFT is a complex-valued function of a real-valued variable ω. The Laplace transform is a complex-valued function of a complex-valued variable s.
Figure 11.1. Plots
With the Fourier transform, we had a complex-valued function of a purely imaginary variable, F(ⅈω) . This was something we could envision with two 2-dimensional plots (real and imaginary parts or magnitude and phase). However, with Laplace, we have a complex-valued function of a complex variable. In order to examine the magnitude and phase or real and imaginary parts of this function, we must examine 3-dimensional surface plots of each component.
Figure 11.2. real and imaginary sample plots
Figure 11.3. magnitude and phase sample plots
While these are legitimate ways of looking at a signal in the Laplace domain, they are quite difficult to draw and analyze. For this reason, a simpler method has been developed. Although it will not be discussed in detail here, the method of poles and zeros is much easier to understand and is the way both the Laplace transform and its discrete-time counterpart, the Z-transform, are represented graphically.
Using a computer to find Laplace transforms is relatively painless. Matlab has two functions, laplace and ilaplace, both part of the symbolic toolbox, that find the Laplace and inverse Laplace transforms respectively. This method is generally preferred for more complicated functions. Simpler and more contrived functions are usually found easily enough by using tables.
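The same workflow can be sketched outside MATLAB. As a hypothetical alternative, SymPy's `laplace_transform` and `inverse_laplace_transform` play the roles of `laplace` and `ilaplace`; the signal ⅇ^(−2t)u(t) here is just an illustrative choice:

```python
# Sketch: SymPy's symbolic engine standing in for MATLAB's symbolic toolbox.
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Forward transform of the causal exponential e**(-2t) u(t)
F = sp.laplace_transform(sp.exp(-2*t), t, s, noconds=True)
print(F)  # 1/(s + 2)

# Inverse transform recovers the time-domain signal (times the unit step)
f = sp.inverse_laplace_transform(1/(s + 2), s, t)
print(f)
```

As with `ilaplace`, the inverse transform is returned as a causal signal, implicitly multiplied by the unit step.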
Figure 11.4.
Figure 11.5. Khan Lecture on Laplace
The Laplace transform provides a useful, more general form of the Continuous-Time Fourier Transform. It applies equally well to describing systems and signals using the eigenfunction method, and to describing a larger class of signals better handled by the pole-zero method.
Signal | Laplace Transform | Region of Convergence |
---|---|---|
δ(t) | 1 | All s |
δ(t − T) | ⅇ^(−sT) | All s |
u(t) | 1/s | Re(s) > 0 |
−u(−t) | 1/s | Re(s) < 0 |
t u(t) | 1/s^2 | Re(s) > 0 |
t^n u(t) | n!/s^(n+1) | Re(s) > 0 |
−t^n u(−t) | n!/s^(n+1) | Re(s) < 0 |
ⅇ^(−λt) u(t) | 1/(s + λ) | Re(s) > −λ |
−ⅇ^(−λt) u(−t) | 1/(s + λ) | Re(s) < −λ |
t ⅇ^(−λt) u(t) | 1/(s + λ)^2 | Re(s) > −λ |
t^n ⅇ^(−λt) u(t) | n!/(s + λ)^(n+1) | Re(s) > −λ |
−t^n ⅇ^(−λt) u(−t) | n!/(s + λ)^(n+1) | Re(s) < −λ |
cos(bt) u(t) | s/(s^2 + b^2) | Re(s) > 0 |
sin(bt) u(t) | b/(s^2 + b^2) | Re(s) > 0 |
ⅇ^(−at) cos(bt) u(t) | (s + a)/((s + a)^2 + b^2) | Re(s) > −a |
ⅇ^(−at) sin(bt) u(t) | b/((s + a)^2 + b^2) | Re(s) > −a |
(d^n/dt^n) δ(t) | s^n | All s |
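Two of the standard pairs tabulated above can be checked symbolically; this is a sketch using SymPy (any symbolic engine would do):

```python
# Verify two common Laplace-transform table entries symbolically.
import sympy as sp

t, s, lam, b = sp.symbols('t s lam b', positive=True)

# Pair: e**(-lam*t) u(t)  <->  1/(s + lam)
F1 = sp.laplace_transform(sp.exp(-lam*t), t, s, noconds=True)

# Pair: cos(b*t) u(t)  <->  s/(s**2 + b**2)
F2 = sp.laplace_transform(sp.cos(b*t), t, s, noconds=True)

print(F1)
print(F2)
```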
Property | Signal | Laplace Transform | Region of Convergence |
---|---|---|---|
Linearity | α x₁(t) + β x₂(t) | α X₁(s) + β X₂(s) | At least ROC₁ ⋂ ROC₂ |
Time Shifting | x(t − τ) | ⅇ^(−sτ) X(s) | ROC |
Frequency Shifting (modulation) | ⅇ^(ηt) x(t) | X(s − η) | Shifted ROC (s − η must be in the region of convergence) |
Time Scaling | x(αt) | (1/∣α∣) X(s/α) | Scaled ROC (s/α must be in the region of convergence) |
Conjugation | x*(t) | X*(s*) | ROC |
Convolution | x₁(t) * x₂(t) | X₁(s) X₂(s) | At least ROC₁ ⋂ ROC₂ |
Time Differentiation | (d/dt) x(t) | s X(s) | At least ROC |
Frequency Differentiation | (−t) x(t) | (d/ds) X(s) | ROC |
Integration in Time | ∫₋∞^t x(τ) dτ | (1/s) X(s) | At least ROC ⋂ Re(s) > 0 |
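A property from the table can be spot-checked the same way. This sketch verifies the time-differentiation property, using a signal with x(0) = 0 so the bilateral form s X(s) applies cleanly (the choice x(t) = t ⅇ^(−t) u(t) is just an illustrative one):

```python
# Check the time-differentiation property  d/dt x(t) <-> s X(s).
import sympy as sp

t, s = sp.symbols('t s', positive=True)

x = t*sp.exp(-t)          # a causal signal with x(0) = 0
X = sp.laplace_transform(x, t, s, noconds=True)

lhs = sp.laplace_transform(sp.diff(x, t), t, s, noconds=True)
print(sp.simplify(lhs - s*X))  # 0
```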
When using the Laplace-transform it is often useful to be able to find h(t) given H(s) . There are at least 4 different methods to do this:
This "method" amounts to becoming familiar with the Laplace-transform pair tables and then "reverse engineering" the entries.
Example 11.1.
When given H(s) = 1/(s + α) with an ROC of Re(s) > −α, we could determine "by inspection" that h(t) = ⅇ^(−αt) u(t).
When dealing with linear time-invariant systems, the Laplace-transform is often of the rational form H(s) = P(s)/Q(s), the ratio of two polynomials in s. This can also be expressed in factored form, where c_k represents the nonzero zeros of H(s) and d_k represents the nonzero poles.
If M < N then H(s) can be represented as a sum of simple terms of the form A_k/(s − d_k). This form allows for easy inversion of each term of the sum using the inspection method and the transform table. If H(s) is not already in this form, it becomes necessary to use partial-fraction expansion to put it there. If M ≥ N then H(s) must first be reduced, by long division, to a polynomial plus a strictly proper rational part.
Example 11.2.
Find the inverse Laplace-transform of H(s), where the ROC is Re(s) > 2. In this case M = N = 2, so we have to use long division first. Next factor the denominator, and then do partial-fraction expansion. Each term can now be inverted using the inspection method and the Laplace-transform table, giving h(t) for the ROC Re(s) > 2.
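The example's equations did not survive, but the same partial-fraction workflow can be sketched on a hypothetical second-order rational transform with ROC Re(s) > −1, so every term inverts causally:

```python
# Partial-fraction expansion followed by term-by-term inversion.
import sympy as sp

t, s = sp.symbols('t s')

# Hypothetical transform standing in for the example's lost H(s).
H = (3*s + 5) / ((s + 1)*(s + 2))

H_pf = sp.apart(H, s)
print(H_pf)              # 2/(s + 1) + 1/(s + 2)

h = sp.inverse_laplace_transform(H_pf, s, t)
print(h)                 # a sum of causal exponentials
```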
Figure 11.7. Khan Lecture on Partial Fraction Expansion
When the Laplace-transform is defined as a power series, each term of the signal h(t) can be determined by looking at the coefficient of the respective power of s^(−t).
Example 11.3.
Now look at the Laplace-transform of a finite-length signal. In this case, since there are no poles, we simply multiply out the factors of H(s). Then, by inspection, each term of h(t) can be read off from the coefficients.
One of the advantages of the power series expansion method is that many functions encountered in engineering problems have tabulated power series. Thus functions such as the logarithm, sine, exponential, hyperbolic sine, etc., can be easily inverted.
Example 11.4.
Suppose H(s) = log(1 + α s^(−1)). Noting the power series log(1 + x) = ∑_(n=1)^∞ (−1)^(n+1) x^n / n, we can expand H(s) in powers of s^(−1) and read off the terms of h one by one.
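The series expansion used here can be generated symbolically. In this sketch, x stands in for s^(−1):

```python
# Expand log(1 + a*x) as a power series; the coefficient of x**n
# is (-1)**(n+1) * a**n / n, matching the tabulated series.
import sympy as sp

a, x = sp.symbols('a x')

series = sp.log(1 + a*x).series(x, 0, 4).removeO()
print(series)  # a*x - a**2*x**2/2 + a**3*x**3/3
```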
Without going into too much detail, the inverse can also be computed as the contour integral h(t) = (1/(2πⅈ)) ∮_r H(s) ⅇ^(st) ds, where r is a counterclockwise contour in the ROC of H(s) encircling the origin of the s-plane. Further expanding on this method of finding the inverse requires knowledge of complex variable theory, and thus it will not be addressed in this module.
The inverse Laplace-transform is very useful to know for the purposes of designing a filter, and there are many ways to calculate it, drawing from many disparate areas of mathematics. All nevertheless assist the user in reaching the desired time-domain signal that can then be synthesized in hardware (or software) for implementation in a real-world filter.
It is quite difficult to qualitatively analyze the Laplace transform and Z-transform, since mappings of their magnitude and phase or real part and imaginary part result in multiple mappings of 2-dimensional surfaces in 3-dimensional space. For this reason, it is very common to examine a plot of a transfer function's poles and zeros to try to gain a qualitative idea of what a system does.
Once the Laplace-transform of a system has been determined, one can use the information contained in the function's polynomials to graphically represent the function and easily observe many defining characteristics. The Laplace-transform will have the below structure, based on rational functions: H(s) = P(s)/Q(s).
The two polynomials, P(s) and Q(s), allow us to find the poles and zeros of the Laplace-Transform.
Zeros:
1. The value(s) for s where P(s) = 0.
2. The complex frequencies that make the overall gain of the filter transfer function zero.
Poles:
1. The value(s) for s where Q(s) = 0.
2. The complex frequencies that make the overall gain of the filter transfer function infinite.
Example 11.5.
Below is a simple transfer function with the poles and zeros shown below it.
The zeros are: { – 1}
The poles are:
Once the poles and zeros have been found for a given Laplace transform, they can be plotted onto the S-plane. The S-plane is a complex plane with an imaginary and real axis referring to the complex-valued variable s. The position on the complex plane is given by rⅇ^(ⅈθ), and the angle from the positive real axis around the plane is denoted by θ. When mapping poles and zeros onto the plane, poles are denoted by an "x" and zeros by an "o". The below figure shows the S-plane, and examples of plotting zeros and poles onto the plane can be found in the following section.
Figure 11.9. S-Plane
This section lists several examples of finding the poles and zeros of a transfer function and then plotting them onto the S-Plane.
Example 11.6. Simple Pole/Zero Plot
The zeros are: {0}
The poles are:
Figure 11.10. Pole/Zero Plot
Example 11.7. Complex Pole/Zero Plot
The zeros are: {ⅈ, – ⅈ}
The poles are:
Figure 11.11. Pole/Zero Plot
Example 11.8. Pole-Zero Cancellation
An easy mistake to make with regard to poles and zeros is to think that a function like ((s − 1)(s + 3))/(s − 1) is the same as s + 3. In theory they are equivalent, as the pole and zero at s = 1 cancel each other out in what is known as pole-zero cancellation. However, think about what may happen if this were a transfer function of a system that was created with physical circuits. In this case, it is very unlikely that the pole and zero would remain in exactly the same place. A minor temperature change, for instance, could cause one of them to move just slightly. If this were to occur, a tremendous amount of volatility is created in that area, since there is a change from infinity at the pole to zero at the zero over a very small range of frequencies. This is generally a very bad way to try to eliminate a pole. A much better way is to use control theory to move the pole to a better place.
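This fragility is easy to see numerically. The sketch below nudges the pole of the (hypothetical) cancelled pair in H(s) = (s − 1)(s + 3)/(s − 1) slightly away from the zero and evaluates the response near s = 1:

```python
# Numeric sketch of why pole-zero cancellation is fragile.
def H_ideal(s):
    return s + 3.0                      # (s-1)(s+3)/(s-1) after exact cancellation

def H_drifted(s, eps=1e-3):
    # component drift has moved the pole from 1 to 1 + eps
    return (s - 1.0) * (s + 3.0) / (s - (1.0 + eps))

# At the zero (s = 1) the drifted response collapses to 0 ...
print(H_ideal(1.0), abs(H_drifted(1.0)))
# ... while just past the drifted pole it blows up.
near_pole = 1.0 + 1e-3 + 1e-7
print(H_ideal(near_pole), abs(H_drifted(near_pole)))
```

The ideal response is a smooth value near 4 throughout, while the drifted response swings between 0 and a very large magnitude over a tiny neighborhood.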
It is possible to have more than one pole or zero at any given point. For instance, the discrete-time transfer function H(z) = z² will have two zeros at the origin and the continuous-time function will have 25 poles at the origin.
MATLAB - If access to MATLAB is readily available, then you can use its functions to easily create pole/zero plots. Below is a short program that plots the poles and zeros from the above example onto the Z-Plane.
% Set up vector for zeros
z = [j; -j];

% Set up vector for poles
p = [-1; .5+.5j; .5-.5j];

figure(1);
zplane(z, p);
title('Pole/Zero Plot for Complex Pole/Zero Plot Example');
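As a rough Python analog (a sketch, without the plot that MATLAB's zplane draws), the same zeros and poles can be recovered numerically from polynomial coefficients with numpy.roots:

```python
# Recover the example's zeros and poles from polynomial coefficients.
import numpy as np

# zeros of z**2 + 1 are +/- j
zeros = np.roots([1, 0, 1])
# poles of z**3 - 0.5z + 0.5 = (z + 1)(z**2 - z + 0.5) are -1 and 0.5 +/- 0.5j
poles = np.roots([1, 0, -0.5, 0.5])

print(np.sort_complex(zeros))
print(np.sort_complex(poles))
```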
Figure 11.12.
Now that we have found and plotted the poles and zeros, we must ask what it is that this plot gives us. Basically what we can gather from this is that the magnitude of the transfer function will be larger when it is closer to the poles and smaller when it is closer to the zeros. This provides us with a qualitative understanding of what the system does at various frequencies and is crucial to the discussion of stability.
The region of convergence (ROC) for H(s) in the complex s-plane can be determined from the pole/zero plot. Although several regions of convergence may be possible, where each one corresponds to a different impulse response, some choices are more practical. A ROC can be chosen to make the transfer function causal and/or stable depending on the pole/zero plot.
Filter Properties from ROC
If the ROC extends to the right of the rightmost pole, then the system is causal.
If the ROC includes the ⅈω-axis, then the system is stable.
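For a causal continuous-time system, whose ROC lies to the right of the rightmost pole, both conditions reduce to a simple check on pole locations; a minimal sketch:

```python
# For a causal system, BIBO stability holds exactly when every pole has a
# negative real part, so the ROC (right of the rightmost pole) contains
# the j*omega axis.
def causal_and_stable(poles):
    return all(p.real < 0 for p in poles)

print(causal_and_stable([-1 + 0j, -0.5 + 2j, -0.5 - 2j]))  # True
print(causal_and_stable([-1 + 0j, 0.5 + 0.5j]))            # False
```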
Below is a pole/zero plot with a possible ROC of the transform in the Simple Pole/Zero Plot discussed earlier. The shaded region indicates the ROC chosen for the filter. From this figure, we can see that the filter will be both causal and stable since the above listed conditions are both met.
Example 11.9.
Figure 11.13. Region of Convergence for the Pole/Zero Plot
The reason it is helpful to understand and create these pole/zero plots is due to their ability to help us easily design a filter. Based on the location of the poles and zeros, the magnitude response of the filter can be quickly understood. Also, by starting with the pole/zero plot, one can design a filter and obtain its transfer function very easily.
Pole-zero plots are clearly quite useful in the study of the Laplace and Z transforms, affording us a method of visualizing these at times confusing mathematical functions.
With the Laplace transform, the s-plane represents a set of signals (complex exponentials). For any given LTI system, some of these signals may cause the output of the system to converge, while others cause the output to diverge ("blow up"). The set of signals that cause the system's output to converge lie in the region of convergence (ROC). This module will discuss how to find this region of convergence for any continuous-time, LTI system.
The region of convergence, known as the ROC, is important to understand because it defines the region where the Laplace transform exists. The Laplace transform of a signal h(t) is defined as H(s) = ∫₋∞^∞ h(t) ⅇ^(−st) dt. The ROC for a given h(t) is defined as the range of s for which the Laplace transform converges. If we consider the causal complex exponential h(t) = ⅇ^(−at) u(t), we get the equation H(s) = ∫₀^∞ ⅇ^(−at) ⅇ^(−st) dt = ∫₀^∞ ⅇ^(−(s + a)t) dt. Evaluating this, we get H(s) = (−1/(s + a)) (lim_(t→∞) ⅇ^(−(s + a)t) − 1). Notice that this equation will tend to infinity when lim_(t→∞) ⅇ^(−(s + a)t) tends to infinity. To understand when this happens, we take one more step by using s = σ + ⅈω to realize the limiting term as lim_(t→∞) ⅇ^(−ⅈωt) ⅇ^(−(σ + a)t). Recognizing that ⅇ^(−ⅈωt) is sinusoidal, it becomes apparent that ⅇ^(−(σ + a)t) is going to determine whether this blows up or not. What we find is that if σ + a is positive, the exponential will be to a negative power, which will cause it to go to zero as t tends to infinity. On the other hand, if σ + a is negative or zero, the exponential will not be to a negative power, which will prevent it from tending to zero and the system will not converge. What all of this tells us is that for a causal signal, we have convergence when Re(s) > −a.
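The convergence condition just derived can be checked numerically. This sketch (with the illustrative choice a = 1) approximates the truncated integral ∫₀^T ⅇ^(−(σ+a)t) dt for growing T: it settles when σ + a > 0 and grows without bound when σ + a < 0:

```python
# Numeric check of the causal convergence condition sigma + a > 0.
import math

def truncated_integral(sigma, a, T, n=100000):
    # left Riemann sum of exp(-(sigma + a) t) over [0, T]
    dt = T / n
    return sum(math.exp(-(sigma + a) * dt * k) * dt for k in range(n))

a = 1.0
converging = [truncated_integral(0.5, a, T) for T in (10, 20, 40)]   # sigma > -a
diverging  = [truncated_integral(-2.0, a, T) for T in (10, 20, 40)]  # sigma < -a
print(converging)   # settles near 1/(sigma + a) = 2/3
print(diverging)    # keeps growing with T
```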
Alternatively, we can note that the Laplace transform converges when h(t) ⅇ^(−st) is absolutely integrable. Therefore, ∫₋∞^∞ |h(t)| ⅇ^(−σt) dt < ∞ must be satisfied for convergence.
Although we will not go through the process again for anticausal signals, we could. In doing so, we would find that the necessary condition for convergence is Re(s) < −a.
The Region of Convergence has a number of properties that are dependent on the characteristics of the signal, h(t) .
The ROC cannot contain any poles. By definition, a pole is a value of s where H(s) is infinite. Since H(s) must be finite for all s in the ROC, there cannot be a pole in the ROC.
If h(t) is a finite-duration signal, then the ROC is the entire s-plane, except possibly s = 0 or |s| = ∞. A finite-duration signal is one that is nonzero only in a finite interval t₁ ≤ t ≤ t₂. As long as each value of h(t) is finite, the signal will be absolutely integrable. When t₂ > 0 there will be an s⁻¹ term and thus the ROC will not include s = 0. When t₁ < 0 then the integral will be infinite and thus the ROC will not include |s| = ∞. On the other hand, when t₂ ≤ 0 then the ROC will include s = 0, and when t₁ ≥ 0 the ROC will include |s| = ∞. With these constraints, the only signal whose ROC is the entire s-plane is h(t) = cδ(t).
Figure 11.14.
The next properties apply to infinite-duration signals. As noted above, the Laplace transform converges when |H(s)| < ∞. So we can write |H(s)| = |∫₋∞^∞ h(t) ⅇ^(−st) dt| ≤ ∫₋∞^∞ |h(t)| ⅇ^(−σt) dt. We can then split the infinite integral into positive-time and negative-time portions, so |H(s)| ≤ N(s) + P(s), where N(s) = ∫₋∞^0 |h(t)| ⅇ^(−σt) dt and P(s) = ∫₀^∞ |h(t)| ⅇ^(−σt) dt. In order for |H(s)| to be finite, |h(t)| must be bounded. Let us then set |h(t)| ≤ C₁ r₁^t for t < 0 and |h(t)| ≤ C₂ r₂^t for t ≥ 0. From this some further properties can be derived:
If h(t) is a right-sided signal, then the ROC extends outward from the outermost pole in H(s). A right-sided signal is one for which h(t) = 0 for t < t₁ < ∞. Looking at the positive-time portion from the above derivation, it follows that P(s) ≤ C₂ ∫₀^∞ r₂^t ⅇ^(−σt) dt = C₂ ∫₀^∞ (r₂ ⅇ^(−σ))^t dt. Thus in order for this integral to converge, we need r₂ ⅇ^(−σ) < 1, that is |ⅇ^s| > r₂, and therefore the ROC of a right-sided signal is of the form |ⅇ^s| > r₂.
Figure 11.15.
Figure 11.16.
If h(t) is a left-sided signal, then the ROC extends inward from the innermost pole in H(s). A left-sided signal is one for which h(t) = 0 for t > t₁ > −∞. Looking at the negative-time portion from the above derivation, it follows that N(s) ≤ C₁ ∫₋∞^0 r₁^t ⅇ^(−σt) dt = C₁ ∫₋∞^0 (r₁ ⅇ^(−σ))^t dt. Thus in order for this integral to converge, we need r₁ ⅇ^(−σ) > 1, that is |ⅇ^s| < r₁, and therefore the ROC of a left-sided signal is of the form |ⅇ^s| < r₁.
Figure 11.17.
Figure 11.18.
If h(t) is a two-sided signal, the ROC will be a ring in the s-plane that is bounded on the interior and exterior by a pole. A two-sided signal is a signal with infinite duration in both the positive and negative directions. From the derivation of the above two properties, it follows that if r₂ < |ⅇ^s| < r₁, then both the positive-time and negative-time portions converge and thus H(s) converges as well. Therefore the ROC of a two-sided signal is of the form r₂ < |ⅇ^s| < r₁.
Figure 11.19.
Figure 11.20.
To gain further insight it is good to look at a couple of examples.
Example 11.10.
Let's take a signal that is a sum of two terms. The Laplace-transform of the first term, with its ROC, is given below.
Figure 11.21.
The Laplace-transform of the second term, with its ROC, is given below.
Figure 11.22.
Due to linearity, the transform of the sum is the sum of the transforms. By observation it is clear that there are two zeros and two poles. Following the above properties, the ROC is the intersection of the two individual ROCs.
Figure 11.23.
Example 11.11.
Now take a second signal. The Laplace-transform and ROC of the shared term were shown in the example above. The Laplace-transform of the other term, with its ROC, is given below.
Figure 11.24.
Once again, by linearity, the transforms add. By observation it is again clear that there are two zeros and two poles; in this case, though, the ROC is different.
Figure 11.25.
Clearly, in order to craft a system that is actually useful by virtue of being causal and BIBO stable, we must ensure that the ⅈω-axis lies within the region of convergence, which can be ascertained by looking at the pole-zero plot. The region of convergence is the area of the pole/zero plot in which the transfer function exists. For purposes of useful filter design, we prefer to work with rational functions, which can be described by two polynomials, one each for determining the poles and the zeros, respectively.
When dealing with operations on polynomials, the term rational function is a simple way to describe a particular relationship between two polynomials.
For any two polynomials, A and B, their quotient is called a rational function.
Example 11.12.
Below is a simple example of a basic rational function, f(x). Note that the numerator and denominator can be polynomials of any order, but the rational function is undefined when the denominator equals zero. f(x) = (x² − 4)/(x² + 2x − 3).
In order to see what makes rational functions special, let us look at some of their basic properties and characteristics. If you are familiar with rational functions and basic algebraic properties, skip to the next section to see how rational functions are useful when dealing with the Laplace transform.
To understand many of the following characteristics of a rational function, one must begin by finding its roots. In order to do this, let us factor both of the polynomials so that the roots can be easily determined. Like all polynomials, the roots will provide us with information on many key properties. Factoring the rational function above gives f(x) = ((x + 2)(x − 2))/((x + 3)(x − 1)).
Thus, the roots of the rational function are as follows:
Roots of the numerator are: {-2, 2}
Roots of the denominator are: {-3, 1}
In order to understand rational functions, it is essential to know and understand the roots that make up the rational function.
Because we are dealing with division of two polynomials, we must be aware of the values of the variable that will cause the denominator of our fraction to be zero. When this happens, the rational function becomes undefined, i.e. we have a discontinuity in the function. Because we have already solved for our roots, it is very easy to see when this occurs. When the variable in the denominator equals any of the roots of the denominator, the function becomes undefined.
Example 11.13.
Continuing to look at our rational function above, we can see that the function will have discontinuities at the following points: x = −3 and x = 1.
In respect to the Cartesian plane, we say that the discontinuities are the values along the x-axis where the function is undefined. These discontinuities often appear as vertical asymptotes on the graph to represent the values where the function is undefined.
Using the roots that we found above, the domain of the rational function can be easily defined.
The set of input values for which a given function is defined.
Example 11.14.
Using the rational function above, the domain can be defined as any real number x where x does not equal 1 or negative 3. Written out mathematically, we get the following: {x ∈ ℝ ∣ x ≠ −3 and x ≠ 1}.
The x-intercept is defined as the point(s) where f(x) , i.e. the output of the rational functions, equals zero. Because we have already found the roots of the equation this process is very simple. From algebra, we know that the output will be zero whenever the numerator of the rational function is equal to zero. Therefore, the function will have an x-intercept wherever x equals one of the roots of the numerator.
The y-intercept occurs whenever x equals zero. This can be found by setting x equal to zero in the rational function and evaluating.
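The intercepts and discontinuities of the running example, f(x) = (x² − 4)/(x² + 2x − 3), reconstructed here from its stated roots, can be computed directly:

```python
import numpy as np

num = [1, 0, -4]     # x**2 - 4
den = [1, 2, -3]     # x**2 + 2x - 3

x_intercepts = np.roots(num)           # where the numerator vanishes
discontinuities = np.roots(den)        # where the denominator vanishes
y_intercept = np.polyval(num, 0) / np.polyval(den, 0)

print(sorted(x_intercepts))            # [-2.0, 2.0]
print(sorted(discontinuities))         # [-3.0, 1.0]
print(y_intercept)                     # 4/3
```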
Rational functions often result when the Laplace transform is used to compute transfer functions for LTI systems. When using the Laplace transform to solve linear constant coefficient ordinary differential equations, partial fraction expansions of rational functions prove particularly useful. The roots of the polynomials in the numerator and denominator of the transfer function play an important role in describing system behavior. The roots of the polynomial in the numerator produce zeros of the transfer function where the system produces no output for an input of that complex frequency. The roots of the polynomial in the denominator produce poles of the transfer function where the system has natural frequencies of oscillation.
Once we have used our knowledge of rational functions to find its roots, we can manipulate a Laplace transform in a number of useful ways. We can apply this knowledge by representing an LTI system graphically through a pole-zero plot for analysis or design.
It is often useful to describe systems using equations involving the rate of change in some quantity through differential equations. Recall that one important subclass of differential equations, linear constant-coefficient ordinary differential equations, takes the form A y(t) = x(t), where A is a differential operator of the form A = a_N (d^N/dt^N) + a_(N−1) (d^(N−1)/dt^(N−1)) + … + a₁ (d/dt) + a₀.
The differential equation in Equation 11.9 would describe some system modeled by A with an input forcing function x(t) that produces an output solution signal y(t). However, the unilateral Laplace transform permits a solution for initial value problems to be found in what is usually a much simpler method. Specifically, it greatly simplifies the procedure for nonhomogeneous differential equations.
As stated briefly in the definition above, a differential equation is a very useful tool in describing and calculating the change in an output of a system described by the formula for a given input. The key property of the differential equation is its ability to help easily find the transform, H(s) , of a system. In the following two subsections, we will look at the general form of the differential equation and the general conversion to a Laplace-transform directly from the differential equation.
Using the definition above, we can easily generalize the transfer function, H(s), for any differential equation. Below are the steps taken to convert any differential equation into its transfer function, i.e. Laplace-transform. The first step involves taking the Laplace transform of all the terms in the equation. Then we use the linearity property to pull the transform inside the summation and the differentiation property of the Laplace-transform to change each derivative term into a multiplication by a power of s. Once this is done, we arrive at the following equation, with a₀ = 1: H(s) = Y(s)/X(s) = (∑_(k=0)^M b_k s^k) / (∑_(k=0)^N a_k s^k).
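The conversion can be sketched for a hypothetical LCCDE, y'' + 3y' + 2y = x(t): with zero initial conditions each d/dt becomes multiplication by s, so (s² + 3s + 2) Y(s) = X(s) and H(s) = Y(s)/X(s):

```python
# Build H(s) from the (hypothetical) LCCDE coefficients a_k.
import sympy as sp

s = sp.symbols('s')

a = [2, 3, 1]                     # a_0 + a_1*s + a_2*s**2  <->  2y + 3y' + y''
H = 1 / sum(ak * s**k for k, ak in enumerate(a))
print(sp.factor(H))               # 1/((s + 1)*(s + 2))
```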
Once the Laplace-transform has been calculated from the differential equation, we can go one step further to define the frequency response of the system, or filter, that is being represented by the differential equation.
Remember that the reason we are dealing with these formulas is to aid us in filter design. An LCCDE is one of the easiest ways to represent a broad class of filters. By being able to find the frequency response, we will be able to look at the basic properties of any filter represented by a simple LCCDE.
Below is the general formula for the frequency response of a Laplace-transform. The conversion is simply a matter of taking the Laplace-transform formula, H(s), and replacing every instance of s with ⅈω. Once you understand the derivation of this formula, look at the module concerning filter design from the Laplace-transform for a look into how all of these ideas of the Laplace-transform, differential equation, and pole/zero plots play a role in filter design.
In order for a linear constant-coefficient differential equation to be useful in analyzing an LTI system, we must be able to find the system's output based upon a known input, x(t), and a set of initial conditions. Two common methods exist for solving an LCCDE: the direct method and the indirect method, the latter being based on the Laplace-transform. Below we will briefly discuss the formulas for solving an LCCDE using each of these methods.
The final solution to the output based on the direct method is the sum of two parts, expressed in the following equation: y(t) = y_h(t) + y_p(t). The first part, y_h(t), is referred to as the homogeneous solution and the second part, y_p(t), is referred to as the particular solution. The following method is very similar to that used to solve many differential equations, so if you have taken a differential calculus course or used differential equations before then this should seem very familiar.
We begin by assuming that the input is zero, x(t) = 0. Now we simply need to solve the homogeneous differential equation. In order to solve this, we will make the assumption that the solution is an exponential, using λ to represent the exponential rates, so the trial solutions take the form ⅇ^(λt). Substituting the trial solution and factoring out ⅇ^(λt) leaves a polynomial in λ, which is referred to as the characteristic polynomial. The roots of this polynomial will be the key to solving the homogeneous equation. If the roots are all distinct, then the general solution to the equation will be as follows: y_h(t) = C₁ⅇ^(λ₁t) + C₂ⅇ^(λ₂t) + … + C_Nⅇ^(λ_Nt). However, if the characteristic equation contains a repeated root then the above general solution will be slightly different. Below we have the modified version for an equation where λ₁ is repeated K times: y_h(t) = C₁ⅇ^(λ₁t) + C₂ t ⅇ^(λ₁t) + C₃ t² ⅇ^(λ₁t) + … + C_K t^(K−1) ⅇ^(λ₁t) + C_(K+1)ⅇ^(λ₂t) + … .
The particular solution, y p (t) , will be any solution that will solve the general differential equation: In order to solve, our guess for the solution to y p (t) will take on the form of the input, x(t) . After guessing at a solution to the above equation involving the particular solution, one only needs to plug the solution into the differential equation and solve it out.
The indirect method utilizes the relationship between the differential equation and the Laplace-transform, discussed earlier, to find a solution. The basic idea is to convert the differential equation into a Laplace-transform, as described above, to get the resulting output, Y(s) . Then by inverse transforming this and using partial-fraction expansion, we can arrive at the solution.
This can be iteratively extended to an arbitrary order derivative as in Equation 11.12.
Now, the Laplace transform of each side of the differential equation can be taken
which by linearity results in
and by differentiation properties in
Rearranging terms to isolate the Laplace transform of the output,
Thus, it is found that
In order to find the output, it only remains to find the Laplace transform X(s) of the input, substitute the initial conditions, and compute the inverse Laplace transform of the result. Partial fraction expansions are often required for this last step. This may sound daunting while looking at Equation 11.17, but it is often easy in practice, especially for low-order differential equations. Equation 11.17 can also be used to determine the transfer function and frequency response.
As an example, consider the differential equation
with the initial conditions y'(0) = 1 and y(0) = 0. Using the method described above, the Laplace transform of the solution y(t) is given by
Performing a partial fraction decomposition, this also equals
Computing the inverse Laplace transform,
One can check that this satisfies both the differential equation and the initial conditions.
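The whole procedure can be sketched on a hypothetical initial value problem with the same initial conditions, y'' + 3y' + 2y = 0 with y(0) = 0 and y'(0) = 1. The unilateral transform with those initial conditions gives (s² + 3s + 2) Y(s) = 1:

```python
# Indirect (Laplace) method sketch: transform, expand, invert.
import sympy as sp

t, s = sp.symbols('t s', positive=True)

Y = 1 / (s**2 + 3*s + 2)
print(sp.apart(Y, s))                  # 1/(s + 1) - 1/(s + 2)

y = sp.inverse_laplace_transform(Y, s, t)
print(y)                               # exp(-t) - exp(-2*t) for t >= 0
```

Differentiating the result and evaluating at t = 0 confirms that the assumed initial conditions are met.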
One of the most important concepts of DSP is to be able to properly represent the input/output relationship to a given LTI system. A linear constant-coefficient difference equation (LCCDE) serves as a way to express just this relationship in a discrete-time system. Writing the sequence of inputs and outputs, which represent the characteristics of the LTI system, as a difference equation helps in understanding and manipulating a system.
Analog (Continuous-Time) filters are useful for a wide variety of applications, and are especially useful in that they are very simple to build using standard, passive R,L,C components. Having a grounding in basic filter design theory can assist one in solving a wide variety of signal processing problems.
One of the motivating factors for analyzing pole/zero plots is their relationship to the frequency response of the system. Based on the position of the poles and zeros, one can quickly determine the frequency response. This is a result of the correspondence between the frequency response and the transfer function evaluated on the unit circle in the pole/zero plot. The frequency response, or DTFT, of the system is found by evaluating the transfer function at z = ⅇ^(ⅈw).

Next, by factoring the transfer function into poles and zeros and multiplying the numerator and denominator by ⅇ^(ⅈw), we obtain the frequency response in a form that can be used to interpret physical characteristics about the filter's frequency response. The numerator and denominator contain a product of terms of the form |ⅇ^(ⅈw) − h|, where h is either a zero, denoted by c_k, or a pole, denoted by d_k.

Vectors are commonly used to represent each term and its parts on the complex plane. The pole or zero, h, is a vector from the origin to its location anywhere on the complex plane, and ⅇ^(ⅈw) is a vector from the origin to its location on the unit circle. The vector connecting these two points, |ⅇ^(ⅈw) − h|, connects the pole or zero location to a place on the unit circle that depends on the value of w. From this, we can begin to understand how the magnitude of the frequency response is a ratio of the distances to the poles and zeros present in the z-plane as w goes from zero to pi.

In conclusion, using the distances from the unit circle to the poles and zeros, we can plot the frequency response of the system. As w goes from 0 to 2π, the following two properties specify how one should draw |H(w)|.
While moving around the unit circle...
If close to a zero, then the magnitude is small. If a zero is on the unit circle, then the frequency response is zero at that point.
If close to a pole, then the magnitude is large. If a pole is on the unit circle, then the frequency response goes to infinity at that point.
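The distance-ratio interpretation above can be checked numerically. The following sketch (a hypothetical single-zero, single-pole system chosen for illustration; the gain constant is ignored since it only scales the response) evaluates |H(e^(jw))| as the product of distances to the zeros divided by the product of distances to the poles:

```python
import cmath
import math

def magnitude_response(zeros, poles, w):
    """|H(e^(jw))| as a ratio of distances from e^(jw) to each zero and pole.

    Ignores any overall gain constant, which only scales the response.
    """
    point = cmath.exp(1j * w)  # location on the unit circle for frequency w
    num = math.prod(abs(point - c) for c in zeros)
    den = math.prod(abs(point - d) for d in poles)
    return num / den

# Hypothetical system: one zero at z = -1, one pole at z = 0.5
zeros, poles = [-1], [0.5]
print(magnitude_response(zeros, poles, 0.0))      # zero is far, pole is close -> large
print(magnitude_response(zeros, poles, math.pi))  # e^(j*pi) lands on the zero -> ~0
```

At w = 0 the distance to the zero is 2 and the distance to the pole is 0.5, giving magnitude 4; at w = π the point on the unit circle coincides with the zero, so the magnitude is zero, exactly as the two properties above predict.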
Let us now look at several examples of determining the magnitude of the frequency response from the pole/zero plot of a z-transform. If you have forgotten or are unfamiliar with pole/zero plots, please refer back to the Pole/Zero Plots module.
Example 11.15.
In this first example we will take a look at the very simple z-transform shown below:
H(z) = 1 + z^(-1)
H(w) = 1 + e^(-jw)
For this example, some of the vectors represented by |e^(jw) − h|, for selected values of w, are explicitly drawn onto the complex plane shown in the figure below. These vectors show how the amplitude of the frequency response changes as w goes from 0 to 2π, and also illustrate the physical meaning of the distance terms discussed above. One can see that when w = 0, the vector is the longest and thus the frequency response has its largest amplitude there. As w approaches π, the length of the vector decreases, as does the amplitude of |H(w)|. Since the transform has a single zero (at z = −1) and no poles, there is only this one vector term rather than a ratio.
Figure 11.26. Pole/Zero Plot
Example 11.16.
For this example, a more complex transfer function is analyzed in order to represent the system's frequency response.
Below we can see the two figures described by the above equations. The first shows the pole/zero plot of the z-transform H(z), and the second shows the magnitude of the frequency response. From the statements in the previous section, we can see that the frequency response peaks at w = 0, since it is at this value of w that the pole is closest to the unit circle. The ratio of distances from the unit circle to the zero and to the pole makes the mathematics behind this conclusion clear. As w moves from 0 to π, we see how the zero begins to mask the effects of the pole and thus forces the frequency response closer to 0.
Figure 11.27. Pole/Zero Plot
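The source does not give the transfer function for this example, but the described behavior (a peak at w = 0 from a pole near the unit circle, and a zero masking the pole toward w = π) can be reproduced with a hypothetical H(z) chosen for illustration:

```python
import cmath
import math

def H(w):
    # Hypothetical transfer function with a zero at z = -1 and a pole at z = 0.9,
    # chosen to reproduce the behavior described in the text (not the source's
    # actual example): a sharp peak at w = 0, and a response forced to 0 at w = pi.
    z = cmath.exp(1j * w)
    return (1 + 1 / z) / (1 - 0.9 / z)

print(abs(H(0.0)))      # 20.0: the pole at 0.9 is very close to e^(j0) = 1
print(abs(H(math.pi)))  # ~0: the zero at -1 dominates
```

At w = 0 the distance to the pole is only 0.1 while the distance to the zero is 2, giving magnitude 2/0.1 = 20; as w increases toward π, the shrinking zero distance pulls the response to zero.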
The Butterworth filter is the simplest filter. It can be constructed out of passive R, L, C circuits. The magnitude of its transfer function is
|H(jω)| = 1 / √(1 + (ω/ω_c)^(2n))
where n is the order of the filter and ω_c is the cutoff frequency. The cutoff frequency is the frequency where the magnitude experiences a 3 dB dropoff (where |H(jω_c)| = 1/√2).
Figure 11.28.
The important aspects of Figure 11.28 are that it does not ripple in the passband or stopband as other filters tend to, and that the larger n , the sharper the cutoff (the smaller the transition band).
Butterworth filters give transfer functions ( H(jω) and H(s) ) that are rational functions. They are all-pole filters (they have no finite zeros), resulting in a transfer function of the form
H(s) = ω_c^n / ((s − s_1)(s − s_2)⋯(s − s_n))
where the s_k are the left-half-plane roots of 1 + (s/(jω_c))^(2n) = 0, and a pole-zero plot of
Figure 11.29.
Note that the poles lie along a circle in the s-plane.
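This circular pole pattern can be computed directly. A standard closed form places the left-half-plane poles at s_k = ω_c · e^(jπ(2k + n − 1)/(2n)) for k = 1, …, n; the sketch below (assuming ω_c = 1) verifies that all poles have magnitude ω_c and negative real part:

```python
import cmath
import math

def butterworth_poles(n, wc=1.0):
    # Left-half-plane poles of an nth-order Butterworth filter, using the
    # standard closed form: equally spaced on a circle of radius wc in the
    # s-plane, at angles pi*(2k + n - 1)/(2n) for k = 1..n.
    return [wc * cmath.exp(1j * math.pi * (2 * k + n - 1) / (2 * n))
            for k in range(1, n + 1)]

poles = butterworth_poles(4)
print([abs(p) for p in poles])         # all ~1.0: the poles lie on a circle
print(all(p.real < 0 for p in poles))  # True: all in the left half-plane (stable)
```

Equal spacing on the circle, restricted to the left half-plane, is what makes the Butterworth response maximally flat while keeping the filter stable.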
The Butterworth filter does not give a sufficiently good approximation across the complete passband in many cases, and the Taylor's series approximation is often not suited to the way specifications are given for filters. An alternate error measure is the maximum of the absolute value of the difference between the actual filter response and the ideal, taken over the total passband. This is the Chebyshev error measure, and it was defined and applied to the FIR filter design problem. For the IIR filter, the Chebyshev error is minimized over the passband and a Taylor's series approximation at ω = ∞ is used to determine the stopband performance. This mixture of methods in the IIR case is called the Chebyshev filter, and simple design formulas result, just as for the Butterworth filter.
The design of Chebyshev filters is particularly interesting, because the results of a very elegant theory ensure that constructing a frequency-response function with the proper form of equal ripple in the error will result in a minimum Chebyshev error without explicitly minimizing anything. This allows a straightforward set of design formulas to be derived, which can be viewed as a generalization of the Butterworth formulas ???, ???.
The form for the magnitude squared of the frequency-response function for the Chebyshev filter is
|H(jω)|² = 1 / (1 + ϵ² C_N(ω)²)     (11.23)
where C_N (ω) is an Nth-order Chebyshev polynomial and ϵ is a parameter that controls the ripple size. This polynomial in ω has very special characteristics that result in the optimality of the response function (Equation 11.23).
Figure 11.30.
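The equal-ripple behavior comes from the fact that C_N(ω) oscillates between −1 and 1 on the passband |ω| ≤ 1 and satisfies C_N(1) = 1. A short sketch, assuming the standard magnitude-squared form 1/(1 + ϵ²C_N(ω)²), computes C_N via its three-term recurrence:

```python
import math

def cheb(N, w):
    """Nth-order Chebyshev polynomial C_N(w) via the recurrence
    C_0 = 1, C_1 = w, C_{n+1} = 2*w*C_n - C_{n-1}."""
    c0, c1 = 1.0, w
    if N == 0:
        return c0
    for _ in range(N - 1):
        c0, c1 = c1, 2 * w * c1 - c0
    return c1

def cheby_mag_sq(w, N, eps):
    # Standard Chebyshev magnitude-squared form: 1 / (1 + eps^2 * C_N(w)^2)
    return 1 / (1 + eps ** 2 * cheb(N, w) ** 2)

eps = 0.5
# In the passband C_N oscillates between -1 and 1, so |H|^2 ripples between
# 1 and 1/(1 + eps^2); at the band edge w = 1, C_N(1) = 1 for every N.
print(cheby_mag_sq(1.0, 5, eps))  # 1/(1 + 0.25) = 0.8
```

Because C_N(1) = 1 for every order, the band-edge attenuation depends only on ϵ, which is exactly why ϵ is described as the parameter controlling the ripple size.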
A fourth classical design, the Bessel filter, is optimized for maximally flat group delay (nearly linear phase) in the passband rather than for a flat magnitude response.
There is yet another method that has been developed that uses a Chebyshev error criterion in both the passband and the stopband. This is the fourth possible combination of Chebyshev and Taylor's series approximations in the passband and stopband. The resulting filter is called an elliptic-function filter, because elliptic functions are normally used to calculate the pole and zero locations. It is also sometimes called a Cauer filter or a rational Chebyshev filter, and it has equal ripple approximation error in both pass and stopbands ???, ???, ???, ???.
The error criteria of the elliptic-function filter are particularly well suited to the way specifications for filters are often given. For that reason, use of the elliptic-function filter design usually gives the lowest order filter of the four classical filter design methods for a given set of specifications. Unfortunately, the design of this filter is the most complicated of the four. However, because of the efficiency of this class of filters, it is worthwhile gaining some understanding of the mathematics behind the design procedure.
This section sketches an outline of the theory of elliptic-function filter design. The details and properties of the elliptic functions themselves should simply be accepted, with attention put on understanding the overall picture. A more complete development is available in ???, ???.
Because both the passband and stopband approximations are over the entire bands, a transition band between the two must be defined. Using a normalized passband edge of ω_p = 1, the bands are defined by
0 ≤ ω ≤ 1 (passband), 1 < ω < ω_s (transition band), ω_s ≤ ω < ∞ (stopband).
This is illustrated in Figure 11.31.
Figure 11.31.
The characteristics of the elliptic function filter are best described in terms of the four parameters that specify the frequency response:
The maximum variation or ripple in the passband δ 1 ,
The width of the transition band, ω_s − 1,
The maximum response or ripple in the stopband δ 2 , and
The order of the filter N .
The result of the design is that, given any three of these parameters, the fourth is minimized. This is a very flexible and powerful description of a filter frequency response.
The form of the frequency-response function is a generalization of that for the Chebyshev filter,
|F(jω)|² = 1 / (1 + ϵ² G(ω)²)
with F(s) being the prototype analog filter transfer function similar to that for the Chebyshev filter. G(ω) is a rational function that approximates zero in the passband and infinity in the stopband. The definition of this function is a generalization of the definition of the Chebyshev polynomial.
As can be seen, there is a large amount of information available in filter design, more than an introductory module can cover. Even when designing discrete-time IIR filters, it is important to remember that there is a far larger body of literature on design methods for the analog signal processing world than for the digital. Therefore, it is often easier and more practical to design an IIR filter using standard analog methods and then discretize it using a method such as the bilinear transform.