As shown in Chapter 9, reflection methods can yield what is considered to be a focused image provided there is normal incidence (Sect. 9.1.3). For the common GPR measurement geometry, there will be normal incidence when the reflecting surfaces are planar such as a horizontal (Fig. 9.11) or a gently sloping interface (Fig. 9.12). It must be reiterated that this is only true for planar interfaces. There can also be normal incidence for curved surfaces, such as a circular reflecting object. However, this target is manifested in the GPR data as a hyperbola (Fig. 9.16) and, unlike a planar interface, this object's shape is not replicated in the radargram. Objects of this type are deemed to be out-of-focus. In geophysical measurements, focus is taken to mean that the shape of a buried object is somehow 'recovered' either directly in the acquired data or in its subsequent manipulation.
Thus far, the only wave-based geophysical technique to be considered is GPR and this employs a reflection measurement geometry where information about the existence, location, and size or shape of a buried object comes from the reflection of waves back to receivers on the ground surface. Transmission measurements are different from reflection measurements in that waves pass through objects, rather than reflect off of them, and it will first be shown in this chapter that transmission measurements can be used to obtain focused images of isolated inclusions. Subsequently, this procedure will be extended for the consideration of reflection-based measurements such as GPR.
Specifically, the method of analysis presented here for transmission measurements is known as tomographic imaging. The concept of tomographic imaging is illustrated in the simple children's experiment shown in Fig. 11.1. Here, an area is gridded (Sect. 2.9.2) and the objective of the experiment is to 'image' a puddle of water located somewhere within the gridded region. This is accomplished through a multi-step process and, in the first step, a row of children walk in parallel lines across the grid such that each child traverses a column of grid cells (Fig. 11.1, upper left). If a child arrives at the far side of the grid with wet feet, the entire column of grid cells is identified as possibly being wet. This is denoted by shading the entire column of grid cells gray. Repeating this procedure for the entire row of children produces a gray strip across the grid (Fig. 11.1, upper right). All that is known from the information given (whose feet are wet) is that the puddle must be somewhere in this gray strip and this shaded region is called a partial image. In this simple experiment the puddle can be seen. However, this is an analogy for geophysical imaging where the composition of the underground is not visible and any information about its character must be based on knowledge of where a source is located and the nature of the received signal. To make this a true analogy, it must be assumed that the only information that can be used is that the children walk in straight parallel lines (an analogy for plane wave synthesis, Sect. 6.4) and whether or not their feet are wet (an analogy for the received signal). The entire procedure is repeated for a column of children that traverse the grid in a direction perpendicular to that of the first row of children. This yields a second partial image, a gray strip perpendicular to the first partial image (Fig. 11.1, center left).
A complete image is obtained by identifying the cells that are gray in both partial images and shading them black. It is observed that this image is a black rectangular region that bounds the puddle (Fig. 11.1, lower right). Each partial image is associated with a walking direction and this direction is referred to as a view. The process can be extended to many more views than the two used in Fig. 11.1 and it can be expected that, with more views, the imaged shape of the puddle becomes closer to its actual shape.
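The two-view experiment of Figs. 11.1 can be sketched in a few lines of code. The grid size and puddle location below are assumed purely for illustration; the logic (flag a whole row or column when any cell in it is wet, then intersect the two partial images) follows the text.

```python
import numpy as np

# Hypothetical 8x8 grid; True marks cells that are actually wet.
grid = np.zeros((8, 8), dtype=bool)
grid[3:5, 2:4] = True  # the "puddle" (assumed location)

# View 1: children walk down columns; a column is flagged ("shaded gray")
# if the child walking it gets wet feet, i.e. if any cell in it is wet.
wet_columns = grid.any(axis=0)
partial1 = np.tile(wet_columns, (8, 1))   # backproject along each column

# View 2: children walk across rows (the perpendicular view).
wet_rows = grid.any(axis=1)
partial2 = np.tile(wet_rows[:, None], (1, 8))

# Complete image: cells gray in BOTH partial images are shaded black.
image = partial1 & partial2

# The reconstruction is the bounding rectangle of the puddle:
assert image[3:5, 2:4].all()   # contains the puddle
assert image.sum() == 4        # 2 rows x 2 columns
```

With only two perpendicular views the reconstruction is the bounding rectangle of the puddle, exactly as in the lower right of Fig. 11.1; more views would shrink it toward the true shape.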
The imaging procedure illustrated in Fig. 11.1 is unnecessary since, as noted previously, the children can see the puddle. This is not the situation in geophysical measurements where the waves that pass through an object are not 'seen', but instead are interpreted from how they 'appear' at some measurement location. In the subsequent sections, the imaging concept introduced above will be expanded to establish a class of analysis procedures known as geotomography.
In the experiment depicted in Fig. 11.1, grid cells were shaded if a child's feet were wet. For each child, an entire row or column of grid cells was shaded if the child walking through those cells had wet feet. The information used here was the status of the children's feet—either wet or dry—and the process of assigning this information to the entire row or column of grid cells is called backprojection. The path of each child was a straight line that can be analogous to the ray path of a wave and, therefore, this method of imaging is referred to as straight ray backprojection.
Recalling that a plane wave is characterized by having all rays parallel (Fig. 6.19a), it is clear that all children walking parallel in Fig. 11.1 is equivalent to plane wave illumination. Plane waves are not needed for tomographic imaging and backprojection can be implemented with point sources (recall that a point source emits rays in all directions, Fig. 6.19b). This is illustrated by the experiment shown in Fig. 11.2. Here, there is some irregularly shaped object located within a study region. This object is taken to be opaque and illuminated by a point source of light. The presence of the opaque target causes a shadow to be cast on some form of light detectors located on the opposite side of the target from the light source. In this experiment, the array of detectors can be as simple as a strip of white paper. Lines can be drawn from the edges of the shadow back to the source to define a triangle. Shading this triangle establishes a partial image as shown in the upper left of Fig. 11.2 and it is clear that the target must be somewhere within this triangle. Alone, this partial image does little to characterize the target. The entire instrument—the light source and detector array—can be rotated 90° about the target and the partial imaging procedure repeated. The upper right illustration in Fig. 11.2 shows the first and second partial images (gray) as well as the image reconstructed from these two partial images as the black area where the two partial images overlap. The lower right and left of Fig. 11.2 show two more partial images with the darker shaded areas depicting the reconstructed image associated with the addition of each partial image. The fidelity of the image improves with the addition of more partial images from different views. The illustration at the bottom of Fig. 11.2 shows the image reconstructed from all four views with the actual object shape superimposed.
There are several important differences between the tomographic imaging introduced thus far and that which must be used in geotomography. It is obvious that light cannot be used to probe the Earth's subsurface. It is known from the previous consideration of GPR (Chapter 9) that electromagnetic waves of appropriately low frequency can be used since these waves will propagate some distance through the underground. Low frequency sound waves, frequently referred to in geophysics as seismic waves, can also be employed in geotomography. Although sound waves have not been discussed in great detail, they behave much like radar waves in that both propagate at a characteristic wave speed (the speed of sound is much slower than the speed of light), energy from both types of waves will be attenuated in geologic material, and both will undergo reflections and refractions. The major differences between sound and radar waves are that sound waves are longitudinal while radar waves are transverse (Sect. 6.2.1), and the instrumentation used for each will be somewhat different. The concepts for tomographic imaging considered here apply to both seismic- and radar-based measurements. However, for either of these wave types, the concept of shadows must change. In addition, the rotating measurement geometry shown in Fig. 11.2 cannot be used since it would require excavation of the study region to make such measurements and, after excavation, there would be no need to image. The revised concept of shadows will be considered in Sect. 11.3 and here practical measurement geometries in geotomography will be examined.
Since tomographic imaging based on transmission measurements requires waves to propagate through an object to be imaged, a transmission measurement geometry must differ from that used in GPR, where the wave source and receiver are co-located or side-by-side. There are two measurement geometries employed in geotomography. The first is known as cross-borehole (the left illustration in Fig. 11.3) where receivers are deployed in one borehole and sources are deployed in a parallel borehole. The second geometry, surface-to-borehole, employs receivers in a borehole and sources deployed along a line on the ground surface.
In addition to the above-cited differences between acoustic (sound) and radar waves, there is also a significant difference between the implementation of these two types of measurements. GPR typically employs a single source and a single receiver while acoustic measurements are typically array-based, in other words, many receivers are used. This difference is driven by equipment costs. Acoustic receivers are far less expensive than GPR antennas so that many acoustic receivers can be purchased for the cost of a single GPR antenna. Yet, this is not the dominant cost differential. Since the operating frequency of GPR is about 100 MHz and the operating frequency of seismic waves is on the order of several hundred Hertz, the period of radar waves is approximately one million times shorter than the period of acoustic waves used in geophysics. Since proper temporal sampling requires that measurements be made at time intervals less than one-half of a period (Sect. 9.6.2), the electronics for a GPR system must operate about one million times faster than comparable electronics for an acoustic system. The cost of electronic components is proportional to their speed so the electronics sufficient to acquire data from many acoustic receivers cost about the same as the electronics necessary to acquire data from a single GPR receiving antenna. For acoustic-based measurements, the typical measurement procedure is to place an array of receivers down a borehole and move a single source along a line on the ground surface (surface-to-borehole) or vertically in a borehole parallel to that containing the receiver array (cross-borehole). Because of the data acquisition constraints noted above, radar tomography employs only a single transmitting and receiving antenna. Data is acquired by fixing the position of the transmitting antenna (either on the ground surface or down a borehole) and the receiving antenna is moved vertically in a borehole.
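The sampling argument above can be verified with simple arithmetic. The frequencies used below are illustrative assumptions consistent with the values cited in the text (100 MHz for GPR, and the low end of "several hundred Hertz" taken as 100 Hz for a round number).

```python
# Illustrative check of the sampling-rate argument (assumed frequencies).
radar_f = 100e6   # GPR operating frequency, Hz (100 MHz, from the text)
seis_f = 100.0    # seismic frequency, Hz (assumed round value)

radar_period = 1.0 / radar_f   # 10 ns
seis_period = 1.0 / seis_f     # 10 ms

# Proper temporal sampling: intervals less than half a period (Sect. 9.6.2).
radar_dt_max = radar_period / 2.0   # ~5 ns for GPR
seis_dt_max = seis_period / 2.0     # ~5 ms for seismic

# The period ratio shows how much faster GPR electronics must run:
ratio = seis_period / radar_period  # approximately one million
```

With these assumed values, GPR electronics must sample roughly a million times faster than seismic electronics, which is the cost differential the text describes.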
The transmitter is repositioned and the process is repeated for a different view. While radar is more time-efficient for reflection studies (Chapter 9), it is less time-efficient than acoustic methods in geotomography because transmitter and receiver must be moved individually.
Practical differences exist between surface-to-borehole and cross-borehole tomography, one of which is the ease of data acquisition. For surface-to-borehole tomography, many vertical cross-sections can be imaged from a single borehole. This is accomplished by defining many source lines on the ground surface as spokes radiating outward from the borehole. A tomographic image can be reconstructed for the vertical cross-section below each source line. For cross-borehole tomography, at least one new borehole must be developed for each additional imaged cross-section. There is also a difference between cross-borehole and surface-to-borehole tomography in the image resolution that they offer. In order to understand how this difference arises, consider the single partial image shown in the upper left of Fig. 11.2. Note that this triangular partial image provides a reasonable bound on the vertical size and vertical position of the target, whereas its horizontal size and position are completely unresolved. The target can, in fact, be anywhere between the source and the receiver array. This illustrates the fundamental limitation of backprojection imaging which is that, for any single view, the resolution is far better in the direction perpendicular to the ray directions than along the direction that rays travel. Thus, to achieve a horizontal resolution comparable to the vertical resolution (the upper left in Fig. 11.2), it is necessary to rotate the measurements by 90° (the upper right in Fig. 11.2).
Recognizing the above-cited limitation, consider an image from two views for the cross-borehole and surface-to-borehole geometries shown in the left and right of Fig. 11.4, respectively.
Note that, for the cross-borehole geometry, the image is elongated horizontally while, for the surface-to-borehole geometry, it is more elongated and this elongation is diagonal rather than horizontal. The difference in direction of elongation results from the fact that elongation occurs along the ray direction. For the cross-borehole geometry, the primary ray direction is horizontal leading to horizontal elongation. For the surface-to-borehole geometry, the primary ray direction is diagonal (from the surface to the borehole) and hence yields a diagonal elongation. Elongation of the image is generally more severe for the surface-to-borehole geometry because the direction of the views cannot be varied as much within this geometry. The best possible images can be obtained when targets can be viewed from all directions as shown in Fig. 11.2. As shown in Fig. 11.4, a greater range of view directions can be realized in the cross-borehole geometry and, consequently, this geometry will, in general, provide images that are not as elongated as those from the surface-to-borehole geometry.
In Sect. 11.1, it was graphically demonstrated how backprojection can be used to reconstruct images from shadows. In the discussion of the imaging sequence depicted in Fig. 11.2, there were what might be considered conventional shadows, in that these were characterized by a total absence of light as a result of the assumption of an opaque target. Although radar and acoustic waves employed in geotomography behave like light, it is rare that buried objects are totally opaque to either type of wave. It is known from Sect. 9.1.2 that for an object to be opaque it must have a coefficient of reflection that is nearly one, so that almost all of the wave energy is reflected and very little is transmitted. For this to occur, the difference in wave speed between the target and its surroundings must be huge. Because visible light is not used in geotomography, shadows are not black, white, or any other color. All waves, including radar and acoustic waves, have amplitudes (Sect. 6.1), and amplitude plays a role analogous to the intensity of light: a wave with a large measured amplitude can be thought of as white, one with no amplitude as black, and intermediate amplitudes as shades of gray. Because buried objects are rarely opaque, the shadows are rarely black and, most commonly, these shadows assume shades of gray.
While it is possible to reconstruct geotomographic images from amplitude shadows, there is a more useful procedure that can yield images of the wave speed of objects. This procedure again uses backprojection but is based on time shadows. To understand time shadows, first consider the case of cross-borehole measurements in an area free of any inhomogeneities. As shown in Fig. 11.5, rays can be drawn from the source, through a material having a wave speed c0, to each receiver. The waves travel along each ray at the same speed but, since the distance traveled is not the same for all rays, pulses do not arrive at the same time at all receivers. The radargram for this data acquisition experiment is shown on the right of Fig. 11.5 and it is apparent that the pattern of received signals is a hyperbola. The radargram shown in Fig. 11.5 has its time axis horizontal while the radargrams shown in Chapter 9 all have their time axes vertical. The orientation of a radargram is arbitrary and vertical time axes were used for GPR data because, in this measurement geometry, increasing time is associated with the downward propagation of radar waves. For the cross-borehole geometry shown in Fig. 11.5, the dominant direction of propagation is horizontal where increasing travel time is associated with increasing horizontal travel distance. The term radargram is used to denote a sequence of traces (Sect. 9.2.2) for different antenna positions. When acoustic (seismic) waves are used, the resulting display of acquired data is known as a seismogram. Acoustic waves suffer the same loss of high frequency components as radar waves when propagating through geologic material. However, this attenuation is not a result of the conversion of wave energy to induced currents but rather frictional losses. This frequency-dependent attenuation will limit the bandwidth (Sect. 9.4.2) so that real seismograms and radargrams appear remarkably similar.
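The hyperbolic pattern of direct arrivals in a homogeneous medium follows directly from straight-ray geometry. The sketch below assumes a borehole separation, a source depth, and the background wave speed used later in the chapter; none of these specific values come from Fig. 11.5 itself.

```python
import numpy as np

# Cross-borehole direct arrivals in a homogeneous medium (assumed geometry).
c0 = 0.1        # background wave speed, m/ns (value used later in the text)
x = 10.0        # borehole separation, m (assumed)
zs = 10.0       # source depth, m (assumed)
zr = np.linspace(0.0, 20.0, 21)   # receiver depths in the opposite borehole, m

# Straight-ray path length and arrival time for each receiver:
L = np.sqrt(x**2 + (zr - zs)**2)
t = L / c0      # travel time, ns

# Arrival time is earliest at the receiver directly opposite the source and
# grows with vertical offset; plotted against receiver depth this traces the
# hyperbolic pattern seen on the right of Fig. 11.5.
assert t.argmin() == 10   # receiver at the source depth arrives first
assert t[0] == t[-1]      # arrivals are symmetric about the source depth
```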
For the remainder of the presentation of geotomography, the term radargram will be used for the graphical display of acquired data. It should be understood that either radar or acoustic waves can be used for geotomography and, if acoustic waves are used, the displayed data should properly be referred to as a seismogram.
The experiment depicted in Fig. 11.5 can be repeated with a circular inclusion having a wave speed c1 embedded in the constant c0 wave speed background material (the upper right illustration in Fig. 11.6). In this example, c1 is taken to be greater than c0; however, this is not required for backprojection to work. This illustration also shows a number of ray paths from a point source to receivers in the array.
The rays passing through the circular target arrive earlier than they would in the absence of the target (Fig. 11.5) since a portion of the ray path passes through the higher wave speed region defined by the circle. The resulting radargram is shown in the lower left of Fig. 11.6 and it is obvious that this radargram is slightly different than that for a homogeneous material (Fig. 11.5). If the background wave speed c0 is known, the arrival times of all rays, in the absence of any inhomogeneities, can be predicted from the known ray path lengths using the relationship

predicted travel time = ray path length / c0.
The lower right illustration in Fig. 11.6 shows the result of subtracting the predicted arrival time for an inclusion-free medium from the radargram obtained with the circular inclusion present. This is a radargram of the perturbed travel time and it clearly depicts a time shadow of the circle. In the absence of this feature, or any other feature having a wave speed different than its surroundings, there would be no time shadow.
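The construction of a time shadow can be sketched numerically. The geometry and the "measured" times below are assumed stand-ins (straight rays, a fixed 5 m of path inside a faster inclusion for rays near the source depth); the point is only the subtraction step: measured time minus the predicted homogeneous-medium time.

```python
import numpy as np

# Time shadow: measured arrival times minus predicted times for an
# inclusion-free medium (t = L / c0). All values below are assumed.
c0, c1 = 0.1, 0.2   # background and inclusion wave speeds, m/ns
x = 10.0            # borehole separation, m
zs = 10.0           # source depth, m
zr = np.linspace(0.0, 20.0, 41)   # receiver depths, m

L = np.sqrt(x**2 + (zr - zs)**2)
t_pred = L / c0     # predicted arrival times, no inclusion

# Mock "measured" times: rays near the source depth cross the faster circle
# and arrive early. Here 5 m of each such path is at c1 instead of c0.
t_meas = t_pred.copy()
through = np.abs(zr - zs) < 3.0             # rays intersecting the circle (assumed)
t_meas[through] -= 5.0 * (1.0 / c0 - 1.0 / c1)

shadow = t_meas - t_pred   # the time shadow: negative (early) behind the target
assert (shadow[through] < 0).all()
assert (shadow[~through] == 0).all()
```

Plotted against receiver depth, `shadow` is zero except over the span of receivers behind the inclusion, which is the time shadow in the lower right of Fig. 11.6.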
The backprojection procedure described in Sect. 11.1 can be applied to time shadows such as the one shown on the lower right of Fig. 11.6. The direct use of time shadows is unnecessary, however, and here a detailed procedure will be developed for reconstructing images of wave speed from total travel time radargrams (lower left, Fig. 11.6).
The first step in this procedure is to convert from travel time to ray-averaged wave speed. This is quite simple since the path length (the distance from the source to any receiver) is known and the travel time is measured. For example, the procedure for computing the average speed of an automobile trip between two cities is known. The distance between the two cities is recorded on the odometer and the time required to complete the trip can easily be measured. The trip-averaged speed is simply

trip-averaged speed = distance traveled / travel time.
In a similar manner, the average speed for a ray arriving at a particular receiver location can be computed by

ray-averaged wave speed = source-to-receiver distance / measured travel time.
Figure 11.7 shows the radargram from a cross-borehole measurement with a circular inclusion (left) and the computed ray-averaged wave speed as a function of receiver location (center). This process has created a wave speed shadow rather than a time shadow. The ray-averaged wave speed is displayed as a line plot which has a constant value of 0.1 m/ns at the top and bottom consistent with rays passing through regions of constant wave speed. In the center of the line plot, the ray-averaged wave speed has increased to 0.11 m/ns as a result of rays passing through the higher wave speed circle. Since the ray paths are known (assumed to be straight), a ray-averaged wave speed can be assigned to each ray. This is shown on the right side of Fig. 11.7 where it has been assumed that the background wave speed, c0, is 0.1 m/ns and that the 5 m radius circular target has a wave speed, c1, of 0.2 m/ns.
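The 0.11 m/ns value quoted above can be reproduced from the travel-time relationship. The wave speeds and circle diameter are those given in the text; the source-to-receiver distance is an assumed value chosen so a horizontal ray crosses the full 10 m diameter of the circle.

```python
# Converting travel time to ray-averaged wave speed (c_avg = L / t).
c0, c1 = 0.1, 0.2   # background and circle wave speeds, m/ns (from the text)
L = 55.0            # source-to-receiver straight-line distance, m (assumed)
d = 10.0            # path length inside the circle (its diameter), m

# A ray that misses the circle travels entirely at c0:
t_miss = L / c0                       # 550 ns
avg_miss = L / t_miss                 # 0.1 m/ns, the background value

# A ray whose central 10 m crosses the faster circle arrives early:
t_hit = (L - d) / c0 + d / c1         # 450 + 50 = 500 ns
avg_hit = L / t_hit                   # 0.11 m/ns, matching Fig. 11.7 (center)

assert abs(avg_miss - 0.10) < 1e-9
assert abs(avg_hit - 0.11) < 1e-9
```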
Tomographic imaging, as presented here, is based on travel times where the shadows associated with varying receiver location can assume many values. In contrast to this situation, the simplified presentation of tomographic imaging based on light (Fig. 11.2) assumes shadows that can have only two values, black (shadow) or white (light). Conceptually, the reconstruction of tomographic images from light and ray-averaged wave speed is identical. However, the subtle differences make the actual numerical computation of a tomographic image somewhat more complicated. Following is the sequence of steps necessary to create a cross-borehole (Fig. 11.4, left) image from GPR measurements. These steps are identical for the surface-to-borehole geometry (Fig. 11.4, right) or for using seismic rather than electromagnetic energy.
The fidelity of shape replication and the accuracy of the computed wave speed will depend on the number of source locations used to create an image and their range of movement. If all source positions were quite close to the source position shown in Fig. 11.9a, all partial images would be nearly identical and the complete image would appear quite like the single partial image shown in Fig. 11.11. To illustrate how the number of source locations employed in image reconstruction affects the final image, an image of the 5 m radius circle is reconstructed using two, three, and eleven distinct source positions. The resulting images are displayed as gray-scale plots where white is assigned to background wave speed, 0.1 m/ns, black is assigned to the wave speed of the circle, 0.2 m/ns, and shades of gray are assigned to intermediate values of wave speed. Thus, a perfectly reconstructed image would appear as a black 5 m radius circle against a white background. Figure 11.13 presents the image from the two partial images shown in Figs. 11.11 and 11.12c as well as an image reconstructed from three source locations. For the two-source reconstruction (Fig. 11.13a), the image is poor. Specifically, there is poor shape reproduction and the maximum wave speed occurs in the vicinity of the circle but only has a value of 0.13 m/ns, considerably lower than the actual value of 0.2 m/ns for the circular inclusion. The shape reconstruction is considerably improved for the image based on three sources (Fig. 11.13b) where there is an elliptic area of increased wave speed (gray) in the center of the imaged region. At about 0.14 m/ns, the reconstructed wave speed of the object is still too low.
Increasing the number of source locations to eleven substantially improves the quality of the image (Fig. 11.14). Although horizontally elongated, here the image is near-circular and is characterized by a reconstructed wave speed of 0.2 m/ns, the correct value for the circular object.
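The whole straight-ray backprojection procedure can be condensed into a short numerical sketch. The geometry below (a 40 m square region between two boreholes, a 5 m radius circle at its center, eleven source depths) is an assumed setup loosely modeled on the figures; the wave speeds are the values from the text. Each ray's averaged speed is backprojected along its straight path and the contributions are averaged cell by cell.

```python
import numpy as np

# Straight ray backprojection for a cross-borehole geometry (assumed setup).
c0, c1, R = 0.1, 0.2, 5.0    # background/inclusion speeds (m/ns), circle radius (m)
cx, cz = 20.0, 20.0          # circle center (m), mid-region
n = 40                       # grid cells per side (1 m cells over 40 m x 40 m)

def travel_time(p, q, steps=400):
    """Travel time along the straight ray p -> q, sampling the speed model."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    pts = p + np.linspace(0.0, 1.0, steps)[:, None] * (q - p)
    inside = (pts[:, 0] - cx) ** 2 + (pts[:, 1] - cz) ** 2 < R ** 2
    speed = np.where(inside, c1, c0)
    seg = np.linalg.norm(q - p) / steps
    return float(np.sum(seg / speed))

accum = np.zeros((n, n))     # accumulated backprojected wave speeds
hits = np.zeros((n, n))      # number of rays crossing each cell

for z_src in np.linspace(2.0, 38.0, 11):        # eleven source depths (views)
    for z_rec in np.linspace(2.0, 38.0, 19):    # receivers in the far borehole
        p, q = (0.0, z_src), (40.0, z_rec)
        c_avg = np.linalg.norm(np.subtract(q, p)) / travel_time(p, q)
        # Backproject: assign c_avg to every cell the straight ray crosses.
        for i in range(n):
            x = i + 0.5                              # cell-center x coordinate
            z = z_src + (z_rec - z_src) * x / 40.0   # ray depth at this x
            accum[int(z), i] += c_avg
            hits[int(z), i] += 1

image = np.where(hits > 0, accum / np.maximum(hits, 1.0), c0)

# Cells inside the circle reconstruct a higher speed than background cells:
assert image[20, 20] > image[5, 5]
```

As in Figs. 11.13 and 11.14, the reconstructed speed at the inclusion rises above the background but approaches the true value only as the number and angular range of views grows.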
The astute reader may have recognized that when a wave passes from a material with a particular wave speed into a material of a different wave speed, there will be a refraction unless there is normal incidence. The phrase 'assumed to be straight' was emphasized in the previous section to call attention to the fact that this is an assumption in straight ray backprojection. One potential source of error associated with the straight ray assumption in the presence of refraction is illustrated in Fig. 11.15.
On the left in this figure, a ray is traced from the source to the boundary of a circular target, through the target after refraction, then to the receiver array following a second refraction upon exiting the circle. A second ray is also shown. This ray is straight, not passing through the circle, but arriving at the same point on the receiver array as the ray that does pass through the circle. Since these two rays travel over different distances and with different average wave speeds, they can arrive at the receiver at different times. This can give rise to a radargram such as that shown on the right of Fig. 11.15. This is a case of multipathing, first introduced in Sect. 9.2.2.
The direct arriving rays can be removed by subtracting predicted arrival times for the background wave speed to create a time shadow (Sect. 11.3, Fig. 11.6). This procedure only 'cures' part of the problem. A second complication is illustrated in Fig. 11.16. This figure traces several refracting rays through the target to illustrate that the time shadow is broader than that which would occur if all rays were straight. Also illustrated in this figure is a partial image that would result from straight ray backprojection if all rays were, indeed, straight (the darker triangle in Fig. 11.16). The light gray triangle is the partial image resulting from the straight ray backprojection of the actual refracted rays. It is obvious that this partial image is too broad and this broadening of each partial image will produce a blurred image reconstructed from many partial images (views).
There is a method by which the refraction that occurs can be more correctly accounted for in imaging. This method is called diffraction tomography. Diffraction tomography is quite mathematical and, consequently, beyond the scope of this book. However, it mimics the process of image formation employed in optical holography with the general diffraction tomography imaging procedure illustrated for the cross-borehole geometry in Fig. 11.17a. Rather than using each source location independently, the acquired data for all sources are combined in such a way that plane wave propagation in a certain direction is synthesized. This process is referred to as synthetic aperture. In Sect. 6.4 it was shown that a plane water wave can be synthesized by dropping pebbles into the water and, as shown in Fig. 6.20, when a row of dropped pebbles strikes the water surface simultaneously, a plane wave is created that propagates in a direction perpendicular to the row of pebbles. Similarly, for radar or acoustic waves, a plane wave can be synthesized from a row of source positions by simultaneously 'discharging' each source. This will produce a synthesized plane wave propagating perpendicular to the source line. To vary the view, defined here to be the direction of propagation of a synthesized plane wave, the discharging of each source can be sequentially delayed. For the row of pebbles dropped into the water, this is equivalent to dropping the pebble on the left of Fig. 6.20 first, the one to its immediate right next, and so on. It was further established in Sect. 6.4 that a plane wave has all of its rays parallel (Fig. 6.19a). This is also a characteristic of laser light used in optical holography and this is the first parallel between holography and diffraction tomography. The use of plane waves provides some degree of focus. However, their use alone is insufficient for a fully focused image. 
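The sequential delays used to steer a synthesized plane wave follow a simple linear rule: each successive source along the line fires later by the time the wavefront needs to advance one source spacing in the steered direction. The array spacing, number of sources, and steering angle below are assumed for illustration.

```python
import numpy as np

# Synthetic-aperture firing delays for steering a plane wave (assumed values).
c0 = 0.1                   # wave speed, m/ns
dz = 1.0                   # spacing between source positions, m
n_src = 16                 # number of source positions along the line
theta = np.deg2rad(30.0)   # desired propagation angle from the array normal

# Firing source i with delay i * dz * sin(theta) / c0 tilts the synthesized
# wavefront by theta -- the sequentially dropped pebbles of Fig. 6.20.
delays = np.arange(n_src) * dz * np.sin(theta) / c0   # ns

# theta = 0 recovers simultaneous firing (a broadside plane wave):
assert np.allclose(np.arange(n_src) * dz * np.sin(0.0) / c0, 0.0)
# For any other view, the delay increases linearly along the array:
assert np.allclose(np.diff(delays), dz * np.sin(theta) / c0)
```

Sweeping `theta` over a range of angles generates the set of plane-wave views from which the diffraction tomography image is assembled.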
Additional focus is provided by the application of the mathematical equivalent of a holographic lens. The use of a particular plane wave direction and the application of the holographic lens provide a partial image that, by itself, is quite good but somewhat elongated. Reconstructing a full image from partial images of different views (plane wave directions) yields images that can be superior to those of straight ray backprojection and relatively free of artifacts associated with refractions. The diffraction tomography imaging process can be applied to other measurement configurations.
Conceptually, this process requires that a synthetic aperture lens be applied to the sources to synthesize plane wave illumination, and that a holographic lens be applied to the received signal to fully focus the image. Of course these must be mathematical rather than physical lenses. Figure 11.17b shows the lens configuration for the surface-to-borehole geometry. The only difference between this and the cross-borehole geometry (Fig. 11.17a) is the position of the synthetic aperture lens which is repositioned to be consistent with the source locations. This repositioning indicates a change in the mathematical formula for the synthetic aperture lens.
It is possible to image in reflection measurement geometry, such as that used in ground penetrating radar. The reason that imaging in such a geometry was not introduced in the discussion of straight ray backprojection (Sects. 11.1 and 11.2) is that, in reflection measurements, rays can never be considered straight. For a wave to be detected in a reflection geometry, it must undergo a reflection when encountering a material having a wave speed different from its surroundings and this reflection obviously changes the ray direction.
In Sect. 11.5, a more sophisticated imaging concept was presented where the motivation was to account more rigorously for the effects of refraction through the application of synthetic aperture and holographic lenses. One fact that was omitted in this discussion of diffraction tomography is that these lenses also include the effects of reflection. Information from wave reflection is not limited to reflection measurement geometries and can, in fact, occur in transmission measurements. Figure 11.15 illustrated the effect of multipathing in a cross-borehole transmission measurement where one ray path is the straight ray from source to receiver and the second is refracted through a circular object. For the same measurement geometry but a different source position (Fig. 11.18), there is again multipathing. In this case, one ray path is direct from source to receiver while the other is a reflected ray path.
The concepts of diffraction tomography-based imaging in a reflection measurement geometry are the same as those considered for transmission mode geometries (Fig. 11.17). For clarity, these concepts are depicted in two steps in Fig. 11.19 where, in the first step, a synthetic aperture lens is applied to an array of point source locations distributed along a line on the ground surface (Fig. 11.19a). These can be multiple sources or a single source that is sequentially moved. The synthetic aperture lens converts the collective source output to plane waves over some range of propagation directions. The final step in the imaging process is to apply a holographic lens to the plane waves reflected from buried objects (Fig. 11.19b) and received along an array of receiver locations on the ground surface. This produces a focused image of a vertical cross-section below the source/receiver line.
To demonstrate tomographic imaging, a reflection-based imaging procedure is applied to data previously introduced. Specifically, the data used is the ground penetrating radar data acquired over Line 1 at Area 2 in the integrated case studies (Sect. 9.8.2, Fig. 9.42). Below this radar line a utility tunnel was detected based on an interpretation of the acquired data. The conclusion that this feature is a tunnel was based on the manner in which the anticipated features of the tunnel—a roof, floor, utilities within—should have been manifest in the data. Furthermore, the depth of specific features of the tunnel could be estimated based on observed travel time and some assumption of wave speed. Interpretation can be simplified considerably by reconstructing an image from the acquired data. Figure 11.20 presents the tomographic image of a vertical cross-section below GPR Line 1 in Area 2 (Fig. 9.41) from the data shown in Fig. 9.42. Here, the image is displayed as a gray-scale plot and, as evident from the palette on the right, reconstructed values range from -0.3 to 0.15. While the upper value of 0.15 is reasonable for an electromagnetic wave speed, the negative lower value, -0.30, is clearly not possible since a negative wave speed is meaningless. The reason for this strange value is that wave speed cannot be reconstructed in a reflection geometry; instead, the reconstructed 'property' is the reflection coefficient (Sect. 9.1.2). As annotated on the image, the top of the tunnel occurs at a depth of about 25 cm and the imaged reflection coefficient is positive. This is consistent with the definition of the reflection coefficient (Equation 9.1), and a transition from lower wave speed (concrete sidewalk) to higher wave speed (air). Conversely, at the tunnel bottom at a depth of about 2 m, there is a transition from high wave speed (air) to low wave speed (concrete) and the image properly reproduces a negative reflection coefficient.
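The signs of the imaged reflection coefficients at the tunnel roof and floor can be checked with a common velocity form of the reflection coefficient, R = (c2 - c1) / (c2 + c1), which is assumed here to be equivalent to Equation 9.1; the wave speed for concrete is likewise an assumed typical value.

```python
# Sign of the reflection coefficient at the tunnel roof and floor.
# R = (c2 - c1) / (c2 + c1) is a common velocity form, assumed equivalent
# to Equation 9.1; c_concrete is an assumed typical value.
c_concrete = 0.1   # m/ns (assumed)
c_air = 0.3        # m/ns (free-space electromagnetic wave speed)

def refl(c1, c2):
    """Reflection coefficient for a wave in medium 1 incident on medium 2."""
    return (c2 - c1) / (c2 + c1)

# Tunnel roof: low speed (concrete) to high speed (air) -> positive R.
assert refl(c_concrete, c_air) > 0
# Tunnel floor: high speed (air) to low speed (concrete) -> negative R.
assert refl(c_air, c_concrete) < 0
```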
Other features that clearly appear in the image are utilities within the tunnel and layers. In comparing the image (Fig. 11.20) to the GPR data from which it was derived, it is clear that all features are better resolved in the image. Both the top and bottom of the tunnel are flat and the utilities within the tunnel are near-circular. The sidewalls of the tunnel do not appear in the image because, in this measurement geometry, no reflections from the sidewalls can be captured (Fig. 9.15a). In addition, the imaging process has properly converted from travel time to depth so that the location of features can be determined in both lateral position and depth with reasonable precision.