VECTOR data is as frequently encountered, and as important, as scalar data. Strictly put, a vector is a tuple of n scalar components v = (v1,...,vn), vi ∈ ℝ. An n-dimensional vector describes, for example, a position, direction, rate of change, or force in ℝn. However, the majority of visualization applications deal with data that describes physical phenomena in two- or three-dimensional space. As a consequence, most visualization software defines all vectors to have three components. 2D vectors are modeled as 3D vectors with the third (z) component equal to zero. Although one could provide separate implementation-level support for 2D vectors, this would complicate the structure of visualization software and lead to code duplication, with little practical gain.
Recall from Section 3.6 that vector fields are functions f : 𝒟 → ℝ3, where 𝒟 is usually a subset of ℝ2 or ℝ3. Vector datasets are samplings of vector fields over discrete spatial domains. In this chapter, we shall discuss a number of popular visualization methods for vector datasets: vector glyphs, vector color coding, displacement plots, stream objects, texture-based vector visualization, and the simplified representation of vector fields.
A very important application domain for vector visualization is computational fluid dynamics (CFD). CFD simulations are able to predict the time-dependent behavior of compressible 3D fluid flows consisting of several potentially interacting substances, or species, having different densities and pressures, over complex spatial geometries. The solution of a CFD simulation consists of several datasets, each for a different time step. For each time step, several attributes are computed and stored into the solution dataset, such as velocity, pressure, density, flow divergence, and vorticity. Since divergence and vorticity are fundamental concepts in understanding the structure of vector fields and are used by many visualization methods for vector data, we shall detail these notions first.
We begin our discussion by first describing a number of fundamental mathematical operators that are used to analyze vector fields (Section 6.1). Next, we present vector glyphs, one of the simplest and most popular techniques used to visualize such fields (Section 6.2). The use of scalar visualization techniques to depict vector fields is discussed in Section 6.3. We then introduce the displacement plot technique for visualizing vector data (Section 6.4). Section 6.5 presents stream objects, which use integral techniques to construct paths in vector fields. Section 6.6 discusses the use of textures for visualizing vector fields. Section 6.7 discusses a number of strategies for simplified representation of vector datasets. Section 6.8 presents a number of illustrative visualization techniques for vector fields, which offer an alternative mechanism for simplified representation to the techniques discussed in Section 6.7. Finally, Section 6.9 concludes this chapter.
Divergence and vorticity are important quantities for vector field visualization, but also for the visualization and processing of other types of datasets, such as meshes, images, and scalar and tensor fields.
Given a vector field v : ℝ3 → ℝ3, the divergence of v = (vx, vy, vz) is the scalar quantity

div v = ∂vx/∂x + ∂vy/∂y + ∂vz/∂z.
Intuitively, if v is a flow field that transports mass, div v characterizes the increase or loss of mass at a given point p in the vector field in unit time.
A positive divergence at p denotes that mass would spread from p outward. Positive divergence points are called sources (see Figure 6.1(b)).
A negative divergence at p denotes that mass gets sucked into p. Negative divergence points are called sinks (see Figure 6.1(c)).
A zero divergence at p denotes that mass is transported without getting spread or sucked, i.e., without compression or expansion.
An equivalent definition of the divergence of v at a point p is

div v(p) = lim Γ→p (1/|Γ|) ∮Γ v · nΓ ds.    (6.2)
Here, Γ is a closed hypersurface (a curve for 2D vector fields and a surface for 3D vector fields) around the current point p, |Γ| is the area (2D) or volume (3D) of the space enclosed by Γ, and nΓ is the outward normal of Γ (see Figure 6.1(a)). The integral in Equation (6.2) computes the flux that the vector field transports through the imaginary boundary Γ. The limit Γ → p describes a hypersurface that shrinks around the current point p until it becomes infinitesimally small.
Figure 6.2(a) shows the divergence of a 2D flow field using a blue-to-red colormap. The vector field is visualized with arrow glyphs for illustration purposes. Red areas indicate high positive divergence, or sources. Two such sources are clearly visible. Blue areas indicate high negative divergence, or sinks. Within the dark blue area, two pronounced sinks are visible. If we correlate the divergence and vector glyph visualizations, we get the image of a flow field that emerges from the sources and ends up in the sinks.
Given a vector field v : ℝ3 → ℝ3, the vorticity of v, also called the curl or rotor of v, is the vector quantity

rot v = ∇ × v = (∂vz/∂y − ∂vy/∂z, ∂vx/∂z − ∂vz/∂x, ∂vy/∂x − ∂vx/∂y).
The vorticity rot v of v is a vector field that is locally perpendicular to the plane of rotation of v and whose magnitude expresses the speed of angular rotation of v around rot v. Hence, the vorticity vector characterizes the speed and direction of rotation of a given vector field at every point. In some textbooks, the rotor is also denoted as curl v.
An equivalent definition of the vorticity of v at a point p is given using line integrals. If we take any plane Π passing through p, with n the normal to Π, and consider a closed curve Γ ⊂ Π around p, then the projection of rot v on n is given by

(rot v) · n = lim Γ→p (1/|AΓ|) ∮Γ v · ds,

where ds is the infinitesimal oriented displacement along Γ and |AΓ| is the area enclosed by Γ (see also Figure 6.1(d)).
Vorticity signals the presence of vortices in vector fields. An informal definition of a vortex is a region where the vector field locally circles around a point called the vortex center. High-vorticity areas (such as in Figure 6.1(e)) indicate the presence of vortices. High vorticity and high divergence are typically complementary. For example, the divergence of the high-vorticity field in Figure 6.1(e) is zero, and the vorticity of the high-divergence fields in Figures 6.1(b) and (c) is also zero. However, this is not always the case: for vector fields that spiral around a point, i.e., quickly rotate around the point while at the same time converging into, or diverging out of, that point, both divergence and vorticity can be high. Additionally, when combining divergence and curl metrics in the construction of a visualization, it is useful to recall that div rot v = 0 for any vector field v.
Figure 6.2(b) shows the absolute value of the vorticity of a velocity field from a magnetohydrodynamic (MHD) simulation [Brandenburg 03], using a blue-to-red colormap. The field itself is visualized with arrow-capped stream tubes, a technique detailed in Section 6.5. Blue areas indicate low-vorticity, laminar regions. Red areas indicate high-vorticity regions. We can see two types of such regions. Two small circular red spots indicate localized vortices, which are also outlined by the circling streamlines. Several elongated thin red strips indicate areas where the vector field quickly changes direction. Given their shape, these are not vortices, but separation lines that divide regions where the flow has opposite directions. If one is interested in locating such high-vorticity areas, a simple method of reasonable accuracy is to contour the vorticity field for high isovalues and select the data points inside such a contour.
Figure 6.3 visualizes the vorticity of a more complex turbulent 2D flow. Blue and red indicate respectively counterclockwise and clockwise spinning vortices. Green indicates low-vorticity, laminar regions. The image clearly conveys the high complexity of the flow. A typical pattern for such phenomena, which are also known as turbulent flows, is the high number of vortices and the alternation of their spinning directions. Such flows are encountered in the study of aerodynamics and fluid dynamics and can exhibit highly complex patterns. Understanding such patterns is one of the important ongoing challenges of scientific visualization.
The streamwise vorticity Ω of a field v is the scalar quantity equal to the projection of rot v along v itself:

Ω = (v · rot v) / ||v||.
Intuitively, Ω describes how quickly the vector field rotates around its own direction of flow.
Another quantity used to characterize vector fields is helicity. Helicity is defined as one-half the scalar product of the velocity and vorticity vectors, i.e., H = (v · rot v)/2. Intuitively, helicity describes the extent to which the vector field exhibits a corkscrew-like local motion. Helicity is a conserved quantity if the flow is inviscid and homogeneous in density. Helicity is useful in weather studies for understanding severe convective storms and tornadoes, since in strong updrafts, the velocity and vorticity vectors tend to be aligned, yielding high helicity [Majda et al. 01].
To compute divergence and vorticity and related quantities, we need the partial derivatives of the vector field components. On a discrete dataset, these can be approximated using the formulas specific for every grid cell type given in Section 3.7. However, as with all discrete datasets, derivatives can be sensitive to noise, so the results of such computations must be interpreted carefully.
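As a concrete illustration, consider a vector field sampled on a uniform 2D grid. The following minimal, stand-alone sketch (not the Grid interface from Chapter 3; storage layout, names, and boundary handling are assumptions made here for illustration only) approximates the divergence and the out-of-plane vorticity component at interior sample points using central differences.

#include <vector>

// A uniform 2D grid of nx x ny samples with spacings hx, hy, storing the vector
// components vx, vy in row-major order. Divergence and the out-of-plane (z)
// component of the vorticity are approximated with central differences; the
// formulas are valid for interior points (0 < i < nx-1, 0 < j < ny-1).
struct VectorField2D
{
    int nx, ny;
    float hx, hy;
    std::vector<float> vx, vy;                          // each of size nx * ny

    int idx(int i, int j) const { return j * nx + i; }

    float divergence(int i, int j) const                // dvx/dx + dvy/dy
    {
        float dvx_dx = (vx[idx(i + 1, j)] - vx[idx(i - 1, j)]) / (2 * hx);
        float dvy_dy = (vy[idx(i, j + 1)] - vy[idx(i, j - 1)]) / (2 * hy);
        return dvx_dx + dvy_dy;
    }

    float vorticity(int i, int j) const                 // dvy/dx - dvx/dy (z component of rot v)
    {
        float dvy_dx = (vy[idx(i + 1, j)] - vy[idx(i - 1, j)]) / (2 * hx);
        float dvx_dy = (vx[idx(i, j + 1)] - vx[idx(i, j - 1)]) / (2 * hy);
        return dvy_dx - dvx_dy;
    }
};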
Vector glyphs are probably the simplest, fastest, and most popular technique for visualizing vector fields. The vector glyph mapping technique essentially associates a vector glyph, or vector icon, with every sample point of the vector dataset. Various properties of the icon, such as location, direction, orientation, size, and color, are adjusted to reflect the value of the vector attribute it represents. The term glyph denotes a small graphical sign, and reflects this principle of associating discrete visual signs with individual vector attributes. Every glyph is a sign that conveys, by its appearance, properties of the represented vector, such as direction, orientation, and magnitude.
There are many variations of this framework for vector glyphs. Essentially, they propose various trade-offs between sampling density (how many glyphs we can display on a given screen area) and the number of encoded attributes (how many attributes we can display per glyph). We shall present a number of vector glyphs, starting with the simplest one: the line. Lines essentially show the position, direction, and magnitude of a set of vectors. Given a vector dataset defined on a sampled domain D, we associate a line l = (x, x + kv(x)) with every sample point x ∈ D that has a vector attribute v(x). The parameter k is the scaling factor used to map the vector magnitudes to the geometric domain. Line glyphs are sometimes also called hedgehogs, due to the particularly spiky appearance of the resulting visualization.
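For illustration, a minimal sketch of this construction is given below; the sample positions and vector values are assumed to be available as plain arrays, which is a simplification of the dataset interfaces used elsewhere in this book, and all names are illustrative.

#include <vector>

struct Point2 { float x, y; };
struct Line   { Point2 a, b; };

// Build one hedgehog line per sample point: from the position x to x + k*v(x).
// positions and vectors hold the sample points and their vector attributes;
// k is the factor that maps vector magnitudes to geometric lengths.
std::vector<Line> buildHedgehog(const std::vector<Point2>& positions,
                                const std::vector<Point2>& vectors, float k)
{
    std::vector<Line> glyphs;
    glyphs.reserve(positions.size());
    for (size_t i = 0; i < positions.size(); ++i)
    {
        Point2 p = positions[i], v = vectors[i];
        glyphs.push_back({ p, { p.x + k * v.x, p.y + k * v.y } });
    }
    return glyphs;
}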
Figure 6.4 shows a line glyph, or hedgehog, visualization of a 2D vector field defined on a square domain. The vector field is the MHD dataset used earlier in Figure 6.2(b). The original uniform dataset has a resolution of 256 × 256 sample points. The images show the hedgehog visualization of the vector field uniformly subsampled in both the x and y dimensions at a rate of 2 (see Figure 6.4(a)), a rate of 4 (Figure 6.4(b)), and a rate of 8 (Figure 6.4(c)). In all these images, the line glyphs are scaled proportionally to the vector field magnitude, the scaling factor k being proportional to the subsampling rate. In Figure 6.4(d), the vector field is uniformly subsampled at a rate of 8, but the line glyphs are all scaled to the same length. The glyphs are colored by mapping the vector magnitude through a blue-to-red colormap. In this way, color cues strengthen (or replace) length cues to convey information about the vector magnitude. In many applications, color is used to show other scalar fields related to the vector field, such as pressure, temperature, or density.
Looking at Figure 6.4, we can make a number of important observations. First, it is clear that high-resolution vector datasets must be subsampled in order to be visualized with hedgehogs. Comparing Figures 6.4(a), (b), and (c), we can argue that the vector field is easier to comprehend in the last image than in the first two, as the line glyphs are longer, hence their direction and orientation are easier to discern. The direction is even easier to follow in the last image (Figure 6.4(d)), where all glyphs have the same, relatively large, size. Hence, the clarity of hedgehog visualizations depends strongly on the glyph scaling factor. Ideally, a glyph should be as large as possible, since the direction of larger glyphs is easier to perceive, but not so large that it intersects neighboring glyphs. If we scale all glyphs to the same size, as in Figure 6.4(d), this constraint is easy to obey by scaling each glyph to the average cell size at its origin. This removes clutter, but eliminates the use of the glyph size (length) as a visual cue for the vector field magnitude. If we scale the glyphs to reflect the vector field magnitude, as in Figures 6.4(a)–(c), eliminating clutter is more problematic. We could still use a single glyph scaling factor k chosen so that all glyphs are locally smaller than the cell size. Another option is to replace the linear term kv by a nonlinear one which, e.g., clamps the glyph length between minimal and maximal values, or varies logarithmically, instead of linearly, with |v|. This prevents clutter and guarantees glyph visibility, but gives up the direct proportionality between vector magnitude and glyph length.
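One possible form of such a nonlinear scaling, sketched below, confines glyph lengths to a user-chosen range [lMin, lMax] and compresses large magnitudes logarithmically; the particular profile is only one of many reasonable choices.

#include <cmath>

// Map a vector magnitude m in [0, mMax] to a glyph length in [lMin, lMax].
// The logarithmic profile keeps small vectors visible and prevents long glyphs
// from overlapping their neighbors, at the price of giving up the direct
// proportionality between magnitude and glyph length.
float glyphLength(float m, float mMax, float lMin, float lMax)
{
    float t = std::log(1.0f + m) / std::log(1.0f + mMax);   // normalized to [0, 1]
    return lMin + t * (lMax - lMin);
}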
More complex shapes can be used for glyphs besides lines. Figure 6.5 shows the same 2D vector field as in Figure 6.4, this time visualized with 3D cone and arrow glyphs. Such glyphs have the advantage of being able to convey a signed direction, whereas lines convey an unsigned direction only. However, these glyphs also take more space to draw, so they increase the clutter or require lower-resolution datasets. An interesting compromise between arrows and lines is to use Gouraud shaded lines. By shading the line glyph from full color (or full opacity) at the glyph origin to the background color (or full transparency) at the line tip, a visual effect similar to a thin arrow can be obtained without the need for extra screen space.
By using even more complex glyph shapes, we can encode more attributes than the vector field itself. This feature is needed in situations when one has to analyze not just the relative behavior of a single (vector) field in different spatial regions, but the correlations between several scalar and vector fields. Such situations occur frequently in computational fluid dynamics (CFD) simulations. A 3D CFD solution consisting of flow velocity, vorticity, divergence and material density, pressure, and temperature offers 3 + 3 + 1 + 1 + 1 + 1 = 10 attributes per dataset point. To visualize all this information, we would need to design a glyph with 10 degrees of freedom. Such glyphs have been designed and used in the visualization of fluid flow [van Walsum et al. 96], albeit with limited effectiveness.
The trade-off between the power of expression of glyphs, or number of attributes they can encode, and minimal screen size needed by a glyph is an important characteristic of glyph-based visualizations. To understand this better, let us compare these for a moment with the color-mapping visualizations discussed in Section 5.1. In both cases, the data attributes are available only at the discrete sample points of a dataset D. However, color mapping is typically applied at every point of the dataset D, either via texture-based interpolation or via the vertex-based color interpolation provided by the polygon rendering machinery. We say that color mapping produces a dense visualization, where every pixel represents an (interpolated) data value. In contrast, most glyph-based visualizations for vector data cannot have this freedom. Since a glyph takes more space than just a pixel, we cannot draw one glyph at every pixel of a given dataset. All glyph visualizations share this inherent discreteness, or sparseness, of the output. This affects the inverse image-to-data mapping (see Chapter 4) at the core of the visualization process.
Consider a zoomed-in detail showing a hedgehog plot over a single cell of a 2D vector field (see Figure 6.6). In the first case (see Figure 6.6(a)), the vector field variation over the displayed cell is quite small. We can easily interpolate mentally the displayed arrow glyphs and arrive at the conclusion that the vector field has an upper-right direction and orientation, and increases in magnitude in this direction. In the second case (see Figure 6.6(b)), the situation is more problematic. The vector field varies greatly between the vertices of the considered cell, so it is harder to mentally interpolate between these four vector glyphs and get an idea of how the field actually behaves over the considered surface. Clearly, the interpretation can get very confusing when we have hundreds of cells in this situation.
This difference between scalar (color-mapped) visualizations and vector (glyph) visualizations can be explained in sampling terms. Scalar color-mapping techniques such as the ones discussed in Section 5.1 produce a piecewise linear visualization. Glyph techniques produce a purely discrete visualization. In the first case, we do not have to mentally interpolate between drawn pixels, as the graphics hardware has done this task for us. In the second case, we only have visual indication at the sample points (e.g., cell vertices), so we must do this interpolation ourselves. When the visualized signal varies smoothly, like in Figure 6.6(a), this task is relatively easy. When this is not the case, like in Figure 6.6(b), we have a harder problem. The task is made more difficult when we have to interpolate between directions and orientations, as in the case of vector glyphs, since this is apparently not easily done by the human visual system.
Another problem of vector glyph visualizations is caused by the regular pattern of the sample points present in uniform and rectilinear grids. The problem is visible in the central area of Figures 6.4(c) and (d). In these regions, the perception of the diagonal orientation of the vector glyphs is weakened by the regular vertical pattern of the uniformly distributed sampling points. This problem affects dense visualizations to a much lesser degree. The regular subsampling problem is present also when the subsampling grid is not aligned with the original grid. Figure 6.7(a) shows an arrow glyph visualization of the same 2D vector field as in Figure 6.4, this time subsampled on a rectilinear grid rotated approximately 30 degrees with respect to the original dataset grid. The undesired visual interference between the grid lines and glyph directions is clearly visible. By subsampling the dataset using a randomly distributed (instead of regularly arranged) set of points, the problem can be alleviated. This is illustrated in Figure 6.7(b), where we use the same dataset and subsampling rate as before but a random point distribution instead of a regular one.
Vector glyphs can be used to visualize 3D vector fields, too. Figure 6.8 shows an arrow glyph visualization of a 3D vector dataset sampled on a uniform grid containing 128 × 85 × 42 data points that describes the flow of water in a box-shaped basin that has an inlet, located upper-right, an outlet, located lower-left, and two obstacles (not drawn in the figure) that cause the sinuous behavior of the flow. Visualizing such a dataset with vector glyphs at full resolution would produce a completely cluttered result. Randomly subsampling the dataset to 100,000 points and visualizing it with line glyphs produces the result shown in Figure 6.8(a). Besides the known problems of glyphs in 2D, an additional problem of 3D glyph visualization becomes apparent here: occlusion. Closer glyphs obscure further ones, which makes understanding the flow behavior deep inside the dataset quite difficult. Note that using arrow instead of line glyphs only increases the occlusion problem, as arrows have a larger screen area than lines.
We can alleviate the occlusion problem by further subsampling the dataset to only 10,000 data points (see Figure 6.8(b)). However, the dataset is now too sparse to be able to distinguish local details. A different way to tackle the occlusion problem is to draw the glyphs transparently. Figure 6.8(c) shows the same visualization as in Figure 6.8(a), but this time using line glyphs with a transparency of 0.15. Closer glyphs now cause less occlusion, allowing us to “see” deeper inside the dataset. An interesting visual effect is achieved by using monochrome, instead of color mapped, transparent line glyphs (see Figure 6.8(d)). Here, a single color (black) is blended, so the resulting visualization is easier to interpret. The high velocity “flow core” located at the center of the fluid flow is now easily visible as a dark region. We shall investigate transparency-based techniques for visualizing 3D datasets in more detail later in Chapter 10, when discussing volume visualization methods.
In addition to 2D (planar) surfaces and 3D volumes, glyph-based visualization can be used on 3D surfaces embedded in volumetric datasets. The idea behind this technique is similar to the color-mapping technique on 3D surfaces presented for scalar fields in Section 5.1. First, we select a surface of interest from a given 3D dataset. Next, we draw vector glyphs at the sample points of the surface. Figure 6.9 illustrates this. The surface of interest is a velocity magnitude isosurface of the vector field itself. The selected isovalue isolates the flow core, i.e., the region where the velocity magnitude is equal to about half the maximal velocity over the whole dataset. The square shapes of the inlet and outlet are now visible. Figure 6.9(a) shows the isosurface and line glyphs rendered on the isosurface itself. The glyphs are not color mapped by velocity magnitude as before: since the supporting surface is an isosurface of this very magnitude, all glyphs would receive the same color and, if scaled, the same length. Figure 6.9(b) shows a variant of the previous visualization. Here, the support surface is not shown, which allows us to see all the vector glyphs that were previously masked by the surface. To diminish occlusion, we use a transparency of 0.3 for the glyphs.
An important observation about vector glyph visualizations on surfaces is that the vectors do not have to be tangent to the surface. Tangency holds only for a special type of surface called a stream surface, discussed further in Section 6.5. Glyph visualizations on such surfaces are easier to understand, since the glyphs stay on the surface; on surfaces to which the field is not tangent, the glyphs cross the surface, get visually entangled with it, and cause more clutter.
Summarizing our discussion, the main advantages of glyph-based visualization of vector fields are the simple implementation and intuitive interpretation of glyphs such as arrows. However, as we have seen, these advantages are offset by several problems, such as occlusion, subsampling artifacts, and potentially difficult visual interpolation of directions. The interpretation difficulties caused by these problems can be partially overcome by techniques such as random subsampling, carefully setting glyph lengths, and, in the case of 3D glyph visualizations, by interactively investigating the result by 3D manipulation. In the following sections, we shall present several other visualization techniques for vector fields that try to alleviate these problems.
As we have seen in the previous section, dense visualizations, such as color-mapped surfaces, have several advantages compared to sparse visualizations, such as glyphs. The natural question that arises is whether we can develop dense visualizations for vector fields, similar to the color-mapped surfaces used for scalar fields. One of the simplest techniques to produce such visualizations is vector color coding. Similar to scalar color mapping, vector color coding associates a color with every point of a given surface on which we have defined a vector dataset. The color is used to encode the vector orientation and direction attributes. Vector color coding is most easily understood if we represent colors in the hue-saturation-value (HSV) system introduced in Chapter 2. Colors in the HSV system can be visualized using a so-called color wheel, such as the one shown at the right in Figure 6.10. Every distinct hue corresponds to a different angle of the color wheel: red is 0°, magenta is 60°, blue is 120°, cyan is 180°, green is 240°, and yellow is 300°. Saturation is represented as the distance from the wheel center to a given color point. Value is usually represented as a separate one-dimensional “luminance” parameter, since the color wheel can encode only two distinct parameters.
Vector color coding for 2D vector fields proceeds as follows. Assume we have a color wheel of unit radius and all vectors in the 2D dataset are scaled so that the longest one has unit length. Under these conditions, every vector is represented by the color it points to if we place it at the center of the color wheel. The vector orientation is encoded in the hue and the vector length in the value. The saturation parameter is set to one, i.e., we use only fully saturated colors. The color coding process is applied for every point of the dataset, similarly to the scalar color coding, either via texture or polygon color interpolation (see Section 5.1).
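A minimal sketch of this mapping for a single 2D vector is shown below. Note that it uses the standard HSV hue ordering (red at 0°, green at 120°, blue at 240°), which may differ from the particular color wheel of Figure 6.10; the conversion routine and parameter names are illustrative only.

#include <cmath>

struct RGB { float r, g, b; };

// Standard HSV-to-RGB conversion, with h in [0, 360) and s, v in [0, 1].
RGB hsv2rgb(float h, float s, float v)
{
    float c = v * s;
    float x = c * (1 - std::fabs(std::fmod(h / 60.0f, 2.0f) - 1));
    float m = v - c;
    float r, g, b;
    if      (h <  60) { r = c; g = x; b = 0; }
    else if (h < 120) { r = x; g = c; b = 0; }
    else if (h < 180) { r = 0; g = c; b = x; }
    else if (h < 240) { r = 0; g = x; b = c; }
    else if (h < 300) { r = x; g = 0; b = c; }
    else              { r = c; g = 0; b = x; }
    return { r + m, g + m, b + m };
}

// Color-code a 2D vector (vx, vy): orientation -> hue, magnitude -> value,
// saturation fixed to one. vmax is the largest vector magnitude in the dataset.
RGB colorCodeVector(float vx, float vy, float vmax)
{
    float hue = std::atan2(vy, vx) * 180.0f / 3.14159265f;   // angle in (-180, 180]
    if (hue < 0) hue += 360.0f;                              // map to [0, 360)
    float value = std::sqrt(vx * vx + vy * vy) / vmax;       // normalized magnitude
    return hsv2rgb(hue, 1.0f, value);
}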
Figure 6.10(a) shows the vector color coding for the same 2D vector dataset as was used in the previous section. Clearly, this image does not suffer from the sampling problems discussed for glyph visualizations, which is an advantage. Low-vector-magnitude regions can be easily detected as dark (low value) areas, whereas high-vector-magnitude regions show up as brightly colored areas. However, in contrast to the intuitive arrow plots, this visualization is highly abstract. The inverse mapping from hue to vector orientation takes quite some time to learn, so users have to be trained before they can interpret such images effectively.
Several variations of the basic idea exist. If we are interested only in the vector orientation and not the magnitude, we can set the value component to one, and we obtain the visualization shown in Figure 6.10(b). Here, the orientation patterns of the vector field are easier to distinguish than in Figure 6.10(a), since the image is brighter. Besides the standard color wheel containing all rainbow hues, other color wheels can be used to emphasize certain orientations, similar to the various colormap manipulations described in Section 5.1 for scalar fields.
Besides directional color coding, we can also directly encode the vector components vx, vy, vz into colors. In this setting, a 3D vector field is visualized by three separate scalar color-mapped fields. Although this method is probably the simplest way, from a technical perspective, to produce a visualization of a vector field, it has limited effectiveness. The user must visually correlate the same locations in three color images to get insight into the vector data at that location. Even if the user were able to accurately identify the location of the same spatial point in three different images, mentally performing three separate color-to-scalar mappings independently is a very hard task. Furthermore, it is difficult to imagine the direction of a vector just by looking at three scalar fields representing its components. For a more involved discussion of the reasons to avoid this type of color coding, see Colin Ware’s book on information visualization [Ware 04]. All in all, this method is seldom used, except in cases when users are very familiar with the vector field structure and domain shape, e.g., because the data has low variability, and want to look for specific outlier-like details.
Vector color coding can also be applied to 3D surfaces. However, the mapping of a 3D orientation to hues on the color wheel is not as simple as in the 2D case. Although such mappings can be constructed, for example by using the hue to encode the vector direction in the xy-plane and the value to encode the z component, performing the inverse mapping from hue to 3D orientation visually is generally a very challenging task. If we need less than a fully general mapping, the problem becomes simpler to tackle. For example, let us again consider the 3D fluid flow dataset discussed in the previous section, which we have visualized in Figure 6.9 with a velocity magnitude isosurface. We may be interested in seeing how much the actual flow direction differs from the isosurface normal or, in other words, how far this isosurface is from a stream surface tangent to the flow. For this, we can color-code the angle α between the surface normal n and the vector data v, which can be computed as

cos α = (v · n) / (||v|| ||n||).
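A small sketch of this computation, using a simple 3-component vector struct (an illustration, not the book's dataset classes), is shown below.

#include <cmath>

struct Vec3 { float x, y, z; };

// Angle (in radians) between the vector field value v and the surface normal n,
// obtained from the normalized dot product; both vectors are assumed nonzero.
float angleWithNormal(Vec3 v, Vec3 n)
{
    float dot = v.x * n.x + v.y * n.y + v.z * n.z;
    float lv  = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    float ln  = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    float c   = dot / (lv * ln);
    if (c >  1.0f) c =  1.0f;            // guard against round-off before acos
    if (c < -1.0f) c = -1.0f;
    return std::acos(c);
}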
The result, shown in Figure 6.11, is a scalar field that can be visualized with color mapping. On top of the colored isosurface, the vector field itself is visualized with semitransparent line glyphs. The large green areas indicate that the vector field is close to tangent to a large percentage of the given surface. The dark blue area in the upper-right part of the image indicates a region where the vector field “exits” the surface. This is the region where the inflow bounces straight against the upper obstacle inside the box (drawn in light gray). In contrast, the red region in the middle of the image indicates a region where the vector field “enters” the surface. In this area, the flow starts being deflected by the lower obstacle in the box.
Summarizing, vector color coding solves many of the technical problems that glyph plots suffer from. However, its lack of intuitiveness makes it not so popular outside specialized areas.
Vector glyphs, described in the previous section, can be understood in terms of displaying trajectories. The vector glyph with the origin at some point p can be seen as the trajectory that p would follow in v(p) over a short time interval Δt. The vector glyph shows both the start and end points of the trajectory, i.e., p and p + v(p)Δt, respectively. Displacement plots take a different approach by showing only the end points of such trajectories. Given a surface S ⊂ D inside the domain D of a vector field, where S is discretized as a set of sample points pi, a displacement plot of S is a new surface S′ given by the set of sample points

p′i = pi + k v′(pi).    (6.7)
In Equation (6.7), v′ is a vector field that controls the displacement of the surface S and k is the displacement factor (analogous to Δt) that controls how pronounced the displacement is.
In the simplest case, we can set v′ = v, and displace the surface S in the direction of the actual vector field itself. Figure 6.12 demonstrates the displacement plot technique for the 3D flow dataset introduced in Section 6.2, for two planar surfaces orthogonal to the x-axis (Figure 6.12(a)) and y-axis (Figure 6.12(b)), respectively. Both examples use a displacement factor k = 20. In this example, the displacement plots are colored by the vector field component perpendicular to the input surface, i.e., vx for Figure 6.12(a) and vy for Figure 6.12(b). Blue shows the minimal (negative) displacement, red the maximal (positive) displacement, and green indicates a nondisplaced point with vector value close to zero. The color mapping enhances the information provided by the displaced surface geometry. To ease the interpretation, the visualization is enhanced with semitransparent vector glyphs.
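A minimal sketch of this warping is given below; it assumes the surface is discretized as an array of sample points, with the displacement field v′ sampled at the same points, and reuses the small Vec3 struct from the sketch in the previous section.

#include <vector>

// Warp a discretized surface S = {p_i} into S' = {p_i + k * v'(p_i)}, i.e.,
// Equation (6.7). points holds the surface sample positions, vectors the field
// v' sampled at those positions, and k is the displacement factor.
std::vector<Vec3> displacementPlot(const std::vector<Vec3>& points,
                                   const std::vector<Vec3>& vectors, float k)
{
    std::vector<Vec3> warped(points.size());
    for (size_t i = 0; i < points.size(); ++i)
        warped[i] = { points[i].x + k * vectors[i].x,
                      points[i].y + k * vectors[i].y,
                      points[i].z + k * vectors[i].z };
    return warped;
}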
A natural interpretation of a displacement plot is to think of it as being the effect of displacing, or warping, a given surface in the vector field. For this reason, displacement plots are sometimes also called warped plots. Displacement plots have the major advantage that they produce a visually continuous result—at least when they are applied on a continuous input surface. However, displacement plots produce a more abstract, less intuitive visualization than simpler methods such as vector glyphs. In Figure 6.12(a), for example, the red areas, warped forward in the direction of the x-axis, indicate regions where the fluid flow strongly follows the inlet-to-outlet direction (see Figure 6.9 for the position of the flow inlet and outlet). Blue regions are also interesting, as these indicate a backward flow that goes against the main stream. In such regions, phenomena such as vortices can occur. Figure 6.12(b) tells a similar story, but now from the perspective of the y-axis.
Note, however, that the displacement plot technique presented here is not the same as the height-plot technique described in Section 5.4. Height plots visualize a scalar field by warping a given surface along its normal, i.e., they set the warping direction in Equation (6.7) to v′ = sn, where n is the surface normal and s is some scalar field that reflects the vector field properties. For instance, we can set s to ||v|| to show the magnitude of v, or to v · n to show the component of v along the surface normal. In contrast, displacement plots visualize a vector field by warping a given surface along the vector field itself, i.e., they do not use the surface normals. This is visible in Figure 6.12(b). Here, the front surface is warped outside the box-shaped flow domain in the area of the outlet. Similarly, the back surface is warped inside the flow domain in the area of the inlet. If we used height plots of, e.g., the velocity magnitude, these surfaces would have been warped in the direction of their normals, which would not have led to such effects.
Several elements control the quality of a displacement plot. First, the displacement factor k in Equation (6.7) must be carefully set. Values that are too large warp the input surface too much, which can easily lead to self-intersecting surfaces. Even when self-intersection does not occur, large warp factors shift the displaced surface far away from its actual location. This conveys incorrect insight to the user, who will visually associate the warped surface with the location p + kv(p) where it is drawn, and not with the original location p that the surface should depict. Values that are too small, on the other hand, do not produce a warping effect strong enough to be recognizable in the visualization and to be mapped back visually to a displacement value. Just as for color mapping, nonlinear scaling and clamping techniques can be applied to control the mapping of the vector data to displacement values. A second important parameter is the shape and position of the surface to be warped. These parameters actually control the set of points at which we are interested in visualizing the vector field. Planar surfaces are a popular choice for displacement plots. Since they are flat, displacement values are easiest to distinguish on them. However, even when using planar surfaces, some care is needed. The worst case occurs when the surface is (almost) tangent to the vector field to be visualized. In this case, the warped surface stays in the same plane as the original surface, so the actual goal of using the warping as a visual cue for the vector field is not reached. Moreover, such situations easily lead to self-intersecting polygons on the surface.
Besides planes, other geometric objects can be used to create displacement plots. In Figure 6.13, two displacement plots were created using a sphere (left) and a box (right) surface, both colored by the velocity magnitude. The visual difference between the expected shape of the original object and the perceived (deformed) shape serves as a cue for the vector magnitude. In addition to surfaces, 1D curves can be used too, if desired. In general, the choice of the object to deform should be correlated with the expected vector field behavior and meaning. For example, if we have a force field describing material deformation in the 3D domain of some mechanical assembly, we can use the assembly’s own surface as input for the displacement plot. The obtained visualization conveys the “natural” meaning that the assembly’s surface is being deformed by the force acting on it.
Both the vector glyph and displacement plot techniques visually relate the actual position p of a sample point to its displaced position p + vΔt in the vector field v. We can think of vector glyphs as “trajectories” over a short time Δt of imaginary particles released in the vector field at some desired locations. Similarly, displacement plots can be seen as the end points of the trajectory over a short time Δt of a given input surface. Hence, the natural question arises whether we could use such trajectories, computed for longer time intervals, to visualize a given vector field.
Stream objects are the answer to this question. Rather than a single visualization technique, stream objects are more of a family of such techniques, related by the idea of visualizing the trajectory of some input object in a vector field over a given time interval.
We shall start exploring the stream objects family with its simplest, and probably most frequently used, members: the streamline variations.
Before we proceed, we must make a distinction between time-dependent and time-independent vector fields. Time-dependent vector fields v : D × T → ℝ3 are defined over some spatial domain D and time interval T. That is, at the same spatial position x ∈ D, the vector field v(x, t) can have different values at different time moments t ∈ T. Time-independent, or stationary, vector fields v : D → ℝ3 are defined over a spatial domain D and do not change value in time. The two types of vector field admit different visualizations, as follows.
For time-independent vector fields, a streamline is a curved path that starts from a given point x0 and is everywhere tangent to v. If we model a streamline as a parametric function Sstream(τ) = x(τ), where τ represents the arc-length coordinate along the curve, then a streamline obeys the equation

dx/dτ × v(x(τ)) = 0.
This says that the streamline is tangent, at every point x(τ) along it, to the vector field v at that point. This can also be expressed in terms of the ordinary differential equation

dx/dτ = v(x(τ)),    (6.9)
with initial condition x(τ = 0) = x0 (the streamline must pass through a given seed point x0) and with the constraint τ ∈ [0, Smax] (the streamline has a length given by Smax). If we integrate Equation (6.9) over τ, we obtain the streamline

Sstream(τ) = x(τ) = x0 + ∫0^τ v(x(s)) ds,  τ ∈ [0, Smax].    (6.10)
Streamlines are defined for time-independent vector fields. However, various texts mention a concept of time when discussing streamline computation. This refers to the parameters τ and s in the above equations, which can be thought of as integration time, and should not be confused with the physical time of a time-dependent dataset.
In the case of time-dependent vector fields v(x, t), several types of curves can be integrated in the vector field. First and simplest, streamlines can be computed by fixing, or freezing, the physical time t to a desired value tfixed and using Equation (6.10) to compute the curves that a particle would follow in the steady field v(x, tfixed).
A second option is to compute the actual path x(t) of a particle in the time-dependent vector field, given by

x(t) = x0 + ∫t0^t v(x(s), s) ds,
starting from a given seed point x(t0) = x0. This is nothing but applying Equation 6.9 while making the integration time τ run in sync with the physical time t. The emerging curves Spath = x(t) are called particle traces, or pathlines. They describe the actual trajectory, over physical time, that a particle released at x0 follows in the time-dependent vector field.
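To make this coupling between integration time and physical time concrete, the sketch below Euler-integrates a single pathline through a time-dependent 2D field; the field is abstracted as a callable v(p, t), for example obtained by interpolating between stored time steps, and all names are illustrative assumptions.

#include <vector>
#include <functional>

struct Vec2 { float x, y; };

// Trace a pathline from seed p0 over the physical time interval [t0, t1] with
// Euler steps of size dt; the field is sampled at the current position AND the
// current physical time, which is what distinguishes a pathline from a streamline.
std::vector<Vec2> tracePathline(const std::function<Vec2(Vec2, float)>& field,
                                Vec2 p0, float t0, float t1, float dt)
{
    std::vector<Vec2> path{ p0 };
    Vec2 p = p0;
    for (float t = t0; t < t1; t += dt)
    {
        Vec2 v = field(p, t);                      // v(x, t): time-dependent sample
        p = { p.x + v.x * dt, p.y + v.y * dt };    // Euler step
        path.push_back(p);
    }
    return path;
}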
For time-dependent vector fields, we can define a different type of curve besides pathlines. Take a fixed initial point x0 in the flow domain D. Over the considered time interval T, imagine that ink is continuously released from x0. The ink gets advected by the vector field v(x, t) and yields a time-dependent curve that moves with the flow. This curve is called a streakline. Formally, denote a streakline at physical time t as a time-dependent curve Sstreak = x(τ, t), where τ is the arc-length coordinate along the streakline for a fixed time t. Then we have
In other words, the end point (for τ = t) of a streakline coincides with the end point of the pathline starting at the same seed point x0 and running for the time t.
Note that, for stationary vector fields, all of the above concepts (streamlines, pathlines, and streaklines) are identical, i.e., they lead to the same curves. In the following, we shall focus on the computation of stationary streamlines. As such, and to simplify notation, all further references to “time” denote integration time.
For a sampled vector dataset, we must solve Equation (6.10) using the definition of v given by the reconstruction equation (Equation (3.2)). In the case of piecewise linear basis functions, we remember that this reconstruction gives the vector field over every dataset cell as a linear combination of the vector field values at the cell vertices. For contours (defined by Equation (5.4)), we used this property of the reconstruction to represent contours as a set of piecewise linear primitives (lines or planes), one primitive per dataset cell. However, we cannot use the same per-cell construction strategy for streamlines. Since the interpolated vector direction usually changes within every cell, the streamline segments determined by each cell are not straight lines. Instead, we shall compute streamlines by discretizing the integration time t.
Several numerical methods solve Equation (6.10) approximately by discretizing the time t and replacing the integral with a finite sum. The simplest, but also least accurate, method is the Euler integration, given by

xi+1 = xi + v(xi)Δt, for i = 0, 1, 2, ...
The points xi sample the streamline starting from the seed point x0. The Euler integration considers the vector field v to be spatially constant and equal to v(xi) between every sample point xi and the next one xi+1. Hence, the streamline will be approximated by a piecewise-linear curve, or polyline (x0,...,xN).
Figure 6.14 shows a first example of streamlines. Here, we trace 18 × 9 = 162 streamlines in a 2D flow vector field. The seed points, indicated by gray ball glyphs, are equally spaced in the x and y directions and are obtained by regularly subsampling the dataset on a grid of 18 by 9 points. The streamlines are colored by the vector magnitude using a blue (low speed) to red (high speed) colormap. All streamlines are traced up to the same maximal time T, but have different lengths, since the vector field has large variations in speed, as indicated by the streamline colors.
The streamline tracing process is described by Listing 6.1 for a 2D vector dataset. The code in Listing 6.1 accepts a vector dataset g and traces a single streamline from a given location p0 using the integration step dt. The streamline stops when it either exits the dataset, exceeds a maximal length maxL, or exceeds a maximal time maxT. The streamline is saved as a polyline into an unstructured grid s. In this example, we use Euler integration and piecewise linear vector interpolation, but different strategies can be used if desired. In terms of cost, an important factor is the use of the Grid::findCell method, which returns the ID of the cell containing the given world point. As explained in Section 3.8, this operation can be quite costly on structured and unstructured grids, as it involves searching. Within a cell, the vector field is interpolated using the Grid::getC1Vector method described in Section 3.8. To test whether the integration has exited the current cell, we transform the world coordinates p0 to parametric coordinates q using the Grid::world2cell method and check whether the parametric coordinates fall outside the [0, 1] range. In actual production code, the search of the cell containing a given location can be done more efficiently than using the generic findCell, based on the heuristic that small integration steps potentially move the current point out of the current cell into one of its (direct) neighbor cells. Hence, the search can start checking these neighbors first. The setPoint and setCell methods add a point and a cell, respectively, to the streamline dataset (see Section 3.8). Specifically, the ith cell of the streamline is formed by the points i and i + 1 of the dataset. Finally, the length function computes the length of the line segment with end points p and p0.
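Since Listing 6.1 itself is not reproduced here, the following stand-alone sketch illustrates the same tracing loop for a uniform 2D grid with unit cell spacing. It replaces the cell-location and interpolation machinery (findCell, getC1Vector, world2cell) by a simple bilinear sampler, so all types and names are illustrative assumptions rather than the actual Grid API.

#include <vector>
#include <cmath>
#include <algorithm>

struct Vec2 { float x, y; };

// Illustrative stand-in for the vector dataset: a uniform grid over
// [0, nx-1] x [0, ny-1] with unit spacing, sampled by bilinear interpolation.
struct UniformField2D
{
    int nx, ny;
    std::vector<Vec2> v;                                 // row-major, size nx * ny

    bool inside(Vec2 p) const
    {
        return p.x >= 0 && p.y >= 0 && p.x <= nx - 1 && p.y <= ny - 1;
    }
    Vec2 sample(Vec2 p) const                            // bilinear interpolation
    {
        int i = std::min((int)p.x, nx - 2), j = std::min((int)p.y, ny - 2);
        float fx = p.x - i, fy = p.y - j;
        auto at = [&](int a, int b) { return v[b * nx + a]; };
        Vec2 v00 = at(i, j), v10 = at(i + 1, j), v01 = at(i, j + 1), v11 = at(i + 1, j + 1);
        return { (1 - fx) * (1 - fy) * v00.x + fx * (1 - fy) * v10.x + (1 - fx) * fy * v01.x + fx * fy * v11.x,
                 (1 - fx) * (1 - fy) * v00.y + fx * (1 - fy) * v10.y + (1 - fx) * fy * v01.y + fx * fy * v11.y };
    }
};

// Trace one streamline from seed p0 with Euler steps of size dt, stopping when
// the curve exits the dataset, exceeds the length maxL, or exceeds the time maxT.
std::vector<Vec2> traceStreamline(const UniformField2D& g, Vec2 p0,
                                  float dt, float maxL, float maxT)
{
    std::vector<Vec2> line{ p0 };
    float l = 0, t = 0;
    Vec2 p = p0;
    while (t < maxT && l < maxL && g.inside(p))
    {
        Vec2 v = g.sample(p);
        Vec2 q = { p.x + v.x * dt, p.y + v.y * dt };      // Euler integration step
        if (!g.inside(q)) break;                          // streamline exits the dataset
        l += std::sqrt((q.x - p.x) * (q.x - p.x) + (q.y - p.y) * (q.y - p.y));
        t += dt;
        p = q;
        line.push_back(p);                                // add a new streamline point
    }
    return line;
}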
Several technical considerations arise when computing streamlines.
A first concern regards the accuracy of the integration method used. Euler integration has an error of O(Δt2), which means that halving the integration step Δt reduces the integration error to a quarter of its value. However, numerical integration has the unpleasant property that it accumulates errors as the integration time T increases, since positions along the streamline are computed incrementally. This means, in practice, that the “tails” of long streamlines tend to deviate from their actual correct locations. The accuracy of integration can be improved by using higher-order methods. A frequently used replacement for the Euler integration is the Runge-Kutta method. This method approximates the vector field v between two sample points xi and xi+1 along a stream object with the average value (v(xi) + v(xi + v(xi)Δt))/2 of the field at xi and at the Euler-predicted next point.
In contrast, methods such as marching cubes bound the error by the size of a cell, since they do not propagate information from cell to cell. Setting the integration step Δt to small values reduces such errors, but makes the integration take more time.
Finding optimal values for Δt is, in general, a difficult problem. These depend locally on the dataset cell sizes, vector field magnitude, vector field variation, desired streamline length, and desired computation speed. By “locally,” we mean that it is often desirable to adapt Δt as the integration proceeds instead of using a constant Δt for the complete streamline. Although there is no silver bullet for setting Δt optimally, there are a few hints in this direction. Using a constant Δt is equivalent to a uniform sampling of the integration time dimension. For a vector field of varying magnitude, this obviously produces sample points xi that are spaced irregularly along the streamline, or a nonuniform sampling of the spatial dimension. This is often undesirable. Even when using a small Δt, large vector field values generate large streamline steps ||v||Δt that can skip several dataset cells, hence undersample the vector field. For a rapidly varying vector field, this can change the streamline direction dramatically, yielding a misleading visualization. A simple way to alleviate this problem is to adapt Δt locally to the vector field magnitude so that the spatial integration step ||v||Δt has constant length. Setting Δt so that this length is smaller than the current cell size ensures that no vector samples are skipped during the integration. In practice, spatial integration steps of around one-third of a cell size should yield good results for most vector fields.
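A sketch of one such integration step, combining the second-order (averaged) update discussed above with a Δt adapted so that the spatial step has a fixed target length h, is given below; it reuses the UniformField2D sampler from the previous sketch, and domain-boundary checks are omitted for brevity.

#include <cmath>

// One second-order (averaged) integration step with locally adapted dt: dt is
// chosen so that the spatial step ||v|| * dt equals the target length h, e.g.,
// one-third of a cell size.
Vec2 rk2Step(const UniformField2D& g, Vec2 p, float h)
{
    Vec2  v1  = g.sample(p);
    float mag = std::sqrt(v1.x * v1.x + v1.y * v1.y);
    if (mag < 1e-6f) return p;                        // (near-)zero vector: stop advancing
    float dt = h / mag;                               // adapt dt to the local magnitude
    Vec2  q  = { p.x + v1.x * dt, p.y + v1.y * dt };  // Euler prediction of the next point
    Vec2  v2 = g.sample(q);                           // field at the predicted point
    return { p.x + 0.5f * (v1.x + v2.x) * dt,         // advance with the averaged field value
             p.y + 0.5f * (v1.y + v2.y) * dt };
}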
A second issue regards the integration stop criterion. A maximal time criterion, such as in Equation (6.10), is quite nonintuitive, as the total streamline length largely depends on the actual vector field values and size of the domain. A better, more intuitive stop criterion is to set a maximal length for the streamline, e.g., as a fraction of the domain size. Separately, the integration process should be stopped when the vector field magnitude becomes zero, since the trajectory comes to an end there. In practice, one would stop integrating when the vector field magnitude ||v|| drops under some small value ϵ.
A third issue regards the geometric construction of the streamline. The sample code in Listing 6.1 adds one streamline point after each integration time step Δt. However, when using small Δt values (for precision), this would generate too densely sampled streamlines, which can take considerable storage space and rendering time. An optimal solution would use the distance travelled so far along the integration path (variable l in Listing 6.1) to determine the addition of new sample points: given a user-supplied minimal distance Δl, we add a new streamline point whenever l has increased by Δl since the last added point. This way, the time sampling Δt and spatial sampling Δl can be controlled independently.
A separate crucial issue for the effectiveness of streamline visualizations is the choice of the location and number of seed points. A streamline conveys information visually for the points on, or close to, its trajectory. However, when we choose a seed point, we don’t know in advance which points of a dataset the streamline will go through. Setting the seed point answers only the question “show all points that are on the trajectory starting here,” or “show where this point would be advected by the field.” A variant of this question is “show all points on the trajectory ending here.” This can be easily achieved by using negative Δt values, i.e., tracing the streamline upstream. By tracing both upstream and downstream streamlines, we can answer yet another variant of our question, i.e., “show all points on the trajectory passing through here.” One way to use streamlines is to densely sample some area of interest, or “seed area” in the dataset with seed points and next trace streamlines until some stop criterion is met. However, even if the seed area is densely sampled, this does not guarantee the complete dataset will be densely covered by streamlines. Depending on the actual field, areas in the dataset can even remain completely unintersected by streamlines, so the user cannot tell anything about the vector field there from the visualization. Conversely, other areas can be too densely covered by streamlines, leading to visual occlusion and clutter.
A different strategy for the seed point distribution is to densely sample the complete dataset instead of only some given area. The aim of this strategy is to produce visualizations that answer the question “show the complete dataset with streamlines.” The resulting visualization should have several properties, as follows [Verma et al. 00]:
Coverage: Every dataset point should be at a minimal (small) distance from some streamline, so that the user has some visual cue of what the field looks like close to that point. In a more relaxed setting, we can require that the streamlines should cover all important flow features. By important features, we mean all features of the vector field that are of interest for a given user in a given application context. Although task-specific, and thus hard to ensure in a unique manner, this is an important design principle. We can restate this also in terms of having streamlines sample the “feature space” densely enough, so that all items that are considered relevant features are reflected in the visualization.
Uniformity: The density of the streamlines in the final image should be confined between some minimal and maximal values. This translates into having a bounded streamline sampling density of the image plane. Indeed, if we have too many streamlines over a given area, cluttering will occur. If we have too few then undersampling occurs, i.e., we are not able to tell what the field looks like between two streamlines. This is relatively simple to ensure for 2D datasets, where the image space coincides with the geometric (dataset) space. For 3D visualizations, this is harder to do, as it requires either a view-dependent evaluation of the streamline density in image space or a more restrictive bounding of the volumetric streamline density.
Continuity: Longer streamlines are preferred over short ones, as they produce less discontinuous, easier to interpret, visualizations. On a more subtle level, visual continuity across a streamline is related to the uniformity criterion mentioned previously.
Just as for the vector glyphs, several sampling strategies are possible to create dense streamline visualizations. The simplest solution is to distribute the seed points regularly or randomly in the domain and trace streamlines of some minimal length. This solution gives good coverage but can easily lead to cluttering. A better solution is to stop tracing a streamline when it gets closer than a user-specified small distance to any of the already traced streamlines, or to itself, thereby minimizing clutter. We can combine this idea with an iterative insertion of seed points in the areas of the dataset that are undersampled, thereby maximizing coverage. Finally, we can discard streamlines that are too short and keep trying new seed points until the streamline length exceeds the desired threshold, thereby maximizing the average streamline length. A simple-to-implement and effective method for creating evenly distributed streamlines based on these ideas was proposed by Jobard and Lefer [Jobard and Lefer 97]. This method produces all streamlines in a single pass, but tends to favor short streamlines over potentially longer ones, and does not guarantee a uniform coverage of the computational domain with streamlines, i.e., a uniform streamline density. A related method, which optimizes an energy-like quality functional of the seed-point positions and streamline characteristics, was proposed by Turk and Banks [Turk and Banks 96]. This method has been extended to structured curvilinear grids [Mao et al. 98]. In contrast to the single-pass approach of Jobard and Lefer, the method of Turk and Banks starts with small streamline fragments, which are iteratively merged, shortened, or lengthened, and new streamlines are created, until the desired uniform coverage is obtained. As such, this method generates, on average, longer streamlines than the approach of Jobard and Lefer, but is more complex to implement and computationally slower. Methods that sample the dataset with streamlines that attempt to cover all topological patterns of interest in the vector field are proposed in [Verma et al. 00] for 2D fields and in [Ye et al. 05] for 3D fields.
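To make the above strategy concrete, the sketch below implements a strongly simplified, brute-force variant in the spirit of [Jobard and Lefer 97], reusing the UniformField2D sampler and Vec2 struct from the earlier sketches: candidate seeds are generated sideways of accepted streamlines at the separation distance dSep, and each streamline is traced (forward only, with Euler steps) until it leaves the domain or comes closer than dTest to an already accepted streamline. The self-proximity test, upstream tracing, and the acceleration grid of the original method are omitted here.

#include <vector>
#include <cmath>
#include <algorithm>
#include <utility>

using Streamline = std::vector<Vec2>;

// Brute-force distance from p to the closest sample point of any accepted
// streamline (the original method uses a uniform bucket grid to speed this up).
static float distToLines(const std::vector<Streamline>& lines, Vec2 p)
{
    float d = 1e30f;
    for (const Streamline& s : lines)
        for (Vec2 q : s)
            d = std::min(d, std::hypot(p.x - q.x, p.y - q.y));
    return d;
}

std::vector<Streamline> evenlySpacedStreamlines(const UniformField2D& g, Vec2 seed0,
                                                float dSep, float dTest,
                                                float dt, int maxSteps)
{
    std::vector<Streamline> lines;
    std::vector<Vec2> candidates{ seed0 };
    while (!candidates.empty())
    {
        Vec2 seed = candidates.back();
        candidates.pop_back();
        if (!g.inside(seed) || distToLines(lines, seed) < dSep)
            continue;                                     // seed too close to existing lines
        Streamline s{ seed };                             // trace a new streamline
        Vec2 p = seed;
        for (int i = 0; i < maxSteps; ++i)
        {
            Vec2 v = g.sample(p);
            float m = std::hypot(v.x, v.y);
            if (m < 1e-6f) break;                         // (near-)zero vector: stop
            p = { p.x + v.x * dt, p.y + v.y * dt };       // Euler step
            if (!g.inside(p) || distToLines(lines, p) < dTest) break;
            s.push_back(p);
        }
        if (s.size() < 2) continue;                       // discard degenerate streamlines
        for (Vec2 q : s)                                  // emit candidate seeds sideways
        {
            Vec2 v = g.sample(q);
            float m = std::hypot(v.x, v.y);
            if (m < 1e-6f) continue;
            Vec2 n = { -v.y / m, v.x / m };               // unit normal to the local field
            candidates.push_back({ q.x + dSep * n.x, q.y + dSep * n.y });
            candidates.push_back({ q.x - dSep * n.x, q.y - dSep * n.y });
        }
        lines.push_back(std::move(s));
    }
    return lines;
}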
A different method for creating evenly distributed streamlines for 2D vector fields, related to the previous techniques, is “farthest point streamline seeding” (FPSS) [Mebarki et al. 05]. Similar to the method of Jobard and Lefer, FPSS adds streamlines iteratively until the desired dense coverage is obtained. However, in contrast to its counterpart, FPSS starts integrating new streamlines as far away as possible from the existing ones. The rationale behind this heuristic is that such streamlines have the greatest chance of achieving maximal length before getting too close to existing ones, since they start far away from the latter. The heuristic to find the next best seed is to look for the center of the largest cavity, or gap, between existing streamlines. For this, FPSS first computes a Delaunay triangulation of all sample points of already added streamlines. The key idea is that the center of the largest circumscribed circle of all these triangles is a good estimate for the center of the largest existing cavity, thus a good estimate for the next streamline seed point to place. FPSS starts with a fine discretization of the computational domain border, which generates the first Delaunay triangulation. After this, streamlines are iteratively seeded using the cavity-center heuristic described above, and traced upstream and downstream until they get closer to any existing streamline than a user-prescribed distance that controls the target density. After each new streamline is added, the Delaunay triangulation is recomputed. To speed up the seed finding, triangles are kept in a priority queue sorted on their circumscribed circle radius. The method is simple to implement and generates densely seeded streamline images for large 2D vector fields in a matter of seconds on a modern computer.
Figure 6.15 illustrates the FPSS method. The first image (Figure 6.15(a)) shows a 2D vector field, visualized with arrow glyphs placed on a uniform grid. Figures 6.15(b,c) show two results of the dense seeding method, computed for two different density values. The first density value corresponds to the distance between neighboring arrow glyphs in Figure 6.15(a). The second density value is double the first, and thus generates roughly twice as many streamlines. As can be seen, the method effectively covers the data domain with long streamlines and also achieves a nearly uniform streamline density. As an additional display enhancement, streamlines are rendered using tapering, i.e., drawn thicker near their midpoints and thinner near their start and end points. This achieves a visually more uniform coverage of the data domain with ink, since streamline midpoints are, by construction, farthest away from other streamlines.
Evenly-spaced streamline seeding is also applicable to 3D datasets. For 3D volumetric datasets, generating streamlines which are evenly spaced in the 3D world space can be done, e.g., by direct adaptation of similar 2D methods. However, this does not guarantee that the rendered streamlines will be evenly spaced in screen space, leading to possible clutter, occlusion, and hard-to-interpret images. One approach to improve upon this is to generate seeds in image space over a curved seed surface embedded in 3D, backproject these seeds onto the surface in 3D, and trace streamlines from these 3D seed positions. Streamlines are then selected for rendering if their screen-space projections are evenly spaced [Li and Shen 07]. As seed surface, slice planes, isosurfaces of various flow quantities such as vector field magnitude, or stream surfaces can be used. A simpler, and visually more effective, solution is to work entirely in image space [Spencer et al. 09]: Given a 3D seed surface S3D, we project the vector field, restricted to S3D, to the 2D image plane. Next, we apply the 2D evenly-spaced streamline generation of [Jobard and Lefer 97] to the projected vector field over the 2D projection S2D of S3D. Additional care, however, needs to be taken to ensure that the generated 2D streamlines stop at the points of S2D where the depth to the image plane exhibits discontinuities, so that streamlines do not “jump” between parts of the surface which are close on S2D but far away on S3D.
Figure 6.16(a) shows the image-based streamline placement of [Spencer et al. 09] for a flow field atop the surface of a cooling jacket in a car engine simulation. As visible, streamlines appear (nearly) evenly spaced from the considered viewpoint. Figure 6.16(b) refines the basic idea by adapting the inter-streamline distance and the tapered streamlines’ thickness to the distance to the image plane. As such, the image still exhibits a uniform ink density, similar to [Mebarki et al. 05], but the streamline density becomes an effective depth cue.
In addition to plain lines, we can use other graphical shapes to visualize the integral trajectories. A popular choice is stream tubes. These can be constructed by sweeping a circular cross section along the streamline curve computed as described previously. At every streamline point, the cross section is kept orthogonal to the streamline tangent vector. Additionally, we can use a vector glyph at the downstream end of the stream tube to indicate the vector field direction, which is not shown by the plain streamlines or stream tubes. Figure 6.17 demonstrates this technique on our familiar 2D MHD vector field. In both images, around 500 stream tubes capped with vector glyphs are drawn from a set of seed points obtained by regularly subsampling the domain in both directions. The difference between the two images lies in the way we cap the tubes.
In Figure 6.17(a), the stream tubes are integrated forward in the vector field and the caps are, hence, placed at the downstream end of the tubes. In this case, the tubes begin on a regular (uniform) grid and the arrow glyph ends exhibit an irregular behavior, subject to the vector field behavior. In Figure 6.17(b), the stream tubes are integrated backward in the vector field, so the caps are placed at the upstream end of the tube, i.e., the seed points themselves. The tubes appear to begin on an irregular grid (the set of end points of the backward integration trajectories) and they end, with arrow glyphs, on the regular seed point grid. Depending on the actual visualization task, as well as the aesthetic preference of the user, one image may be subtly better than the other.
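A minimal sketch of the sweeping construction is given below: at every streamline point, a local frame orthogonal to the tangent is built and a ring of cross-section vertices is emitted; consecutive rings can then be connected with quads to form the tube surface. The frame construction shown is the simplest possible one and is only illustrative; production code would typically propagate the frame along the curve (parallel transport) to avoid twisting artifacts.

// Sketch of sweeping a circular cross section along a streamline polyline to
// build a stream tube. For each streamline point, a ring of nSides vertices is
// placed in the plane orthogonal to the local tangent.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x*s, a.y*s, a.z*s}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 normalize(Vec3 a) {
    float l = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return (l > 0) ? mul(a, 1.0f / l) : a;
}

// Returns, for each streamline point, the ring of nSides tube vertices.
std::vector<std::vector<Vec3>>
tubeRings(const std::vector<Vec3>& line, float radius, int nSides) {
    std::vector<std::vector<Vec3>> rings;
    int n = int(line.size());
    for (int i = 0; i < n; ++i) {
        // Tangent: central difference in the interior, one-sided at the ends.
        Vec3 t = normalize(sub(line[std::min(i + 1, n - 1)], line[std::max(i - 1, 0)]));
        // Any vector not parallel to t yields a valid frame via cross products.
        Vec3 up = std::fabs(t.z) < 0.9f ? Vec3{0, 0, 1} : Vec3{1, 0, 0};
        Vec3 n1 = normalize(cross(t, up));
        Vec3 n2 = cross(t, n1);
        std::vector<Vec3> ring;
        for (int k = 0; k < nSides; ++k) {
            float a = 6.2831853f * k / nSides;   // angle around the cross section
            ring.push_back(add(line[i],
                add(mul(n1, radius * std::cos(a)), mul(n2, radius * std::sin(a)))));
        }
        rings.push_back(ring);
    }
    return rings;
}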
The thickness, or radius, parameter of stream tubes can also be used to convey some extra information. For example, we can modulate the tube radius to map a scalar value along the stream tubes, such as temperature, density, viscosity, or pressure, but also flow-related quantities, such as vorticity. However, this degree of freedom has some limitations. The visual range of the radius is quite small. There is a lower bound imposed by the difficulty of distinguishing the actual radius of tubes that are too thin, and there is an upper bound beyond which stream tubes would take too much space and (self-)intersect, just as with the vector glyphs. An interesting effect can be obtained by modulating both the tube radius and color as functions of the normalized tube length (see Figure 6.18). We use here the same 2D flow dataset as our first streamline example (see Figure 6.14). For every seed point, obtained by regular domain subsampling, we trace a stream tube whose radius varies linearly from some maximum value Rmax to 0 and whose luminance varies from black to white. The obtained effect, similar to the streamline tapering illustrated earlier in Figure 6.15, resembles a set of curved arrow glyphs. The luminance and radius visual cues enhance each other to convey an arguably better insight than the plain streamlines of Figure 6.14.
Choosing an appropriate sampling strategy that solves the coverage, density, and continuity issues well is more critical when tracing streamlines in 3D datasets as compared to 2D datasets. Figure 6.19 illustrates several sampling strategies and parameter settings for a 3D flow dataset. In all cases, we obtain the uniformly spaced seed points by undersampling the uniform dataset (128 × 85 × 42 points) at some given rate in all three dimensions. The streamlines are colored by the velocity magnitude using a blue-to-red colormap. First, we undersample at a rate of 10 × 10 × 10 (see Figure 6.19(a)) and use a maximal streamline length of 100, which is close to the length of the domain. This avoids cluttering but creates a sparse visualization that fails to convey insight in many areas. We can improve coverage by decreasing the undersampling to a rate of 3 × 3 × 3 while using the same maximal length (see Figure 6.19(b)). Due to the increased streamline density, the flow structure becomes easier to follow, at least in the outer zones. The relative spatial continuity in velocity magnitude, mapped to color hue continuity, helps us distinguish coherent flow patterns, such as the high-speed flow inner core (green) and the maximal speed zone located at the outflow (red). However, occlusion becomes a problem. We can solve the occlusion problem as we did for the vector glyphs, i.e., by lowering the streamline opacity to 0.1 (see Figure 6.19(c)). Finally, as a comparison, we use the same 3×3×3 undersampling rate but now trace streamlines until a maximal time of 100 is reached (see Figure 6.19(d)). In this dataset, the velocity magnitude ranges between 0 and 2. However, as we see in Figure 6.19(d), we now obtain many very short streamlines in the low-speed flow areas. This lets us better visualize the flow’s high-speed inner core. This image can be thought of as generalizing the vector glyph visualization in Figure 6.8(c): The vector glyphs are actually very short streamlines computed with a single integration step.
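The difference between the two stopping criteria used above is easiest to see in code. The sketch below traces a single streamline with a midpoint (second-order Runge-Kutta) integrator and stops either on accumulated arc length or on integration time, whichever comes first; the vector field is passed as an interpolation callback, and the names as well as the omitted domain-boundary test are illustrative simplifications.

// Sketch of a streamline tracer with two stopping criteria: maximal arc
// length versus maximal integration time (compare Figures 6.19(a-c) and (d)).
#include <cmath>
#include <functional>
#include <vector>

struct V3 { float x, y, z; };

std::vector<V3> traceStreamline(const std::function<V3(const V3&)>& field,
                                V3 seed, float dt,
                                float maxLength,   // stop on arc length...
                                float maxTime)     // ...or on integration time
{
    std::vector<V3> line{seed};
    float length = 0, time = 0;
    V3 p = seed;
    while (length < maxLength && time < maxTime) {
        V3 v = field(p);
        float speed = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        if (speed < 1e-6f) break;                        // critical point reached
        // Midpoint (RK2) integration step.
        V3 mid{p.x + 0.5f*dt*v.x, p.y + 0.5f*dt*v.y, p.z + 0.5f*dt*v.z};
        V3 vm = field(mid);
        V3 q{p.x + dt*vm.x, p.y + dt*vm.y, p.z + dt*vm.z};
        length += std::sqrt((q.x-p.x)*(q.x-p.x) + (q.y-p.y)*(q.y-p.y)
                          + (q.z-p.z)*(q.z-p.z));
        time += dt;
        p = q;
        line.push_back(p);
    }
    return line;
}

Bounding the arc length gives streamlines of comparable extents; bounding the time makes slow-flow regions produce short streamlines, which is what de-emphasizes them in Figure 6.19(d).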
Being 3D objects, stream tubes have the extra advantage of providing some shading and occlusion cues, which allow us to better determine their actual relative position and orientation in 3D vector visualizations, as compared to plain streamlines drawn as 1D curves. However, as is already visible in Figure 6.19, stream tubes are thicker, so they take more screen space, which increases cluttering.
In case we are not interested in a dense coverage of the complete domain with stream objects, choosing the seed area must be done with the same level of care for 3D datasets as for 2D datasets. Figure 6.20 illustrates this for the already familiar 3D flow dataset. Here, we densely sample a circular area close to the flow inlet with 200 stream tubes. The visualization clearly shows how the flow bounces against the invisible obstacle close to the inlet, gets deflected in all directions, next bends to avoid a second obstacle, and finally exits the domain via the outlet. In the last portion, we also detect a separation of the flow into two symmetric twisting patterns that are both drawn into the outlet.
In Figure 6.20, we saw how a dense bundle of stream tubes can be used to gain insight into how a 3D vector field twists around its direction of advection. The visual cues to look for are stream tubes that twist around each other, yet stay close to each other. We can get a similar type of insight using a different visualization technique called stream ribbons. A stream ribbon is created by launching two streamlines from two seed points close to each other. The surface created by the lines of minimal length with end points on the two streamlines is called a stream ribbon. If the two streamlines stay relatively close to each other then the stream ribbon’s twisting around its center curve gives a measure of the twisting of the vector field around the direction of advection.
Figure 6.21 shows two examples of stream ribbons. Both examples trace the ribbons from the inlet region of our familiar 3D flow dataset. In the left image, two relatively thick ribbons are traced. The left ribbon quickly enters a region of high vorticity, as indicated by its twisting. In contrast, the right ribbon stays relatively untwisted until its last portion, where it shows evidence of some moderate vorticity. Both ribbons are colored with the streamwise vorticity using a classical blue-to-red colormap. The two streamlines that form the edge of each ribbon are visualized with stream tubes. As an extra element, vector glyphs are added on the central symmetry curve of each ribbon, to show the advancing direction of the flow.
In the right image, 20 stream ribbons are traced from the same inlet region as in the previous case. In the first portion, the flow is laminar, so the stream ribbons stay connected to each other, forming a stream surface, as we shall see in the next section. This apparent surface is suddenly broken by the impact of the flow with the invisible obstacle situated in the domain. From this location, every stream ribbon evolves separately from the others, the complete set of ribbons being split into two main components. In the last portion, the ribbons exhibit the same general twist pattern that was shown by the stream tube visualization in Figure 6.20.
We have seen in the previous section how stream ribbons, densely seeded on a given curve, can be used to visualize how that curve would be advected in the vector field. This technique can be generalized to compute so-called stream surfaces of the vector field. Given a seed curve Γ, a stream surface SΓ is a surface that contains Γ and is everywhere tangent to the vector field. Given this definition, both stream ribbons and stream tubes can be seen as particular cases of stream surfaces. For stream tubes, the seed curve is a small closed curve such as a circle. For the stream ribbons, the seed curve is a short line segment. Stream surfaces have the intuitive property that the flow described by the vector field is always tangent to the surface, i.e., cannot cross it. Hence, stream surfaces whose seed curves intersect the flow domain boundary or are closed curves can be used to segment the flow domain into disjoint regions that exhibit noninterfering flow patterns. Also, when compared to streamlines, stream surfaces have the additional advantage of being two-dimensional objects, which are easier to follow visually.
Stream surfaces can be constructed in several ways. A simple approach starts by tracing densely seeded streamlines from the seed curve. Next, the traced streamlines are connected to generate the stream surface. This can be done, for example, by connecting each vertex of the actual polylines that represent the streamlines with the closest vertex of any other such polyline. A different approach is to parameterize the streamlines by traveled distance. Points on all streamlines that have the same traveled distance value are next connected. Both approaches essentially parameterize the stream surface along two directions, one tangent to the (advected) seed curve, and one normal to it, in the direction of the flow. Sampling the two directions allows us to construct a stream surface as a quad mesh.
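A minimal sketch of the second construction, assuming the streamlines have already been traced, is shown below: each streamline is resampled at equal traveled-distance intervals, and samples with equal parameter values on adjacent streamlines are connected into quads. The names are illustrative, and the handling of divergent or convergent regions discussed next is deliberately left out.

// Sketch of building a stream surface quad mesh by connecting adjacent
// streamlines at equal traveled-distance (arc-length) parameter values.
#include <cmath>
#include <vector>

struct P3 { float x, y, z; };
struct Quad { int a, b, c, d; };   // indices into the vertex array

static float dist(P3 a, P3 b) {
    return std::sqrt((a.x-b.x)*(a.x-b.x)+(a.y-b.y)*(a.y-b.y)+(a.z-b.z)*(a.z-b.z));
}

// Resamples a polyline at nSamples (>= 2) equidistant arc-length positions.
std::vector<P3> resampleByArcLength(const std::vector<P3>& line, int nSamples) {
    if (line.size() < 2) return std::vector<P3>(nSamples, line.empty() ? P3{0,0,0} : line[0]);
    std::vector<float> s(line.size(), 0);
    for (size_t i = 1; i < line.size(); ++i) s[i] = s[i-1] + dist(line[i-1], line[i]);
    std::vector<P3> out;
    for (int k = 0; k < nSamples; ++k) {
        float target = s.back() * k / (nSamples - 1);
        size_t i = 1;
        while (i < line.size() - 1 && s[i] < target) ++i;
        float t = (s[i] > s[i-1]) ? (target - s[i-1]) / (s[i] - s[i-1]) : 0;
        out.push_back({line[i-1].x + t*(line[i].x - line[i-1].x),
                       line[i-1].y + t*(line[i].y - line[i-1].y),
                       line[i-1].z + t*(line[i].z - line[i-1].z)});
    }
    return out;
}

// Connects adjacent resampled streamlines into a quad mesh.
void buildStreamSurface(const std::vector<std::vector<P3>>& streamlines, int nSamples,
                        std::vector<P3>& verts, std::vector<Quad>& quads) {
    for (const auto& sl : streamlines)
        for (const P3& p : resampleByArcLength(sl, nSamples)) verts.push_back(p);
    int nLines = int(streamlines.size());
    for (int l = 0; l + 1 < nLines; ++l)
        for (int k = 0; k + 1 < nSamples; ++k)
            quads.push_back({ l*nSamples + k,      (l+1)*nSamples + k,
                              (l+1)*nSamples + k+1, l*nSamples + k+1 });
}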
When constructing stream surfaces, one must be careful to correctly treat regions of high divergence, whether positive or negative. In such regions, the streamlines do not run parallel to each other. Detecting such regions can be done, for example, by comparing the distances between the advected seed curve points that are to be connected against a reference value. If the actual distance becomes too small, the streamlines converge, so it may be desirable to remove some of them from the tracing process to prevent overly small polygons from being created, reduce computational costs, and increase performance. If the actual distance becomes too large then the streamlines diverge. Such a situation is easily visible in Figure 6.21 at a short distance from the inlet (upper right), where the flow, and hence the ribbons too, abruptly get split into two separate sets. In such a case, we must decide whether we want to explicitly model this as a flow split event or not. If so, then the neighbor streamlines which diverge should not be further connected past the split event, in order to yield a “tear” in the stream surface. Note that such streamlines can converge again further downstream. In such a case, they will be reconnected, and this will lead to the appearance of holes in the stream surface. If we do not want to model such splits then extra seed points can be added along the advected seed curve segment whose length exceeds the reference value, in order to maintain a high streamline density, and the generation of the stream surface is continued as usual.
Figure 6.22 shows an example of stream surfaces, traced in a 3D dataset which contains a vector field describing the gas flow in an engine combustion chamber. Color shows vector field magnitude using a rainbow colormap. Figure 6.22(a) shows 100 stream tubes traced from a uniformly distributed set of seed points placed along a seed line, displayed in red. As visible, the streamlines have both convergent and divergent regions. Figure 6.22(b) shows the stream surface constructed from these streamlines. The stream surface starts relatively flat close to the seed line. After a while, however, creases appear in the surface, as streamlines become less parallel to each other, due to the nonuniformities of the vector field. Close to the streamlines’ ends, i.e., in the lower part of the figure, splits appear in the stream surface as several neighbor streamlines become highly divergent. These splits, as well as the regular mesh structure of the stream surface imposed by its construction algorithm, are better visible in the detail image shown in Figure 6.22(c).
A different stream surface approach that avoids the mesh construction challenges outlined above is to define the stream surface implicitly [van Wijk 93]. Given a 3D vector field domain Ω ⊂ ℝ3, we first define a scalar function f∂Ω : ∂Ω → ℝ on the boundary ∂Ω of Ω. f∂Ω should be defined so that one or several of its isolines over ∂Ω correspond to our seed curve(s) of interest Γ. Next, from each point x of the discretization of Ω, a streamline S is traced backwards in the vector field. If S hits ∂Ω at some point y after a given amount of time, the value f∂Ω(y) is assigned to all points along S in Ω. After all points of Ω have been thus visited, we obtain a scalar field fΩ : Ω → ℝ. Next, the desired stream surface corresponding to Γ can be easily extracted as an isosurface of fΩ for the isovalue that f∂Ω takes on Γ, using for example the marching cubes algorithm (Section 5.3.2). Although this method is simple to implement, and delivers at once a family of stream surfaces for different isovalues rather than a single such surface, it cannot extract stream surfaces which do not pass through the boundary of the dataset.
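A sketch of the scalar field construction at the heart of this method is given below. The backward streamline tracer, the boundary scalar function f∂Ω, and the grid geometry are passed as callbacks, since their details depend on the dataset at hand; all names are illustrative.

// Sketch of the implicit stream surface construction: every grid point of the
// volume is traced backwards; if the trace reaches the domain boundary, the
// boundary scalar value at the exit point is copied to the grid point.
// Isosurfacing the resulting scalar volume then yields the stream surfaces.
#include <functional>
#include <vector>

struct Pos { float x, y, z; };

void buildImplicitStreamSurfaceField(
    int nx, int ny, int nz,
    const std::function<Pos(int,int,int)>& gridPoint,             // index -> position
    const std::function<bool(const Pos&, Pos&)>& traceToBoundary, // backward trace
    const std::function<float(const Pos&)>& boundaryValue,        // f on the boundary
    std::vector<float>& fOmega)                                   // output scalar field
{
    const float UNDEFINED = -1e30f;   // marks points whose trace never exits
    fOmega.assign(size_t(nx) * ny * nz, UNDEFINED);
    for (int k = 0; k < nz; ++k)
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i) {
                Pos exitPoint;
                if (traceToBoundary(gridPoint(i, j, k), exitPoint))
                    fOmega[(size_t(k) * ny + j) * nx + i] = boundaryValue(exitPoint);
            }
    // Next step (not shown): extract the isosurface of fOmega for the isovalue
    // that the seed curve takes on the boundary, e.g., with marching cubes
    // (Section 5.3.2).
}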
For time-dependent vector fields, streak surfaces generalize the notion of streaklines in the same way that stream surfaces generalized the notion of streamlines. Given a seed curve Γ, a streak surface SΓ is a surface that contains all massless particles that pass through Γ at different time moments, or, in other words, the (time-dependent) locus of dye advected in the flow that originates at Γ.
Conceptually, streak surfaces can be computed using a similar approach to stream surfaces: Streaklines are traced in the flow starting from a densely sampled representation of the seed curve Γ. The streak surface is then constructed by connecting streaklines for adjacent seeds to yield a quad mesh. However, apart from the additional challenges posed by highly divergent and/or convergent flow regions already outlined for stream surfaces, streak surfaces add the complexity that each surface sampling point needs to be integrated, or advected, at each time step as the vector field changes. This is computationally much more demanding than tracing stream surfaces for time-independent vector fields. Additionally, due to the high dynamics of streak surfaces, the sampling problems due to vector field convergence or divergence can occur at any point on the surface, rather than around the advancing front represented by the end points of the traced streamlines, as is the case for stream surfaces. This requires additional care in the generation of the mesh representing the streak surface. These challenges make streak surface implementations less common in flow visualization software.
A relatively simple-to-implement algorithm for constructing streak surfaces as quad meshes from a set of streaklines, including the robust handling of highly divergent and convergent flow regions to generate a good quality mesh, is presented in [McLoughlin et al. 10]. Figure 6.23 shows a streak surface computed with this method, drawn from a seed line Γ in a time-dependent flow. The surface is colored by the velocity field magnitude using a rainbow colormap. As visible, the method can accommodate the computation of the complex emerging surface that exhibits tearing as the flow moves around the obstacle.
So far, most of the presented vector visualizations used discrete objects, such as vector glyphs, streamlines, or stream surfaces. By their very nature, discrete visualizations cannot convey information about every point of a given dataset domain, as we have seen on several occasions. The interpolation of attributes between these discrete objects must be done visually by the user. In contrast, dense visualizations such as color plots present the user with a (piecewise) continuous signal that can be easier to interpret. The question arises how we can do this for vector fields.
Texture-based visualizations are an answer. The idea is to create a texture signal that encodes the direction and magnitude of a vector field in the various texture parameters such as luminance, graininess, color, and pattern structure. The main challenge here is how to encode the vector direction in the texture parameters. An intuitive and effective idea is to use the texture graininess for this. Since their appearance at the beginning of the 1990s [van Wijk 91, Cabral and Leedom 93], several methods have used this principle to produce visualizations of vector fields.
To understand the basic principle, consider an input texture N consisting of small-scale black-and-white noise defined over the domain of the 2D vector field we want to visualize (see Figure 6.24). For each pixel p of this domain, we trace a streamline S(p, s) upstream and downstream through p for some maximal distance L. Here, s parameterizes the streamline. Next, we set the value T(p) of the output texture T at the current location p to be the weighted sum of the values of the input noise texture N measured along the streamline S(p) that passes through p. As a weighting function k(s) : ℝ → ℝ+, we can use a Gaussian k(s) = exp(−s²), or other functions that are 1 at the origin and decay smoothly and symmetrically from the origin until they reach near-zero values at the maximal distance L.
The obtained value T(p) is

T(p) = ( ∫_{−L}^{L} k(s) N(S(p, s)) ds ) / ( ∫_{−L}^{L} k(s) ds ).    (6.14)
The denominator in Equation (6.14) normalizes the weight factors for an arbitrary value of L.
Intuitively, we can think of this process as blurring, or filtering, the noise image along the streamlines with a set of filters k(s) that are aligned with the streamlines. As we shall see in Chapter 9, the filtering operation described by Equation (6.14) can be seen as a convolution of the noise and filter functions N and k. Hence, this process is also known in the visualization field as line integral convolution (LIC) [Cabral and Leedom 93, Rezk-Salama et al. 99].
If we apply Equation (6.14) for all pixels using streamline lengths L of several tens of pixels, we obtain a texture T whose pixel colors exhibit little variation along a streamline, due to the strong blurring, but show strong variation between neighboring streamlines, due to the similar strong variation of the initial noise. Although they can be used for any vector field, texture-based vector visualization methods have been mainly developed in the context of visualizing vector fields describing fluid flows.
Figure 6.25 shows the results of applying the line integral convolution algorithm described above to a simple synthetic vector field.5 The left image shows the noise texture N used. The right image shows the resulting LIC texture T for an integration length L equal to 5% of the domain size. Using smaller values for L creates textures increasingly similar to the input noise. Using larger values for L increases the length of the perceived snake-like texture patterns.
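A minimal CPU sketch of Equation (6.14) on a regular 2D grid is given below. For brevity it uses unit-speed Euler steps, nearest-neighbor sampling, and a box filter instead of the Gaussian weights; it is an illustration of the principle, not the original implementation of [Cabral and Leedom 93], and all names are illustrative.

// Minimal line integral convolution (LIC) sketch: for each pixel, the input
// noise is averaged along a streamline traced upstream and downstream.
#include <cmath>
#include <vector>

struct Field2D {
    int w, h;
    std::vector<float> vx, vy;     // vector components, size w*h
    std::vector<float> noise;      // input noise texture N, size w*h
};

std::vector<float> lic(const Field2D& f, int L /* half streamline length, in steps */) {
    std::vector<float> T(size_t(f.w) * f.h, 0.0f);
    for (int y = 0; y < f.h; ++y)
        for (int x = 0; x < f.w; ++x) {
            float sum = 0, wsum = 0;
            for (int dir = -1; dir <= 1; dir += 2) {            // upstream, downstream
                float px = x + 0.5f, py = y + 0.5f;
                for (int s = 0; s < L; ++s) {
                    int i = int(px), j = int(py);
                    if (i < 0 || j < 0 || i >= f.w || j >= f.h) break;
                    sum  += f.noise[size_t(j) * f.w + i];       // accumulate N along S
                    wsum += 1.0f;                               // box filter weight
                    float u = f.vx[size_t(j) * f.w + i], v = f.vy[size_t(j) * f.w + i];
                    float m = std::sqrt(u*u + v*v);
                    if (m < 1e-6f) break;                       // stagnation point
                    px += dir * u / m;                          // unit-speed Euler step
                    py += dir * v / m;
                }
            }
            T[size_t(y) * f.w + x] = (wsum > 0) ? sum / wsum : f.noise[size_t(y) * f.w + x];
        }
    return T;
}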
In addition to the LIC algorithm, many other algorithms use textures to visualize vector fields. Many of them generate similar textures to the LIC method, which are coherent along various types of flow lines, such as streamlines or streaklines, and show a high noise-like variation between neighboring flow lines. This basic principle set aside, the specific algorithms differ in many respects, such as the type of vector field they can depict (stationary or time-dependent), the dimension of the depicted domain (planar, curved surface embedded in 3D, or volumetric), whether they generate a still or an animated texture, and the actual implementation used to create the texture. We point the interested reader to a survey article on this topic [Laramee et al. 04].
A particularly attractive algorithm for texture-based vector visualization is the Image Based Flow Visualization method, or IBFV [van Wijk 02a]. IBFV and its variants [van Wijk 03, Telea and van Wijk 03] produce not just static, but animated, flow textures in real time, can handle both stationary and instationary fields defined on domains ranging from planar 2D to volumetric ones, and are quite simple to implement.
We next detail the IBFV method that operates on 2D planar domains, which is also the simplest to implement in its class. The principle of the method is sketched in Figure 6.26. To understand IBFV, consider a time-dependent scalar property I : D × ℝ+ → ℝ+ such as the image intensity, defined on a 2D domain D. The value I(x, t) ∈ [0, 1] describes our property at a given point x ∈ D of the flow domain D at a time moment t. The advection in time of the property I in a vector field v : D × ℝ+ → ℝ2 is given by

I(x + v(x, t)Δt, t + Δt) = I(x, t).    (6.15)
This process is sometimes called forward advection, as it states that the property I at a location x + v(x, t)Δt downstream and at a future moment t + Δt is equal to the current property I(x, t) at the current moment t. In contrast, the backward advection expresses the property I at a location x at the current moment t as a function of the property at an upstream location x′ = x − v(x′, t − Δt)Δt at a previous moment t − Δt. The time step Δt discretizes the time and allows us to solve Equation (6.15) iteratively. As stated earlier, we would like to advect a noise texture so that we obtain an image with low contrasts along a pathline and high contrasts across neighboring streamlines. However, if we simply advect an initial image I(x, 0) = N(x), where N : D → [0, 1] is a noise texture like the one shown in Figure 6.26, different points in the flow domain placed on the same pathline will “overwrite” each other’s property values I in different ways, depending on the order in which we solve Equation (6.15) for the different points. Moreover, we would also like to address the question of how to create an animated flow texture.
These goals can be met if we add a so-called injection term to Equation (6.15). Intuitively, this term can be seen as ink or dye that is injected into the flow domain at every point x in space and moment t in time. The combined advection and injection process is described by

I(x + v(x, t)Δt, t + Δt) = (1 − α) I(x, t) + α N(x + v(x, t)Δt, t + Δt).    (6.16)
Here, N(x, t) describes the injected property, which is also a function of space and time. The parameter α ∈ [0, 1] controls the ratio of advection to injection. A value of α = 1 states that there is no advection, so our property I simply equals the injected signal N. A value of α = 0 states that there is no injection, i.e., yields the pure advection Equation (6.15). Setting α to a value between 0 and 1 yields an image that exhibits both local variation (due to the injected noise) and also coherence along pathlines (due to the advection). Good values in practice are α ∈ [0, 0.2].
There remains the question of which noise texture N(x, t) to inject. Let us first consider a time-independent signal N(x). To achieve a high spatial contrast, neighboring pixels should have different colors. We can achieve this by setting N to a random signal consisting of black and white dots, as shown in Figure 6.26. The size d of the dots should be correlated with the velocity magnitude. In practice, using a dot size d ∈ [2, 10] pixels gives good results.
Applying Equation (6.16) for a few steps results in flow textures such as the one shown at the right in Figure 6.26. However, we can do better than producing static flow images. For this, let us take a time-dependent noise texture N′(x, t) that is obtained from our original stationary noise texture N(x) as

N′(x, t) = f(t + N(x)).    (6.17)
Here, f : ℝ+ → [0, 1] is a periodic function with period 1. Intuitively, Equation (6.17) says that the intensity of every pixel x of the time-dependent texture N′(x, t) oscillates in time, controlled by the periodic function f, but all pixels have different (random) initial phases controlled by the static noise N(x). This is illustrated in Figure 6.27 for a one-dimensional time-dependent noise signal N′(x, t) and a sinusoidal function f. Using N′ instead of N in Equation (6.16) produces an animated texture that seems to move with the flow, an effect that is especially suited for visualizing time-dependent vector fields v(x, t). The implementation of both the stationary and time-dependent IBFV methods is described later in this section.
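Before turning to the OpenGL implementation, the following CPU sketch may help fix the ideas: it performs one iteration of Equation (6.16), fetching the advected value by backward advection and blending in the time-dependent noise of Equation (6.17), here with a simple square wave for f. This is only an illustration and is distinct from the OpenGL implementation of Listing 6.2; the names, the nearest-neighbor sampling, and the clamping at the domain border are simplifying assumptions.

// One CPU-only IBFV iteration: backward advection plus noise injection.
#include <algorithm>
#include <cmath>
#include <vector>

struct Flow2D {
    int w, h;
    std::vector<float> vx, vy;     // vector field components, size w*h
};

// I: current image (size w*h); phase: static per-pixel random phases N(x) in [0,1).
void ibfvStep(const Flow2D& flow, std::vector<float>& I,
              const std::vector<float>& phase, float t, float dt, float alpha) {
    std::vector<float> next(I.size());
    for (int y = 0; y < flow.h; ++y)
        for (int x = 0; x < flow.w; ++x) {
            size_t idx = size_t(y) * flow.w + x;
            // Backward advection: fetch the property at the upstream location
            // (nearest-neighbor sampling, clamped at the domain border).
            int i = std::min(std::max(int(x - flow.vx[idx] * dt), 0), flow.w - 1);
            int j = std::min(std::max(int(y - flow.vy[idx] * dt), 0), flow.h - 1);
            float advected = I[size_t(j) * flow.w + i];
            // Time-dependent noise N'(x,t) = f(t + N(x)), with f a square wave.
            float noise = std::fmod(t + phase[idx], 1.0f) < 0.5f ? 0.0f : 1.0f;
            next[idx] = (1.0f - alpha) * advected + alpha * noise;
        }
    I.swap(next);
}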
We have described a method to produce dense flow textures by advecting and injecting noise textures. We shall now sketch a simple and effective way to implement this process using OpenGL. For a detailed discussion, we refer to the original IBFV publication [van Wijk 02a]. All notations here and in the implementation code (see Listing 6.2) refer to the advection-injection Equation (6.16), which is at the core of the IBFV method.
The implementation uses in total NOISE + 1 textures called tex[0], tex[1], ..., tex[NOISE]. The first NOISE textures encode the time sampling of one period of the noise signal N′(x, t). We use here NOISE = 32 samples, which is typically enough to capture the periodic behavior of the function f in Equation (6.17). In the following example, we set f to a simple step function. Other functions can be used, such as an exponential decay, a sawtooth, or a sine wave. The last texture tex[NOISE] is a work buffer into which the image I(x) is constructed. The complete process consists of an initialization step (function init() in Listing 6.2) followed by an endless loop consisting of three steps: advection, noise injection, and construction of the work texture (function run()).
The advection itself (see Equation (6.15)) is implemented using an OpenGL polygon mesh (function advect()). We consider the 2D flow domain D to be covered by a set of polygons {Pi}i=1..N that have the 2D vertex coordinates {xij}j=1..n(i) and {yij}j=1..n(i), where n(i) is the number of vertices of Pi. If D has a rectangular shape, we can use a uniform grid of quadrilaterals Pi with equally spaced vertices, in which case n(i) = 4, ∀i. Generally, we can use an unstructured grid, as discussed in Section 3.5.4. The advection is done by drawing a polygon mesh whose vertices are slightly warped by the vector field v, textured with the work texture. This deforms, or warps, the work texture in the vector field direction and saves the result into the frame buffer.

After advection, noise injection is implemented by cyclically blending one of the textures tex[0], ..., tex[NOISE-1], which encode the time-dependent noise signal N′(x, t), on top of the warped texture (function inject()). This is done by drawing one large quad with corners (0, 0); (1, 1) that covers the complete frame buffer, textured with the selected noise frame. The convex combination controlled by the factor α in Equation (6.16) is implemented by enabling OpenGL’s alpha-blending mechanism (line 28). The α value is encoded in the textures’ alpha channel (line 63) alongside the luminance channel that encodes the noise signal N (line 62).

After noise injection, the frame buffer contains the image I(x, t + Δt) for the next time step, i.e., the left side of Equation (6.16). Since we use this image in the right side of the equation in the next step, we copy it into the work texture tex[NOISE] (line 80). Next, the animation loop repeats.
Finally, let us discuss the various parameters involved. The work texture and frame buffer are both of size ISIZE×ISIZE pixels. The noise textures are typically smaller, taking NSIZE × NSIZE pixels where NSIZE < ISIZE, thereby saving considerable memory. Given this, OpenGL both stretches and replicates the noise texture to cover the entire frame buffer area. The stretch factor NSPOT controls the actual size of the noise spots in the visualization. As explained before, good values are in the range of a few pixels. Similarly, the warping vΔt of the grid vertices should be small enough so that the grid polygons are not too badly distorted. On the other hand, the warping should be large enough so that the advection is visible, i.e., it is at least a few pixels. If we are not interested in visualizing differences in velocity magnitude across the flow domain, the simplest solution to the warping issue is to normalize v. If, however, we allow varying values for |v| on the flow domain, we can clamp vΔt for every warped point separately. Finally, if we are interested in still flow textures similar to the ones produced by the LIC method [Cabral and Leedom 93] rather than animations, we can execute a few tens of iterations of the main loop using a single noise texture (NOISE = 1) and obtain the desired image in the work texture.
Listing 6.2 covers less than two pages or under 100 lines of code and provides an almost fully functional IBFV program. For brevity, we omit the actual OpenGL implementation of the polygonal mesh {Pi} and various bits of data scaling and OpenGL initialization code. The implementation of the IBFV method is described in further detail in [van Wijk 02b].
Figure 6.28 shows two 2D flow datasets visualized with the IBFV method. In the left image, the generated flow texture is encoded in the luminance channel, whereas the hue shows the vector magnitude via a blue-to-red colormap. In the right image, the luminance of the flow texture is directly modulated by the vector magnitude, so bright areas indicate high-velocity regions. As we can see, these static snapshots of the IBFV animation have a quite similar look to the results of the LIC method (see Figure 6.25(b)).
This example also illustrates more of the possibilities of texture-based visualizations. In addition to the gray noise texture itself, three red, green, and blue disc-shaped “ink sources” are advected in the flow field. If we implement the visualization using the IBFV method, the advection of the ink sources is done by simply drawing them on top of every frame after drawing the noise texture (line 77 in Listing 6.2). The position of the ink sources is shown in the image by white circles. The trace left by the advected colored ink is the texture-based equivalent of a dense set of streamlines seeded at the ink sources. However, there are two differences. Streamline tracing from a finite point seed set would produce a set of dense, yet distinct geometric primitives of constant opacity. In contrast, the texture-based ink advection produces a continuous, gradually fading image.
Both methods have their specific strengths and focuses. Streamlines are often used in exact engineering applications where one wants to accurately determine the trajectory of a point starting from a precisely specified seed location. Ink advection is useful in getting insight into how the flow from a given seed area spreads out over a larger domain. Placing differently colored ink sources at the sources of a flow field, i.e., the points of high positive divergence (see Section 6.1), and letting them get advected until the process visually converges will produce an image showing the flow domain decomposed into several spatial components colored differently as a function of their corresponding source. The curves that separate these areas are called separatrices and are an important advanced concept in the study of vector fields.
Texture-based visualizations are not limited to 2D planar domains. Figure 6.29(a) shows a texture-based visualization on a 3D surface. The actual surface is an isosurface of the vector magnitude of the 3D flow dataset presented earlier in this section (see, e.g., Figure 6.19). On this isosurface, we first project the vector field and next use the same type of texture-based vector visualization as in the 2D case. Shading is added to the texture on the surface in order to convey spatial cues about the object geometry. Since this is an isosurface of the velocity magnitude, it would be useless to color-code the vector magnitude as we did in the 2D case (see Figure 6.28). However, in practice, other available scalar attributes can be shown via colors, such as pressure, temperature, vorticity, or divergence.
Finally, we note that texture-based vector field visualizations can also be applied to volumetric datasets. Figure 6.29(b) visualizes a simple 3D helicoidal flow with the 3D equivalent of the IBFV texture-based method discussed previously for 2D images [Telea and van Wijk 03]. Similar to the 2D visualization shown in Figure 6.28(b), several colored ink sources are used to complement the grayscale noise. The 3D IBFV method follows a similar advection-injection principle to the method for 2D flat and 3D curved surfaces, with a few notable differences. First, producing a dense texture-based visualization requires using either 3D textures, which are still less widely supported by graphics cards than their 2D counterparts, or stacks of 2D textures. Second, the advection step, which is directly supported in OpenGL by drawing warped and textured polygons for the 2D cases, has no 3D counterpart. Indeed, there is, at the current moment, no 3D (volumetric) drawing primitive in OpenGL, and this situation will probably persist for a while. Implementing the advection step in 3D can be done by reducing it to a stack of 2D primitive renderings [Telea and van Wijk 03]. Finally, to be able to see through a 3D dense flow texture, the injection step is modified in the 3D case by adding alpha, or transparency, noise to the grayscale noise. The alpha noise plays the role of “erasers” injected in the flow volume, yielding a sparsely filled visualization that lets one see through the flow domain (see Figure 6.29(b)).
To visualize the result of the 3D IBFV method, we have to render the resulting texture volume, which contains both color and alpha (transparency) values. This can be done using a visualization technique called texture-based volume rendering, which is described separately in Section 10.4.
In the previous section, we saw a number of visualization techniques for vector fields, ranging from seed-based methods such as vector glyphs and streamlines to methods that produce dense representations, such as the texture-based techniques. The effectiveness of all these methods depends largely on their ability to convey the desired insight into a given dataset. In many applications, this does not mean visualizing all the data points in the same way. Regions that exhibit important characteristics for an application area, such as vortices, speed extrema, or separation lines between regions of laminar flow, should be visualized in different ways as compared to the less-important regions, in order to help users detect their presence in a dataset. Such regions are also called features of the vector field.
One of the main reasons for this selective visualization of vector fields is the sheer size of the data. As we have seen in the case of 3D fields, dense visualizations can be quite hard to interpret due to occlusion. Designing visualizations that use a simplified version of the vector field is beneficial for large datasets if the simplification keeps (and emphasizes) the features of interest and removes a large amount of uninteresting data. Also, if we know in advance where such features are in a dataset, we can place visualization primitives at those locations, such as vector glyphs or stream objects, instead of doing a dense sampling of the whole domain. Several visualization methods exist which take this approach, as described in the following sections.
Consider the 2D vector field in Figure 6.30(a), displayed using the IBFV technique. This vector field is synthetically generated by combining the effects of two sources (shown in red) and two sinks (shown in blue). Similarly to the application illustrated in Figure 6.28(b), we ask ourselves: Which regions of the 2D flow domain drain into a given sink, and which regions are reached by flow from a given source? We could compute such regions by manually placing ink sources at the locations of the vector field sources, and using IBFV to display the regions reached by these ink sources. Figure 6.30(b) illustrates this. We see here how the purple ink, placed at source 1, and the yellow ink, placed at source 2, flow into both sinks. However, this method does not fully answer our question: First, there are field regions (displayed in white) which are not reached by any ink source. One of the reasons for this is that the IBFV computations are restricted to a finite image; that is, ink cannot exit the computational domain and re-enter it through a different area. Another reason is that regions can exist whose flow ends in a sink but does not originate at any source. The second problem of our proposed solution is that we cannot differentiate between ink flowing from a given source to two or more different sinks. In other words, we can show the regions of influence of sources, but not of sinks.
We can partially solve this problem by manually adding more ink sources over the entire flow domain, taking care to use the same ink color for all points which flow into the same sink and also for all points that emerge from the same source. Figure 6.30(c) shows the result of this process. In contrast to Figure 6.30(b), we can now better see the structure, or topology, of the flow. We distinguish four regions:
purple points flow from source 1 into sink 1;
red points flow from source 1 into sink 2;
yellow points flow from source 2 into sink 1;
green points flow from source 2 into sink 2.
Within each such region, the flow is uniform, in the sense that it has the same source and destination (sink). We also notice a particular flow structure around the point marked with an asterisk: Here, four different flow regions meet, but we have neither a source nor a sink.
However, computing such a decomposition of the flow domain into separate regions by manual ink source placement is, at best, tedious, and is highly error-prone. Moreover, the borders between flow regions are not explicitly computed.
We can obtain a similar, but higher-quality, result by analyzing the mathematical topology of the vector flow, as follows. Consider the vector field defined by a function v : D ⊂ ℝ2 → ℝ2. First, we detect all so-called critical points of the field, i.e., points where the vector field magnitude ||v|| is zero. Assuming that the vector field is continuous, these will include all sources, sinks, and a number of additional point types, which we discuss next. To discriminate between the different types of critical points, we compute the Jacobian matrix of the field v = (vx, vy), defined as

J = ( ∂vx/∂x   ∂vx/∂y )
    ( ∂vy/∂x   ∂vy/∂y ).    (6.18)
J is computed at each point x ∈ D. Next, we evaluate the eigenvalues and eigenvectors of J. For a detailed description of eigenvalues and eigenvectors and the way to compute them, we refer to Section 7.1. It can be shown that, since J is not a symmetric matrix, its eigenvalues λ1 and λ2 are not necessarily real numbers. Let us denote these as λ1 = (Re1, Im1) and λ2 = (Re2, Im2), where Re and Im stand for the real, respectively imaginary, components of the respective complex numbers. It can then be shown that a vector field has the six types of critical points illustrated in Figure 6.31, which can be discriminated from each other based on the signs of the real and imaginary parts of λ1 and λ2 [Helman and Hesselink 89, Helman and Hesselink 91]. Intuitively, negative values of Re describe attraction, positive values of Re describe repulsion, and nonzero values of Im describe rotation.
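For a 2D field, the classification can be implemented directly from the trace and determinant of the 2 × 2 Jacobian, since these determine the eigenvalues. The sketch below illustrates one way to map the sign conditions of Figure 6.31 to the six critical point types; the function name and the tolerance handling are illustrative.

// Classify a 2D critical point from the eigenvalues of its 2x2 Jacobian.
#include <cmath>
#include <string>

// J = ( j11 j12 ; j21 j22 ), estimated at the critical point.
std::string classifyCriticalPoint(float j11, float j12, float j21, float j22,
                                  float eps = 1e-6f) {
    float tr   = j11 + j22;                 // Re(lambda1) + Re(lambda2)
    float det  = j11 * j22 - j12 * j21;
    float disc = tr * tr - 4.0f * det;
    if (disc < 0.0f) {                      // complex eigenvalues: rotation present
        if (std::fabs(tr) < eps) return "center";
        return tr > 0 ? "repelling focus (spiral source)"
                      : "attracting focus (spiral sink)";
    }
    // Real eigenvalues: l1 >= l2.
    float sq = std::sqrt(disc);
    float l1 = 0.5f * (tr + sq), l2 = 0.5f * (tr - sq);
    if (l1 > eps && l2 < -eps)  return "saddle";
    if (l1 > eps && l2 > eps)   return "repelling node (source)";
    if (l1 < -eps && l2 < -eps) return "attracting node (sink)";
    return "degenerate (higher-order) critical point";
}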
After having classified critical points, we now consider a small spatial region surrounding such a critical point c. It can be shown that such a region can be divided into several compact sectors Si of three types, based on the behavior of streamlines which pass through that sector:
Parabolic sectors: Streamlines in Si have one end at c.
Elliptic sectors: Streamlines in Si have both ends at c.
Hyperbolic sectors: Streamlines in Si do not pass through c.
The streamlines which separate such sectors, or in other words constitute the sectors’ borders, are called separatrices. These are precisely the (exact) borders of the colored regions shown in Figure 6.30(c).
The next question is how to actually compute these separatrices. To do this, we first observe that saddle points are a special case among the critical points shown in Figure 6.31: At a saddle point, exactly four separatrices meet. It can also be shown that, at a saddle point, separatrices are tangent to the two eigenvectors of J, and that the outgoing and incoming separatrices are tangent to the eigenvectors having positive, respectively negative, Re components [Helman and Hesselink 89].
The algorithm for computing separatrices from a given 2D field thus can proceed by linking saddle points to other streamline endpoints, as follows:
Compute and classify all critical points following Figure 6.31.
For all saddle points s, start tracing two streamlines in the directions of their eigenvectors having positive Re values.
Stop tracing when the streamline hits the boundary of D or another critical point where ||v|| = 0.
Merge curves that start and end at the same endpoints.
The output of this algorithm is a graph whose nodes are critical points or points where streamlines leave the computational domain, and edges are the respective separatrices. Figure 6.30(d) shows the result for the vector field illustrated earlier.
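In an implementation, step 2 of the algorithm amounts to seeding ordinary streamlines a small distance away from each saddle along its eigenvector directions. The sketch below produces such seeds; it also seeds the two incoming separatrices, to be traced backward along the negative-eigenvalue eigenvector, which is a common variation of the two-streamline formulation given above. All names are illustrative.

// Seed the separatrices of a saddle point: slightly offset seeds along the
// two eigenvector directions, with the integration direction per seed.
#include <vector>

struct P2 { float x, y; };

struct SeparatrixSeed {
    P2 position;      // seed location, offset from the saddle
    bool forward;     // true: integrate downstream; false: upstream
};

std::vector<SeparatrixSeed> seedSeparatrices(P2 saddle,
                                             P2 ePos,   // eigenvector, Re(lambda) > 0
                                             P2 eNeg,   // eigenvector, Re(lambda) < 0
                                             float eps) // small offset distance
{
    return {
        { { saddle.x + eps * ePos.x, saddle.y + eps * ePos.y }, true  },
        { { saddle.x - eps * ePos.x, saddle.y - eps * ePos.y }, true  },
        { { saddle.x + eps * eNeg.x, saddle.y + eps * eNeg.y }, false },
        { { saddle.x - eps * eNeg.x, saddle.y - eps * eNeg.y }, false },
    };
}

Each seed is then handed to a standard streamline tracer (Section 6.5.1) that stops at the domain boundary or at another critical point, as prescribed by step 3 above.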
Robust implementation of the above algorithm, however, requires several precautions, as follows.
Detecting critical points first requires finding the locations where ||v|| = 0. However, the vector field v is usually point-sampled on a discrete grid. As such, we must use interpolation of cell vertex values to find the exact points where the magnitude of v vanishes. To do this, we can use the interpolation mechanisms presented in Chapter 3. The same interpolation mechanisms must be used to estimate the components of the Jacobian matrix J at the detected critical points (see Section 3.7).
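A cheap necessary-condition test is often applied before any such interpolation: a bilinearly interpolated vector can only vanish inside a cell if both of its components change sign over the cell corners. The sketch below illustrates this test; cells that pass it can then be refined by subdivision or by Newton iterations on the interpolant. Names are illustrative.

// Necessary condition for a critical point inside a quad cell with
// vertex-sampled vectors: both vx and vy must change sign over the corners.
#include <algorithm>

struct Vec2 { float x, y; };

bool cellMayContainCriticalPoint(const Vec2 corner[4]) {
    float minX = corner[0].x, maxX = corner[0].x;
    float minY = corner[0].y, maxY = corner[0].y;
    for (int i = 1; i < 4; ++i) {
        minX = std::min(minX, corner[i].x);  maxX = std::max(maxX, corner[i].x);
        minY = std::min(minY, corner[i].y);  maxY = std::max(maxY, corner[i].y);
    }
    // Both components must take values of both signs (or zero) over the cell.
    return minX <= 0 && maxX >= 0 && minY <= 0 && maxY >= 0;
}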
Several critical points are usually excluded from computing separatrices. First, center points are typically eliminated, since they do not contribute to global information on the topology of a vector field, but only describe small-scale local circular (vortex-like) areas. Second, since all our computations are done using interpolated data, the conditions on the real and imaginary components of eigenvalues of the Jacobian J (Figure 6.31) are naturally subject to numerical errors. In practice, these conditions are relaxed to compare Re and Im with finite-size, nonzero, threshold values. Finally, complex vector fields can generate hundreds of critical points. Including all of these into a topological visualization unnecessarily complicates the final result. A straightforward way to simplify such visualizations is to cluster, or merge, critical points which are close to each other within a small spatial distance.
Any sampled vector field will have a domain boundary. This imposes two refinements of the separatrix computation algorithm. First, we need to make sure that streamline tracing stops when hitting such boundaries, as in the standard streamline tracing algorithm (Section 6.5.1). Second, in certain datasets, such as computational fluid dynamics simulations of flow in a vessel, the vector field magnitude will be (close to) zero on boundaries (see example in Figure 6.8 and related figures). These points should not be considered critical points from the perspective of separatrix computation.
Topological visualizations of vector fields are very effective instruments for reducing the size and complexity of vector datasets to their “essence” expressed in terms of uniform flow regions bounded by separatrices. However, as we have seen, computing such simplifications is a delicate process involving several challenges related to numerical estimations of derivatives of sampled data. For 3D datasets, separatrices become collections of curves and surfaces which partition the flow domain, and which require even more involved computation methods. For non-trivial vector fields, the question whether such abstract representations, and the associated mathematical background required to understand and use them, are more effective than classical glyph-based, streamline, or image-based visualizations is still open. For more information on topological visualizations of vector fields, we refer the reader to several survey papers in the field [Laramee et al. 07, Pobitzer et al. 11].
Feature detection methods reduce the vector field to a set of features of interest, described as a set of parameters, such as feature type, position, extent, and strength. Features can be defined either analytically by a feature energy-like function, in which case the detection method tries to find parameter combinations that maximize this function, or as a set of examples or patterns, in which case the detection method searches for best matches in the dataset of these examples. One of the features of interest in flow visualization is vortices. Although for most of us what a vortex looks like is quite intuitive, robustly quantifying and detecting one is surprisingly difficult in practice. Many methods for vortex detection have been designed over the last 15 years [Banks and Singer 95, Jankun-Kelly et al. 06]. For an overview of this topic, we point the reader to existing review papers [Post et al. 02, Post et al. 03b].
Although feature detection methods can give good results, they do not come without problems. First, it is hard to define precise numerical criteria to detect such features. For example, it is difficult to precisely quantify the presence and/or extent of a vortex. Second, features appear at different spatial scales in vector fields. In time-dependent fields, there exist also different temporal scales. Finally, there is often no clear spatial separation between a feature and a non-feature area or between several features of the same or different types.
The third class of methods is formed by field decomposition methods. Such methods partition the vector dataset into regions of strong intraregion (internal) similarity and weak interregion similarity. At the core of such methods is a similarity metric f that defines how similar two regions are. Different similarity metrics lead to different decompositions that model different end goals or questions. A frequently used similarity metric compares the direction and magnitude of the vector data. Two regions are considered highly similar if the vector field has the same orientation and magnitude on both regions, and dissimilar otherwise. After choosing the metric, decomposition methods usually perform a top-down partitioning or bottom-up agglomerative clustering of the dataset. In the top-down case, the dataset is recursively split in regions that have the least similarity. In the bottom-up case, we start with each sample point (or equivalent small area) being a different region and iteratively cluster the most-similar regions. Both methods produce a multiscale representation of the vector dataset that can be seen as a tree with the smallest regions, or data samples, as leaves and the whole dataset as the root region. A desirable constraint for decomposition methods is to produce spatially compact regions, since these are easier to visualize and interpret, and also map well to the “natural” idea of what a vector field feature is. For top-down methods, this constraint can be added in the splitting step. For bottom-up methods, the constraint can be added either in the similarity metric (non-neighboring regions have minimal similarity) or the merging logic (non-neighboring regions are never merged).
An example of a top-down decomposition method is the technique by Heckel et al. [Heckel et al. 99]. Initially, all data points bearing vectors are placed in a single cluster. Next, this cluster is repeatedly subdivided using a weighted best-fit plane so that the variance of an error metric over its two children is minimized. For each cluster, a representative vector is computed as the average of the vector samples in that cluster. The error of a given split is defined as the difference between streamlines traced in the original field and the simplified one given by the representative vectors. This method produces a vector field simplification that directly encodes elements of the visualization itself: a “good” simplification is that which produces streamlines close to the original field.
Many bottom-up clustering methods exist, and many are based on work on discrete and continuous data clustering performed outside of the visualization domain [Jain and Murty 99]. A simple bottom-up strategy is to use a greedy approach. We define a region as the triplet R = (P(R), v(R), O(R)). Here, P(R) = {pi}i is the set of sample points, or support, of the region, which is compact from the perspective of the point neighbor relationship. The simplest way we can represent the vector field over P(R) is by a single vector v(R) located at an origin O(R) inside the area covered by P(R). We begin the clustering by defining, for every point pj of the dataset, a region Rj = (pj, v(pj), pj) containing the point and its vector attribute, with the origin at the point itself. The complete region set ℛ = {Ri} is next simplified iteratively by clustering, at every step, the two most-similar regions Ra ∈ ℛ, Rb ∈ ℛ to form a new region Rab. This new region contains the union of all points in Ra and Rb and a vector v(Rab) that should best reflect the merging of Ra and Rb. A simple way to compute v(Rab) is as the average of v(Ra) and v(Rb) weighted by the number of points |P(Ra)| and |P(Rb)| in the merged regions. The origin O(Rab) of v(Rab) can be computed similarly, by weighted averaging of O(Ra) and O(Rb). This technique is also known as bottom-up agglomerative clustering with average linkage, since a cluster is represented by a single average vector, which is also used when comparing it with other clusters. Finally, we set Ra and Rb as children of Rab, so that a binary region tree, or dendrogram, gets constructed during the clustering. The complete algorithm is described in Listing 6.3.6
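The merge step of this greedy clustering is summarized by the small sketch below, which computes the representative vector and origin of a merged region by point-count-weighted averaging of its two children. This is only an illustration of the averaging rule described above and does not reproduce Listing 6.3; names are illustrative, and the dendrogram bookkeeping is omitted.

// Merge two regions: weighted averaging of representative vectors and origins.
struct Vector3 { float x, y, z; };

struct Region {
    int numPoints;        // |P(R)|
    Vector3 v;            // representative vector v(R)
    Vector3 origin;       // origin O(R)
};

Region mergeRegions(const Region& a, const Region& b) {
    float wa = float(a.numPoints), wb = float(b.numPoints), w = wa + wb;
    Region r;
    r.numPoints = a.numPoints + b.numPoints;
    r.v      = { (wa*a.v.x + wb*b.v.x) / w,  (wa*a.v.y + wb*b.v.y) / w,
                 (wa*a.v.z + wb*b.v.z) / w };
    r.origin = { (wa*a.origin.x + wb*b.origin.x) / w,
                 (wa*a.origin.y + wb*b.origin.y) / w,
                 (wa*a.origin.z + wb*b.origin.z) / w };
    return r;   // in the full algorithm, a and b become children of r (dendrogram)
}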
From the initial ℛ′, the clustering creates region sets ℛ1,..., ℛF until reaching a final region set ℛF that contains a user-specified number N of regions.
These regions are the roots of several region trees that encode the clustering, as described previously. We can use these trees to obtain several types of simplified representations of our vector field. A simplified representation, or s-rep, is a region set ℛ = {R1,...,Rk} of k disjoint regions where each Ri is a node in a different region tree described earlier and whose union covers the complete dataset domain.
Having such an s-rep, a visualization of the vector field can be produced by displaying the representative vectors for all regions, streamlines seeded at the region origins and clipped by the region boundaries, or the regions themselves colored in different colors (the last option is effective mainly for 2D domains). By controlling N, one can satisfy requests such as “show a vector field with N curved arrows.” Figure 6.32 shows such visualizations for 2D and 3D vector fields. The similarity metric used favors vectors with the same direction and magnitude, which explains the elongated shapes of the regions shown in Figure 6.32(a). In Figure 6.32(b), arrow-capped stream tubes are drawn for every region and an additional horizontal slice plane, textured with a spot noise visualization of the vector field, is used to provide an extra visual cue.
Several s-reps can be created from the region trees. The easiest is to use any of the region sets ℛi constructed during the clustering. However, choosing which level i to look at is not very intuitive, as we do not know how many regions ℛi has. Other options for constructing the s-rep are to take regions at a user-given depth from the root in the region trees or an s-rep containing a user-specified number N of regions. Since the decomposition is saved in the region trees, dynamically changing the decomposition level-of-detail and associated visualization can be done in real time, which encourages interactive exploration.
From a reconstruction perspective, the bottom-up clustering method using one representative vector per region is roughly equivalent to a piecewise constant interpolation. Every region support P(Ri) can be thought of as the support of a vector basis function ϕi equal to the region’s representative vector v(Ri) over P(Ri) and zero outside P(Ri). The hierarchy of simplified representations ℛi, 1 ≤ i ≤ F, can then be thought of as generating a hierarchy of piecewise constant bases, one basis per simplification level.
The idea of producing a hierarchy of bases that approximates a given vector field at several levels of detail can be taken further by using more sophisticated clustering techniques. One such technique employs a state-of-the-art mathematical tool called the algebraic multigrid (AMG) [Griebel et al. 04]. In brief, this method works as follows. Given a vector dataset that has an underlying grid with n sample points p1,...,pn, we define a so-called coupling matrix M = {mij}1≤i,j≤n. Given two points pi and pj, the entry mij essentially encodes the similarity metric f between the vector values v(pi) and v(pj) as follows:

mij = f(v(pi), v(pj))   if pi and pj are grid neighbors, i ≠ j,
mij = 0                 if pi and pj are not neighbors,
mii = 1 − Σj≠i mij.    (6.19)
Intuitively, the entries mij of the symmetric square matrix M can be thought of as couplings of the grid points in the vector field. Neighboring points that have similar vectors are strongly coupled; neighbors that have dissimilar vectors are weakly coupled. Points that are not neighbors are not coupled at all. The diagonal entries mii describe the so-called self-coupling, which is set so that the sum of couplings of a point with all other points is 1.
For the more formally and mathematically oriented readers, the preceding matrix M describes the finite element discretization of an anisotropic diffusion operator using piecewise linear basis functions. Let us detail this. Given some domain D ⊂ ℝn, the equation

∂u/∂t = div(A ∇u)    (6.20)
describes the diffusion in time of a scalar function u : D × [0, ∞) → ℝ starting from an initial value u(t = 0). In Equation (6.20), div denotes the divergence operator defined by Equation (6.1). The speed of diffusion, or diffusivity, can have different values in different directions in the domain D, a property which is called anisotropy, and which is described by the tensor A. The discretization of the operator div(A∇u) produces our matrix M. In finite element terminology, M is called a stiffness matrix, a term that suggests the metaphor of coupling of the grid points.
The matrix M encodes the vector field structure on the finest level, given by the dataset grid. Our aim is to simplify this structure in order to visualize the field at various levels of detail. This can be done using the AMG technique, as detailed in [Griebel et al. 04]. Given a matrix M, AMG constructs a sequence of matrices M0 = M, M1,...,Mk. The size (number of rows or columns) of each matrix Mi is a fraction s < 1 of the size of the previous matrix Mi−1, starting with the size S of the matrix M0 and ending with the final matrix Mk, which has size 1, i.e., is a scalar value. Hence, the sequence has roughly k = log S/log(1/s) levels. This reduction in size is done by eliminating matrix entries involved in weak couplings and merging neighboring entries involved in similar strong couplings. This process is similar to the bottom-up agglomerative clustering described in Listing 6.3. However, actual implementations of the AMG technique are relatively complex, as they must produce high-quality clusterings of huge matrices with tens of millions of entries in minutes on a normal PC. Designing an effective and efficient AMG algorithm is difficult, both from the mathematical and the implementation point of view, and this topic is treated extensively by a separate field of research [Trottenberg et al. 01, Griebel and Schweitzer 06]. Fortunately, from our vector field simplification perspective, we can use an existing AMG implementation as a black box.
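Used as a black box, the coarsening loop itself is simple; the sketch below assumes a hypothetical amgCoarsen routine (e.g., a thin wrapper around an existing AMG library) that produces the next, smaller matrix from the current one:

#include <functional>
#include <map>
#include <vector>

using SparseMatrix = std::vector<std::map<int, double>>;   // one row per point/cluster

// Build the hierarchy M0 = M, M1, ..., Mk by repeated coarsening, stopping when
// the matrix has shrunk to a single entry (a scalar value). The coarsening step
// itself (dropping weak couplings, merging strong ones) is treated as a black box.
std::vector<SparseMatrix> buildHierarchy(const SparseMatrix& M,
    const std::function<SparseMatrix(const SparseMatrix&)>& amgCoarsen)
{
    std::vector<SparseMatrix> levels{M};
    while (levels.back().size() > 1)
        levels.push_back(amgCoarsen(levels.back()));
    return levels;
}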
Following the finite element paradigm, for every matrix Mi produced by the AMG we can construct a basis Φi containing as many basis functions as the size of Mi, i.e., its number of rows.
We can use these basis functions to visualize the vector field in several ways, as described next. Given a simplification level 1 ≤ l ≤ k, we can use the basis
Since
A second way to use the AMG simplification results for our purposes is to visualize the basis functions themselves.
Consider again the original fine-scale IBFV noise N(x, y, t) that was sampled on an n × n image. Every pixel of this image can be seen as being represented by a constant basis function ψi, which is 1 over that pixel and zero elsewhere. Hence, the noise term from Equation (6.16) can be rewritten as

N(x, y, t) = Σi f(t + Ni) ψi(x, y).
Here, f(t) is exactly the same periodic function as in Equation (6.17), whereas Ni is the noise phase of the ith region corresponding to ψi, which is equivalent to the per-pixel noise phase N(x) in Equation (6.17). An interesting observation is that the basis functions ψi are similar to the AMG basis functions ϕij introduced earlier, the main difference being their scale: the ψi live at the finest (pixel) level. Replacing them by the basis functions of a coarser AMG level, each with its own noise phase Ni, yields a multiscale noise

Nj(x, y, t) = Σi f(t + Ni) ϕij(x, y). (6.23)
Here, we use the basis functions of any level 1 ≤ j ≤ k of the AMG decomposition. As we see in Figure 6.33(a), these functions have shapes that follow the vector field, so advecting them in the field, as done by the IBFV method, considerably reduces the appearance of noise-like visual artifacts. In terms of implementation, Equation (6.23) can be efficiently coded in terms of OpenGL imaging operations: the basis functions of the chosen level can be precomputed and stored as images, so that evaluating Equation (6.23) amounts to summing these images weighted by the time-dependent intensities f(t + Ni).
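For illustration, a CPU-side sketch of this evaluation is given below (hypothetical names; an OpenGL version would accumulate the same weighted sum using texture blending):

#include <vector>

// Evaluate the multiscale noise N^j(x, y, t) of Equation (6.23) on the CPU.
// 'basis[i]' holds the ith basis function of the chosen AMG level, sampled on the
// same n x n pixel grid as the IBFV noise; 'phase[i]' is the noise phase Ni of the
// corresponding region; 'f' is the periodic intensity profile of Equation (6.17).
std::vector<float> multiscaleNoise(const std::vector<std::vector<float>>& basis,
                                   const std::vector<float>& phase,
                                   float t, int n,
                                   float (*f)(float))
{
    std::vector<float> noise(n * n, 0.0f);
    for (size_t i = 0; i < basis.size(); ++i)        // sum over regions (basis functions)
    {
        float intensity = f(t + phase[i]);           // time-shifted periodic intensity
        for (int p = 0; p < n * n; ++p)
            noise[p] += intensity * basis[i][p];     // weighted sum of basis images
    }
    return noise;
}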
By replacing the initial fine-scale noise N(x, y, t) with any of the multiscale noise signals Nj(x, y, t) defined by Equation (6.23), we obtain a multiscale image-based flow visualization, or MIBFV, method. MIBFV keeps all the strong points of its predecessor (dense field representation, real-time animation, intuitive representation, simple implementation) and adds a spatial multiscale aspect that emphasizes the vector field features at a user-chosen scale. Figure 6.36 shows the difference between the fine-scale IBFV (Figure 6.36(a)) and MIBFV at three coarse scales (Figures 6.36(b–d)). MIBFV can also be used to combine several scales in a single visualization. The coarse-scale MIBFV images show less detail but exhibit a higher contrast than the IBFV visualization, which helps users discern the global, coarse-scale features of the depicted vector field. Figure 6.36(c) shows an MIBFV coarse-scale context visualization in the background, blended with four IBFV fine-scale detail visualizations centered on, and covering the extents of, the four main vortices of the considered flow field. The centers of the IBFV visualizations are marked by red dots, which indicate points of interest chosen by the user. This image is a typical example of the use of focus-and-context techniques in data visualization: high detail is shown over a focus region, typically specified by the user, surrounded by a context region showing a lower amount of detail.
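One plausible way to realize such a blend (an illustrative assumption, not necessarily the exact rule used to produce the figure) is to weight the fine-scale image by a radial falloff around each user-chosen point of interest:

#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// Focus-and-context blend of a fine-scale (IBFV) and a coarse-scale (MIBFV)
// image: near the focus points the fine image dominates, away from them the
// coarse context shows through.
void blendFocusContext(const std::vector<float>& fine,    // n*n fine-scale image
                       const std::vector<float>& coarse,  // n*n coarse-scale image
                       const std::vector<std::pair<float,float>>& focus, // focus points (pixels)
                       float radius, int n, std::vector<float>& out)
{
    out.resize(n * n);
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x)
        {
            float w = 0;                                   // fine-scale weight in [0,1]
            for (auto [fx, fy] : focus)
            {
                float d = std::hypot(x - fx, y - fy);      // distance to this focus point
                w = std::max(w, std::max(0.0f, 1.0f - d / radius));
            }
            int p = y * n + x;
            out[p] = w * fine[p] + (1 - w) * coarse[p];
        }
}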
MIBFV is less suited than IBFV to visualizing time-dependent vector fields. If the vector field changes, we must re-run the AMG decomposition and the noise texture computation (see Equation (6.23)) for every time step. Even a very efficient AMG implementation still needs seconds to tens of seconds to produce the multiscale basis functions, so MIBFV cannot provide interactive frame rates on time-dependent vector fields.
Using the AMG technique to produce simplified representations and visualizations of vector fields is an advanced subject that requires an implementation effort well beyond that of computing streamlines or vector glyphs. However, this topic provides good insight into the recent possibilities of vector field visualization, and illustrates once again the mix of techniques, such as mathematical analysis, imaging, and graphics, that is needed to tackle the challenge of gaining insight into complex, time-dependent vector datasets.
The use of the AMG method for decomposing vector fields for visualization purposes is a typical example of a number of recent techniques that can be used to simplify the massive amount of data present in large vector datasets in order to create visualizations that emphasize the most salient elements of a given vector field. Several other techniques take a similar path in clustering, or merging, similar sample points in a vector field in order to reduce the data complexity and simplify its visual interpretation. A set of techniques closely related to the AMG method presented here employs diffusion-based clustering of vector fields [Preusser and Rumpf 99, Bürkle et al. 01]. These methods solve the diffusion problem encoded by Equation (6.20) starting with an initial noise scalar value u(t = 0) similar to the advected noise N(x, t) of the IBFV method. After a certain time t, small-scale noise patterns get clustered in the direction of the vector field, while still exhibiting high noise-like contrast in the orthogonal direction. These methods create vector field visualizations that are very similar to the MIBFV method (Figure 6.36(b–d)).
Simplified visualizations, such as those presented in Section 6.7, can reduce the amount and complexity of the information displayed when visualizing a vector field. As shown, such techniques are especially valuable for 3D datasets. However, as already noted, they can also lead to abstract images that are hard for end users to understand. Simpler methods, such as the stream objects described in Section 6.5, generate images that are easier to recognize, but can suffer from clutter and overdraw for large datasets.
Illustrative vector field visualizations strike a balance between the two directions outlined above. On the one hand, they use simple, well-understood visual metaphors for depicting vector fields. On the other hand, they use various rendering effects to reduce the perceived complexity of the final image, thereby making it easier to grasp.
One such method is depth-dependent halos [Everts et al. 09]. The key idea of this method is simple (see Figure 6.37): Given a set of streamlines, represented as curves, the method creates polygon strips of small constant width Δ, centered around the individual streamlines. These strips are similar to stream ribbons (Section 6.5.4). However, in contrast to ribbons, they are oriented parallel to the viewing plane, using a technique known as billboarding. Also, their depth is modulated linearly, from the depth of the visualized streamline along the strip's centerline to a slightly larger depth (farther from the viewer) along its borders. The effect is similar to constructing a rooftop-like shape that follows the streamline and faces the viewer. Additionally, the strips are textured with a one-dimensional monochrome texture that assigns a fixed base color (black) to a narrow band δ around the center (the streamline location) and white to the rest (see Figure 6.37, bottom). When rendering these strips with standard depth-buffering enabled, the overall effect is that strips that are closer to each other than δ get visually merged and appear as a single compact shape rendered in the base color. Strips that are farther apart than δ in terms of depth, however, will show a distinct "halo," as strips closer to the viewer visibly obscure farther strips and get delineated by a white border.
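For concreteness, a minimal geometric sketch (hypothetical types and names, not the authors' reference implementation) of building one such strip follows: every streamline point is expanded into a left border, center, and right border vertex, with the borders offset by ±Δ/2 across the line in the view plane and pushed slightly away from the viewer; the texture coordinate u parameterizes the 1D black-and-white texture across the strip.

#include <array>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 add(Vec3 a, Vec3 b)   { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
static Vec3 mul(Vec3 a, float s)  { return {a.x*s, a.y*s, a.z*s}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; }
static Vec3 normalize(Vec3 a)     { float l = std::sqrt(a.x*a.x+a.y*a.y+a.z*a.z); return mul(a, 1.0f/l); }

struct StripVertex { Vec3 pos; float u; };   // u in [0,1]: 1D texture coordinate across the strip

// Build a view-aligned (billboarded) strip of width 'delta' around one streamline
// (assumed to have at least two points). 'viewDir' points from the scene toward the
// viewer; the borders are displaced by 'depthOffset' away from the viewer, producing
// the rooftop profile that generates the halos under depth testing. Consecutive
// cross sections can be triangulated into two quads each for rendering.
std::vector<std::array<StripVertex,3>> buildHaloStrip(const std::vector<Vec3>& line,
                                                      Vec3 viewDir, float delta,
                                                      float depthOffset)
{
    std::vector<std::array<StripVertex,3>> sections;
    for (size_t i = 0; i < line.size(); ++i)
    {
        size_t j = (i + 1 < line.size()) ? i + 1 : i;
        size_t k = (i > 0) ? i - 1 : i;
        Vec3 tangent = normalize(sub(line[j], line[k]));    // local streamline direction
        Vec3 side    = normalize(cross(tangent, viewDir));  // across the line, in the view plane
        Vec3 back    = mul(viewDir, -depthOffset);          // push borders away from the viewer
        Vec3 offL    = add(mul(side, -0.5f * delta), back);
        Vec3 offR    = add(mul(side,  0.5f * delta), back);
        std::array<StripVertex,3> cs = { StripVertex{add(line[i], offL), 0.0f},
                                         StripVertex{line[i],            0.5f},
                                         StripVertex{add(line[i], offR), 1.0f} };
        sections.push_back(cs);
    }
    return sections;
}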
Figure 6.37 (bottom) illustrates the overall effect: Here, seven strips are rendered (A..G). The black rooftop shapes correspond to the base-color areas of their textures, and the gray lines show the white texture areas. The strips A..C are close to each other both in screen space and in depth. As such, they appear on the screen as a single compact black fragment (F1 in the figure). The strips D..G are also close to each other in both screen and depth space, so they too appear as a compact fragment (F2 in the figure). However, the strip group A..C, seen as a whole, lies in front of the strip group D..G, so it creates a white halo around itself that visually separates it from the latter group. Changing the viewpoint through standard user interaction considerably strengthens this separation effect, so that one effectively sees different layers of streamlines located at different depths.
Figure 6.37 (top) illustrates depth-dependent halos for two 3D vector field datasets. The left dataset shows the same vector field used in Figure 6.20. The right dataset shows a vector field describing the air flow in a room due to natural convection. If we compare these images with, for instance, Figure 6.20, we notice that depth-dependent halos can show a much larger number of streamlines, with limited clutter and good depth perception, even though the rendering is monochrome. Also, we see in both images in Figure 6.37 how streamlines that are close to each other get locally merged into thick black "bundles," thereby creating an effective image-space simplification and clutter reduction of the rendering. As this method uses only two colors, it can be used, in contrast to most methods described so far in this chapter, to produce high-quality illustrations for monochrome print targets. A second application of depth-dependent halos, for the visualization of tensor fields, is discussed in Section 7.7.
In this chapter, we have presented a number of visualization methods for vector fields. Given the large variety of techniques, but also the differences in focus and goals of the many application domains where vector field visualizations are used, it is hard to provide a simple and uniform classification of these visualization methods. In terms of both visual and implementation complexity, these methods range from simple visual representations supported by a straightforward implementation, such as vector glyphs, up to multiscale textures animated in real time, supported by complex implementations that combine advanced mathematics and graphics, such as the MIBFV method. In recent years, the increase in computing and graphics-processing power has stimulated the creation of whole new families of vector field visualization techniques that exploit animation and dense visual representations, such as textures, as opposed to the "classical" vector field visualizations that use sparse geometric primitives, such as glyphs, streamlines, and stream surfaces.
Another classification of vector field visualization methods is based on the dimensionality of the data domain. Two-dimensional surfaces, whether planar or curved, permit a straightforward mapping to the 2D graphics viewport, which simplifies the visualization problem. Three-dimensional volumetric vector fields pose, in contrast, a much more challenging problem, due to the inherent occlusion of the visualization primitives, especially in the case of dense visualizations. When one adds the time dimension, the problem becomes even more challenging. Animation is an intuitive means of representing this time-dependent aspect.
Being able to visually follow complex 3D flow animations and discern all events of interest that take place in such processes is a difficult problem. Multiscale methods for simplified visual representations of vector fields, which have attracted increasing interest in recent years, are an effective answer to the problem of data size and complexity. However, the challenge of creating insightful visualizations of three-dimensional, time-dependent vector fields describing complex phenomena is far from exhausted and remains an active area of research.