As explained in Section 3.6.4, tensor data encode some spatial property that varies as a function of position and direction, such as the curvature of a three-dimensional surface at a given point and direction. Most visualization applications deal with rank 2 symmetric tensors, so in this chapter we shall treat only the visualization of such tensors. When represented in a global coordinate system, such tensors are 3 × 3 matrices. Every point in a tensor dataset carries such a matrix as its attribute data.
Tensor datasets are common in different application domains. Properties of 3D surfaces, such as curvature, can be described by curvature tensors. Material properties in mechanical engineering, such as stress and strain in 3D volumes, are described by stress tensors [Reddy 93]. Diffusion of water in tissues can take place either isotropically, that is, with equal speed in every direction, or anisotropically, that is, with different speeds in different directions. For example, in human brain tissue, diffusion is stronger in the direction of the neural fibers and weaker across the fibers. These fibers, consisting of bundles of axons, are also known as white matter, given the characteristic color of the myelin layer covering them. At a given point in the tissue volume, diffusion can be described by a 3 × 3 diffusion tensor matrix. As diffusion is stronger in the direction of the fibers, by measuring the diffusion tensor and visualizing, for example, the direction of strongest diffusion, we can get insight into the complex structure of neural fibers in the human brain. The measurement of the diffusion of water in living tissues is done by a set of techniques known as diffusion tensor magnetic resonance imaging (DT-MRI). The overall process that constructs visualizations of the anatomical structures of interest starting from the measured diffusion data is known as diffusion tensor imaging (DTI) and is an active area of research in scientific visualization and medical imaging. DTI techniques have been used in the diagnosis and analysis of various types of brain diseases, and in the study of the connection between the brain structure and functions (functional brain anatomy).
As we saw in the previous chapter, visualizing 3D vector fields is a difficult problem, since we have to map three independent values to a graphical representation for every data point. In the case of a tensor field, the problem only becomes more difficult. Now we have to visualize a complete 3 × 3 matrix for every data point. We could try to visualize the entries of these matrices as separate scalar fields using scalar visualization methods such as isosurfaces or color-coded slice planes. However, this would not help us much in understanding the way the data encoded by our tensor actually varies as a function of direction. Luckily, tensor data has an intrinsic structure that we can exploit to produce more useful visualizations. Computing this structure is done by a technique called principal component analysis, as explained in the next section.
The structure of this chapter is as follows. We begin in Section 7.1 with a short overview of principal component analysis. This technique extracts from a tensor matrix information that can be used directly in its visualization, and it forms a fundamental ingredient of many tensor data processing and visualization algorithms. In Section 7.4, we show how the results of the principal component analysis can be visualized using the simple color-mapping techniques introduced in Chapter 5. Next, we show in Section 7.5 how the same data can be visualized using tensor glyphs, following the vector visualization techniques introduced in Chapter 6. A further elaboration of the similarities between tensors and vectors is given in Sections 7.6 and 7.8, which introduce streamline-like visualization techniques for tensor fields. Finally, Section 7.9 concludes this chapter.
Let us consider again, for illustration purposes, the curvature tensor of a 3D surface, which we introduced in Section 3.6.4. Consider a local coordinate system xyz centered at a point x0 on our surface, where the z-axis coincides with n, the surface normal at x0, and the x- and y-axes are tangent to the surface at x0 (see Figure 3.15). Close to x0, our surface can be locally described as some function z = f(x, y), with f(x0) = 0. We have shown that we can compute the normal curvature at x0 in some direction s in the tangent plane as the second derivative ∂²f/∂s² of f using the 2 × 2 Hessian matrix of partial derivatives of f (see Equation (3.28)). For smooth surfaces, the normal curvature varies smoothly as the direction vector s rotates around the current point in the tangent plane. In many applications, we are actually interested only in the extremal (minimal and maximal) values of the curvature as a function of the direction s. Since these directions depend only on the surface shape at a given point, they are invariant to the choice of the local coordinate system.
How can we compute these directions? Since s is a unit direction vector, we can write it as sᵀ = (cos α, sin α), where α is the angle between s and the x-axis of our local coordinate system, and the superscript T denotes a transposed column vector. Plugging this into Equation (3.28), we get
where hij are the entries of the symmetric Hessian matrix H. This expression is maximal when its derivative with respect to α is zero. This means
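Spelled out for the symmetric 2 × 2 Hessian H = (hij), with h12 = h21 and sᵀ = (cos α, sin α), the normal curvature along s and its extremality condition read as follows (a worked sketch of the content of Equations (7.1) and (7.2)):

```latex
\frac{\partial^2 f}{\partial s^2} \;=\; \mathbf{s}^T H\, \mathbf{s}
   \;=\; h_{11}\cos^2\alpha \;+\; 2\,h_{12}\sin\alpha\cos\alpha \;+\; h_{22}\sin^2\alpha ,
\qquad
\frac{d}{d\alpha}\!\left(\mathbf{s}^T H\, \mathbf{s}\right)
   \;=\; (h_{22}-h_{11})\sin 2\alpha \;+\; 2\,h_{12}\cos 2\alpha \;=\; 0 .
```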
It is easy to show that Equation (7.2) is equivalent to the system of equations
where λ is any real value. And Equation (7.3) can be rewritten as
Summarizing, the directions s in the tangent plane for which the normal curvature has extremal values are the solutions of Equation (7.4). For 2 × 2 matrices, we can solve Equation (7.4) analytically, obtaining two solutions λ1 and s1 and λ2 and s2, respectively. For this, we rewrite Hs = λs as (H − λI)s = 0 where I is the 2 × 2 identity matrix. From linear algebra, we know this is equivalent to the determinant of H − λI being equal to zero, as we are looking for nontrivial solutions s ≠ 0. Hence, we get
For a 2 × 2 matrix, Equation (7.5) can be solved analytically and yields our two solutions λ1 and λ2, as follows. For any 2 × 2 matrix
the determinant is det(A) = a11a22 − a12a21. Using this determinant definition, Equation (7.5) becomes
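Applied to H − λI with the symmetry h12 = h21, this expansion gives (a worked sketch of the quadratic referred to as Equation (7.7)):

```latex
\det(H - \lambda I)
  \;=\; (h_{11}-\lambda)(h_{22}-\lambda) - h_{12}^2
  \;=\; \lambda^2 - (h_{11}+h_{22})\,\lambda + h_{11}h_{22} - h_{12}^2
  \;=\; 0 .
```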
Equation (7.7) is nothing but a second-order equation in λ, whose solutions are our eigenvalues λ1 and λ2. Plugging these into Equation (7.4) immediately yields the corresponding s1 and s2.
Figure 7.1 shows the principal directions of the curvature tensor for a 3D surface. The surface (shown in green) has minimal curvature in the direction s1 (shown in yellow) and maximal curvature in the direction s2 (shown in red). A number of other directions in the tangent plane orthogonal to the surface normal n at the considered point are shown in black. Along these directions, the surface curvature takes values between the minimal and maximal ones.
Figure 7.1. Principal directions of curvature for a surface.
The solutions si of Equation (7.4) are called the principal directions, or eigenvectors, of the tensor H. The quantity encoded in the tensor has extremal values in these directions, equal to λi. These values are also called eigenvalues of the tensor. Computing the eigenvalues and eigenvectors of a tensor by this technique is called principal component analysis. One can show that, for an n×n symmetric matrix, the principal directions are perpendicular to each other. Hence, these directions form a local coordinate system whose axes are directions in which the quantity encoded by the tensor, such as curvature in our example, reaches extremal values.
This reasoning applies identically to a 3 × 3 tensor. For example, in the case of a 3D surface given by an implicit function f(x, y, z) = 0 in global coordinates, we have a 3 × 3 Hessian matrix of partial derivatives, as explained in Section 3.6.4 (see Equation (3.30)). This matrix has three eigenvalues and three eigenvectors, which we compute by solving Equation (7.4) as follows. Given a 3 × 3 matrix H = (hij), it can be shown that the determinant det(H − λI) is given by
where the quantities J1, J2, and J3 are given by
Equation (7.8) is a third-order equation in λ whose solutions are the three eigenvalues λ1, λ2, and λ3 of the tensor H. These can be computed using analytic formulas for solving Equation (7.8). A better method to use in practice is the Jacobi iteration method, which solves Equation (7.5) numerically for arbitrary-size n × n real symmetric matrices [Press et al. 02].
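In practice, any robust symmetric eigensolver can be used for this step. A minimal sketch in Python/NumPy (not the book's code; np.linalg.eigh uses a different symmetric solver than the Jacobi iteration cited above, but returns the same eigenpairs):

```python
import numpy as np

def principal_components(H):
    """Return eigenvalues (decreasing) and eigenvectors (as columns) of a
    symmetric 3x3 tensor H: lambda_1 >= lambda_2 >= lambda_3 with the
    corresponding major, medium, and minor eigenvectors e1, e2, e3."""
    H = 0.5 * (H + H.T)            # enforce symmetry against measurement noise
    lam, e = np.linalg.eigh(H)     # eigenvalues in ascending order
    order = np.argsort(lam)[::-1]  # reorder to decreasing
    return lam[order], e[:, order]

# Example: a tensor with a clear major direction along x.
H = np.array([[3.0, 0.2, 0.1],
              [0.2, 1.0, 0.0],
              [0.1, 0.0, 0.5]])
lam, e = principal_components(H)
print(lam)       # lambda_1, lambda_2, lambda_3
print(e[:, 0])   # major eigenvector e1
```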
If we order the eigenvalues in decreasing order λ1 > λ2 > λ3, the corresponding eigenvectors e1, e2, and e3, also called the major, medium, and minor eigenvectors, have geometric meaning: In the case of a curvature tensor, e1 and e2 are tangent to the given surface and give the directions of maximal and minimal normal curvature on the surface, and e3 is equal to the surface normal.
What if, however, several eigenvalues are equal? For a 3 × 3 matrix, there are three cases, as follows:
If λ1 > λ2 = λ3, then we can only determine the major eigenvector e1, and the eigenvectors e2 and e3 can be any two orthogonal vectors which are also orthogonal to e1.
If λ1 = λ2 > λ3, then we can only determine the minor eigenvector e3, and the eigenvectors e1 and e2 can be any two orthogonal vectors which are also orthogonal to e3.
If λ1 = λ2 = λ3, then we cannot determine any eigenvector: Any three orthogonal vectors satisfy Equation (7.4).
Intuitively, when several eigenvalues are equal, one cannot determine preferential directions of maximal or minimal variations of the quantity encoded by the tensor. Consider again a 2 × 2 matrix describing a curvature tensor for several surfaces (see Figure 7.2). The ellipsoid surface in Figure 7.2(a) clearly admits two orthogonal principal directions (eigenvectors) of maximal and minimal curvature, respectively. In both these directions, the curvature is nonzero. A cylinder surface also admits two such unique directions (see Figure 7.2(b)). However, the minimal curvature (eigenvalue) is, in this case, zero. Indeed, the cross section along this direction is a line. On a spherical surface, however, the curvature has equal (nonzero) values in all directions, so the eigenvectors can be any two orthogonal vectors that are tangent to the surface (Figure 7.2(c)). The same situation happens for a plane, where, in addition, the curvature (eigenvalues) is also zero in all directions (Figure 7.2(d)). The same type of situation is encountered for 3 × 3 tensors, too.
Figure 7.2. Principal directions of the curvature tensor for various shapes. Cross sections tangent to the eigenvectors are colored to denote the eigenvalue type. Red denotes the major eigenvector direction; yellow denotes the minor eigenvector direction. Blue denotes cases when the eigenvector directions are arbitrary, as eigenvalues are equal.
In the following sections, we describe several visualization methods for tensor datasets. We begin with the simplest method, which visualizes the individual components of the tensor matrix (Section 7.2) and next detail more advanced visualization techniques that use the results of principal component analysis.
The simplest way to visualize a tensor dataset is to treat it as a set of scalar datasets. Given a 3 × 3 tensor matrix H, we can consider each of its nine components hij as a separate scalar field. Figure 7.3 shows these components for a single 2D slice taken from a brain diffusion tensor volumetric dataset.
Figure 7.3. Visualization of the nine scalar components hij of a 3 × 3 diffusion tensor from a DT-MRI scan.
Each component of the tensor matrix is visualized using a grayscale colormap that maps scalar value to luminance. Note that, for the ease of interpretation of the images, the points that fall outside the actual tissue, i.e., correspond to air and therefore contain noisy tensor values, have been set to a neutral uniform gray background color. Also, note that due to the symmetry of the tensor matrix, there are only six different images in the visualization (i.e., h12 = h21, h13 = h31, and h23 = h32).
In general, the tensor matrix components encode the second-order partial derivatives of our tensor-encoded quantity with respect to the global coordinate system. Often, the orientation of this system has no deep relation to the data variation, so these partial derivatives, taken separately, are not particularly meaningful. For example, in the case of a DT-MRI dataset, the measured tensor describes the diffusion strength of water in the brain tissue with respect to the coordinate frame that describes the position of the patient in the scanner device. Clearly, this coordinate frame has little to do with the actual orientation of the anatomical structures of interest that are to be visualized. In contrast, visualizing the eigenvectors and eigenvalues gives the directions and sizes of extremal variations of our tensor-encoded quantity, which are independent of any coordinate system. If extremal variations are meaningful for our problem, then visualizing these eigen-quantities can help. Visualizing the results of the principal component analysis (PCA) is discussed in the following section.
A better alternative to visualizing the tensor matrix components is to focus on data derived from these components that has a more intuitive physical significance.
As a first example, we shall use the average of the diagonal entries
Figure 7.4. Visualization of the mean diffusivity over sagittal, axial, and coronal slices. The small image in the lower-right corner displays the brain surface together with the three slices for orientation purposes.
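In code, the mean diffusivity is simply one third of the tensor's trace; since the trace is invariant under rotation, it also equals the average of the three eigenvalues. A minimal sketch (the helper name is illustrative, not from the book):

```python
import numpy as np

def mean_diffusivity(H):
    """Mean diffusivity of a 3x3 diffusion tensor: the average of its
    diagonal entries, which equals the average of its eigenvalues."""
    return np.trace(H) / 3.0
```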
Further insight can be gained by visualizing the results of principal component analysis. Recall that the eigenvectors of a tensor give the directions of extremal variations of the quantity encoded by the tensor at a given point, and the corresponding eigenvalues give the values of those extremal variations. In the case of diffusion data, the eigenvalues can be used to describe the degree of anisotropy of the tissue at a point. In an isotropic medium, all directions are identical. In our particular case, this means the diffusivity has the same value in all directions around a point. Anisotropic media exhibit different properties in different directions. In our case, this means different diffusivities in different directions around a point. Visualizing the anisotropy of a tensor dataset can give valuable insight into separating the neural fibers, which are highly anisotropic, from the rest of the tissue.
Several techniques have been proposed to estimate the anisotropy of a diffusion tensor in medical imaging. All use, in one way or another, the results of PCA on the tensor data. We will now describe a few of the best-known and simplest-to-compute anisotropy measures.
A first set of metrics proposed by Westin [Westin et al. 97] estimates the certainties cl, cp, and cs that a tensor has a linear, planar, or spherical shape, respectively. If the tensor’s eigenvalues are λ1 ≥ λ2 ≥ λ3, the respective certainties are
The expressions of the tensor shape certainties in Equation (7.10) describe how “far off” one eigenvalue is from the smaller ones. The division by the eigenvalue sum λ1 + λ2 + λ3 (three times the mean diffusivity) is used to normalize the estimations and obtain dimensionless numbers. Intuitively, the confidence values suggest how diffusion acts in the tissue. Imagine a spherical drop of water placed at the current point and left to diffuse for a while. Its shape will grow faster in the directions of high diffusivity and slower in the other directions. A visualization method that directly uses such shapes to show the tensor eigenvalues and eigenvectors is described in Section 7.5. For the time being, a simple way to use the anisotropy metrics proposed previously is to directly visualize the linear certainty cl as a scalar signal. High values of this metric indicate regions where the fibers are clearly delineated. Figure 7.5(a) shows the cl certainty plotted on a 2D axial slice. The white area in the middle outlines a highly anisotropic region, known anatomically as the corpus callosum, which contains a high density of neural fibers connecting the right and left brain hemispheres. The gray values indicate regions of low anisotropy that correspond to the gray matter tissue in the brain.
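A minimal sketch of these shape certainties, assuming the commonly cited Westin formulation that normalizes by the eigenvalue sum λ1 + λ2 + λ3:

```python
import numpy as np

def westin_certainties(lam):
    """Linear, planar, and spherical shape certainties c_l, c_p, c_s for
    eigenvalues lam = (l1, l2, l3) sorted as l1 >= l2 >= l3. The
    normalization by the eigenvalue sum is an assumption (one common
    variant of the Westin metrics)."""
    l1, l2, l3 = lam
    s = float(l1 + l2 + l3)
    cl = (l1 - l2) / s
    cp = 2.0 * (l2 - l3) / s
    cs = 3.0 * l3 / s
    return cl, cp, cs          # these sum to 1 by construction
```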
Figure 7.5. Different anisotropy measures for diffusion tensor data.
Another frequently used measure for the anisotropy is the fractional anisotropy [Pierpaoli and Basser 96], which is defined as
where
A related measure is the relative anisotropy [Pierpaoli and Basser 96], defined as
Figure 7.5(c) shows the relative anisotropy for the previous dataset.
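Both measures are simple functions of the eigenvalues. A sketch, using the standard formulation of the fractional anisotropy and one common normalization of the relative anisotropy (the latter is an assumption; published variants differ in the constant factor):

```python
import numpy as np

def fractional_anisotropy(lam):
    """Fractional anisotropy (FA) of eigenvalues lam = (l1, l2, l3)."""
    lam = np.asarray(lam, dtype=float)
    m = lam.mean()
    return np.sqrt(1.5 * ((lam - m) ** 2).sum() / (lam ** 2).sum())

def relative_anisotropy(lam):
    """Relative anisotropy (RA); normalization by sqrt(3) * mean assumed."""
    lam = np.asarray(lam, dtype=float)
    m = lam.mean()
    return np.sqrt(((lam - m) ** 2).sum()) / (np.sqrt(3.0) * m)
```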
Overall, the methods presented in this section reduce the visualization of a tensor field to that of one or more scalar quantities, such as the anisotropy, computed from the PCA of the tensor data. These can be examined using any of the scalar visualization methods presented in Chapter 5, such as color plots, slice planes, and isosurfaces.
In the previous section, we saw how to visualize various anisotropy metrics computed from the PCA of a tensor field using standard scalar visualization methods such as color mapping. However, in many cases, we are interested in visualizing not just the amount of anisotropy, but also the directions in which this anisotropy takes place.
Let us start with the simpler case when we are interested only in the direction of maximal variation of our tensor-encoded quantity. For this, we can visualize the major eigenvector field using any of the vector visualization methods presented in Chapter 6. Figure 7.6 illustrates an application of this idea. Here, we show a hedgehog plot of the major eigenvector over a coronal slice in the same DT-MRI dataset used in Figure 7.3. Vectors are uniformly seeded at all points where the accuracy of the diffusion measurements is above a certain confidence level (similar to Figure 7.3). The hue of the vector coloring indicates their direction. For this, we use the following simple color-mapping function:
Figure 7.6. Major eigenvector visualized with line glyphs colored by direction.
Using this function, eigenvectors along the x-axis are colored red, vectors along the y-axis are colored green, and vectors aligned with the z-axis of the dataset coordinate frame are colored blue. The icon in the bottom-right corner of Figure 7.6 illustrates the direction of the color mapping. This icon has to be interpreted as a shaded sphere, where the color of each point maps the direction of the radial vector at that point. The luminance indicates the measurement confidence level. Bright vectors indicate high-confidence measurement areas, whereas dark vectors indicate low-confidence (noisy) measurements.
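A sketch of such a direction-to-color mapping (assumed to correspond in spirit to Equation (7.13); the absolute values also remove the sign ambiguity of eigenvectors, since e and −e denote the same direction):

```python
import numpy as np

def direction_color(e1, confidence=1.0):
    """Map a major eigenvector to an RGB color: |x| -> red, |y| -> green,
    |z| -> blue, with luminance modulated by a confidence value in [0, 1]."""
    e1 = np.asarray(e1, dtype=float)
    e1 = e1 / (np.linalg.norm(e1) + 1e-12)   # guard against zero-length vectors
    return confidence * np.abs(e1)           # (R, G, B) components in [0, 1]
```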
In addition to a hedgehog plot, other vector visualization techniques described in Chapter 6 can be used. A relatively popular technique in this class is to simply color map the major eigenvector direction. For this, we use Equation (7.13) to color a slice plane. Figure 7.7 shows the result on a coronal slice of the same brain dataset discussed previously. As discussed in Chapter 5, the advantage of this technique is that it produces a more densely sampled visualization than hedgehogs. However, tensor datasets still have, at the time of writing, a relatively low resolution, typically less than 512³ voxels, which is visible in the color-coded slice planes, too. For example, the slice plane in Figure 7.7 has only 148 × 160 distinct pixels.
Figure 7.7. Major eigenvector direction color-coded on a slice plane.
However insightful, visualizing a single eigenvector or eigenvalue at a time may not be enough. In many cases, the ratios of eigenvalues, rather than their absolute values, are of interest. Consider the surface curvature example. For a plane, both major and medium eigenvalues are zero. For a cylinder, the major eigenvalue gives the cylinder curvature, as computed along one of the circular surface cross sections normal to its axis, whereas the medium eigenvalue is zero, denoting that the cylinder is flat in the direction of its axis. For a sphere, both major and medium eigenvalues are equal, but not zero, since the curvature of all normal cross sections passing through a point on a sphere is the same. How can we visualize all eigenvalues and eigenvectors of some tensor dataset together? Several techniques try to answer this question by building upon various elements from the scalar and vector visualization techniques presented in the previous chapters. These techniques are presented next.
The method for visualizing tensor data presented next is a generalization of the glyph concept used for visualizing vectors (see Section 6.2). We sample the dataset domain with a number of representative sample points. For each sample point, we construct a tensor glyph that encodes the eigenvalues and eigenvectors of the tensor at that point. For a 2 × 2 tensor dataset, this means encoding two eigenvalues and two eigenvectors per sample point. To do this, we construct a 2D ellipse whose half axes are oriented in the directions of the two eigenvectors and scaled by the absolute values of the eigenvalues. For a 3 × 3 tensor dataset, we construct a 3D ellipsoid that encodes the three eigenvectors and eigenvalues in a similar manner. In both cases, the overall tensor glyph visualization algorithm is quite simple. After performing the principal component analysis at a given sample point, we scale the ellipsoid glyph by the eigenvalues, rotate it using a matrix that has the eigenvectors as columns, and translate it to that point. We repeat the process for all sample points where we want to draw tensor ellipsoids.
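A minimal sketch of the per-glyph transform (plain NumPy, not the code of any particular toolkit): the 3 × 3 block rotates a unit sphere into the eigenvector frame and scales it by the absolute eigenvalues, and the last column translates it to the sample point.

```python
import numpy as np

def ellipsoid_glyph_transform(H, p, scale=1.0):
    """4x4 homogeneous transform mapping a unit sphere to the tensor
    ellipsoid of the 3x3 symmetric tensor H at sample point p."""
    lam, e = np.linalg.eigh(0.5 * (H + H.T))      # eigenvalues and eigenvectors
    if np.linalg.det(e) < 0.0:
        e[:, 0] = -e[:, 0]                        # keep a right-handed rotation
    M = np.eye(4)
    M[:3, :3] = e @ np.diag(scale * np.abs(lam))  # rotate * scale
    M[:3, 3] = np.asarray(p, dtype=float)         # translate to the sample point
    return M

# Usage: for a unit-sphere mesh with vertices V (N x 3),
# V_glyph = V @ M[:3, :3].T + M[:3, 3]
```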
Figure 7.8(a) illustrates the shapes that the ellipsoid glyph can assume. At the triangle corners, the extremal situations are shown, when each of the linear, planar, and spherical certainties cl, cp, and cs (see Equation (7.10)) has a maximal value of one, and the other two are zero. These situations correspond to line, disc, and sphere glyph shapes, respectively. The in-between glyph shapes correspond to different certainty values. Note that the triangular “glyph space” can be parameterized by cl, cp, and cs, using the fact that cl + cp + cs = 1, so we can see the different glyph shapes as reflecting different values of the certainties and the corresponding eigenvalue ratios.
Figure 7.8. Different types of tensor glyphs. (a) Ellipsoids. (b) Cuboids. (c) Cylinders. (d) Superquadrics.
Besides ellipsoids, several other shapes can be used to encode the tensor information, each offering a different trade-off between visual clarity and expressive power. For example, we can use parallelepipeds (also sometimes called cuboids) or cylinders instead of ellipsoids. Figures 7.8(b,c) show the shapes such glyphs can assume for different certainties or eigenvalue ratios.
Figure 7.9 demonstrates the use of various shapes to visualize a DT-MRI diffusion tensor dataset. The figure shows a zoomed-in detail of the corpus callosum structure (red glyphs). The glyphs are colored by direction, similar to the hedgehog visualization in Figure 7.6. The color saturation is modulated using the fractional anisotropy. Saturated glyphs indicate regions of high anisotropy, whereas gray ones indicate low-anisotropy regions. In contrast to the hedgehog visualization in the previous section, where seeds were distributed over a 2D slice, the tensor glyphs are here seeded over a 3D region. As we can see from this figure, smooth glyph shapes such as ellipsoids provide a less-distracting picture than shapes with sharp edges, such as the cuboids and cylinders.
Figure 7.9. Zoomed-in view of a DT-MRI dataset visualized with (a) ellipsoid, (b) cuboid, (c) cylinder, and (d) superquadric glyphs.
The cuboid, ellipsoid, and cylinder glyphs each have their own advantages and disadvantages. Cuboids are very good at clearly indicating the eigenvector directions with their facets, but thereby also fail to convey the directional ambiguity for eigenvectors corresponding to equal eigenvalues. Cylinders clearly convey the major eigenvector by their axis, but will brusquely rotate their shape by 90 degrees upon small eigenvalue changes (see Figure 7.8(c), middle). This causes confusing discontinuities in the visualization. Ellipsoids do not have any of these problems, but their two-dimensional projection does not always convey an unambiguous 3D orientation when viewed from certain angles. To solve this problem, superquadric glyphs have been introduced by Kindlmann [Kindlmann 04a]. These are defined as superquadric shapes parameterized as functions of the linear and planar certainty metrics cl and cp, respectively [Kindlmann 04b]. If we express the superquadric shape as an implicit function q(x, y, z) = 0, the actual superquadric glyph formulations become
Figure 7.8(d) shows the shapes that the superquadric glyphs can assume for different values of the certainties. Figure 7.9(d) shows the use of superquadric glyphs in visualizing the same tensor dataset that was targeted by cuboids, cylinders, and ellipsoids in the same image.
Yet another tensor glyph used in practice is an axes system, formed by three vector glyphs that separately encode the three eigenvectors scaled by their corresponding eigenvalues. This is essentially nothing but visualizing three superimposed vector fields with vector glyphs, as described in Chapter 6. This method may be easier to interpret for 2D datasets, where the glyph overlap is controllable by limiting the glyph size to the distance between sample points. However, for 3D datasets, ellipsoid glyphs tend to work better than axes glyphs. The latter simply create too much confusion due to the 3D spatial overlap, whereas the rounded, convex ellipsoid shapes tend to be more distinguishable even when a small amount of overlap is present. Just as for vector glyphs, scaling the tensor ellipsoids must be done with care. Eigenvalues can have a large range, so directly scaling the tensor ellipsoids by their values can easily lead to overlapping and/or very thin or very flat glyphs. We can solve this problem as we did for the vector glyphs by imposing a minimal and maximal glyph size, either by clamping or by using a nonlinear value-to-size mapping function.
Overall, tensor glyphs are probably one of the simplest ways to visualize tensor datasets. However, since they produce a sampled, discontinuous image, tensor glyph visualizations suffer from the same problems as vector glyphs. That is, they are prone to cluttering and have a limited spatial resolution. Moreover, in some datasets, such as DT-MRI tensor fields, one is interested in specifically emphasizing certain structures, such as neural fibers, a task that glyphs cannot accomplish. In the next section, we describe a method that is better suited for the visualization of such structures.
The use of tensor glyphs for visualizing tensor fields is analogous to that of vector glyphs, presented in Section 6.2 for visualizing vector fields. It is therefore natural to wonder whether one can construct counterparts to other vector field visualization techniques for visualizing tensor data.
Streamlines are one of the most effective and popular techniques for visualizing vector fields (see Section 6.5). The question arises whether (and how) we can use streamlines to get insight into a tensor field. Let us consider, for illustration purposes, the particular case of a DT-MRI tensor dataset. As explained earlier in this chapter, regions of high anisotropy in general, and of high values of the cl linear certainty metric in particular, correspond to neural fibers aligned with the major eigenvector e1. If we want to visualize the location and direction of such fibers, it is natural to think of tracking the direction of this eigenvector over regions of high anisotropy. In order to do this, we can readily use the streamline technique previously introduced in the context of vector fields.
A typical method for tracking fibers proceeds as follows. First, a seed region is identified. This is a region where the fibers should intersect, so it can be detected, e.g., by thresholding one of the anisotropy metrics presented in Section 7.3. Second, streamlines are densely seeded in this region and traced (integrated) both forward and backward in the major eigenvector field e1 until a desired stop criterion is reached. The stop criterion is, in practice, a combination of various conditions, each of which describes one desired feature of the resulting visualization. These can include, but are not limited to, a minimal value of the anisotropy metric considered (below which the fiber structure becomes less apparent), a maximal fiber length (just as for vector streamlines), exiting or entering a predefined region of interest specified by the user (which can describe a previously segmented anatomical structure), and a maximal distance from other tracked fibers (beyond which the current fiber “strays” from a potential bundle structure that is the target of the visualization).
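A compact sketch of such a tracker (the helpers tensor_at and fa_at, which interpolate the tensor field and its fractional anisotropy at a 3D point, are placeholders assumed here, not calls to a particular library; the stop criteria are reduced to anisotropy and length):

```python
import numpy as np

def major_eigenvector(H):
    lam, e = np.linalg.eigh(0.5 * (H + H.T))   # symmetric eigendecomposition
    return e[:, -1]                            # eigenvector of the largest eigenvalue

def trace_fiber(seed, tensor_at, fa_at, step=0.5, fa_min=0.15, max_len=200.0):
    """Trace forward and backward from seed along the major eigenvector
    field e1, stopping on low anisotropy or excessive fiber length."""
    halves = []
    for sign in (+1.0, -1.0):                  # two half-fibers from the seed
        p = np.asarray(seed, dtype=float)
        pts, prev_dir, length = [p.copy()], None, 0.0
        while length < max_len and fa_at(p) >= fa_min:
            d = sign * major_eigenvector(tensor_at(p))
            if prev_dir is not None and np.dot(d, prev_dir) < 0.0:
                d = -d                         # eigenvectors are sign-ambiguous
            p = p + step * d                   # Euler step; RK integration also possible
            pts.append(p.copy())
            prev_dir, length = d, length + step
        halves.append(pts)
    # join the reversed backward half with the forward half (seed shared once)
    return np.array(halves[1][::-1] + halves[0][1:])
```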
After the fibers are tracked, they can be visualized using the stream tubes technique (see Section 6.5.2), to further emphasize their geometry. Just as with vector visualization, the constructed tubes can be colored to show the value of a relevant scalar field, e.g., the major eigenvalue, anisotropy metric, or some other quantity scanned along with the tensor data.
The process of tracking fibers in DT-MRI datasets is quite delicate and typically requires a fair amount of user intervention, mainly during the step of defining the seed region. In order to assist users in this process, various integrated tools have been designed that allow the interactive visualization of scalar quantities on slices in the tensor dataset, the computation of anisotropy metrics, and the definition of regions of interest to be used to seed the fiber tracking process. Figure 7.10 shows a snapshot from such a tool, called Slicer. In the lower part of the tool snapshot, we see three axial, sagittal, and coronal slices displaying the fractional anisotropy metric, similar to Figure 7.5(b). The middle slice shows, in light blue, a region of interest that has been selected by the user based on the high anisotropy values. This region corresponds to a sagittal cross section through the corpus callosum structure in the brain. The top image in Figure 7.10 shows again the sagittal slice together with fibers tracked from seed points densely distributed in the region of interest. The fibers are colored by the fractional anisotropy metric, using a blue-to-red rainbow colormap, and visualized using stream tubes. The fibers end when the cl linear certainty metric falls below a value of 0.15.
Figure 7.10. Fiber tracking from a user-selected region in the corpus callosum constructed with the Slicer 3D medical visualization tool.
By removing the slice plane from the fiber visualization, we can analyze the resulting fiber structure in more detail (see Figure 7.11(a)). We notice here the symmetric fanning out of the fibers that emerge from the corpus callosum and “radiate” into the two hemispheres of the brain. Besides these fibers, we also notice a number of fibers whose directions are close to horizontal, which correspond to the structure of the corpus callosum itself.
Figure 7.11. (a) Fiber tracking detail of Figure 7.10. (b) Fiber clustering based on the mean closest-point distance.
Fiber tracks are most useful when shown in the context of the anatomy of the brain structure being explored. Figure 7.12 illustrates this. The two images show two different views of a set of fiber tracks for a DT-MRI scan of 81 × 106 × 76 voxels. The fibers are seeded densely in the region of the corpus callosum. Fibers are colored in red and shaded using the Phong lighting model, in order to better convey their curved shapes in space. To better understand the spatial embedding of the fibers, three elements are added: (a) a slice plane showing the tissue density, using grayscale color mapping; (b) an isosurface of the same scalar value, colored in yellow; and (c) ellipsoid tensor glyphs at all fiber points where the anisotropy exceeds a user-given minimum threshold, colored by the direction of the major eigenvector, using a directional colormap similar to the one shown in Figures 7.6 and 7.7. Finally, to diminish occlusion but still display the context information, the isosurface is shown only for the part of the dataset located behind the slice plane. The visualization focus stays on the tensor glyphs (showing the highest-anisotropy fiber points), in the context of the shaded fiber tracks, which themselves are shown in the more general context of the brain anatomy provided by the slice and isosurface. For more details on the implementation of this focus-and-context technique, we refer to [Peeters et al. 09].
Figure 7.12. Two views from a focus-and-context DTI visualization showing tensor ellipsoids, fiber tracks, and a slice plane and isosurface of the anisotropy measure. (Images courtesy of A. Vilanova, TU Delft, the Netherlands.)
However useful, this visualization shows a large number of disjoint fibers. In the actual anatomy, fibers are grouped into bundles containing quasi-parallel structures. It can be interesting to construct a visualization that mimics this behavior, as described in [O’Donnell and Westin 05]. To do this, we first define the directional similarity of two fibers as follows. Given two fibers a and b that are described as two 3D parametric curves a = a(t) and b = b(t) with t ∈ [0, 1], we define the distance
i.e., as the symmetric mean distance of N sample points on a fiber to the (closest points on the) other fiber. In Equation (7.15), the expression ||p(t), q|| denotes the smallest distance between a point p(t) on one fiber and all points on the other fiber q, i.e., min_{τ∈[0,1]} ||p(t) − q(τ)||. Fibers that are parallel and closely located will yield a low distance value. The similarity is defined as the inverse of the distance. Using this distance, the tracked fibers are next clustered in order of increasing distance, i.e., from the most to the least similar, until the desired number of clusters is reached. For this, the simple bottom-up hierarchical agglomerative technique introduced earlier in Section 6.7.3 for vector fields can be used. Figure 7.11(b) shows this technique applied to the same set of tracked fibers as in Figure 7.11(a). Here, five user-selected clusters are shown, using a different color for the fibers in each cluster. We notice several structures in this visualization that correspond to fibers emerging from distinct regions of the corpus callosum.
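A sketch of this distance for two fibers given as polylines (NumPy; the symmetric averaging follows the description above and is assumed to match Equation (7.15)):

```python
import numpy as np

def mean_closest_point_distance(a, b):
    """Symmetric mean closest-point distance between fibers a (N x 3)
    and b (M x 3), both given as arrays of sample points."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)   # N x M pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

The full pairwise-distance matrix makes this O(NM) per fiber pair, so large fiber sets typically require subsampling the fibers or using a spatial index before clustering.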
Although similar to streamline tracing, fiber tracking poses a number of specific problems. First, tensor data acquired via current DT-MRI scanning technology contains considerable noise in practice and has a sampling frequency that misses several fine-scale details. Given their small size, a non-negligible number of fibers can fall into this category. In contrast, many vector fields in visualization practice come from numerical simulations of physical processes, where there is no acquisition noise involved. Moreover, tensors are not directly produced by the scanning device, but obtained via several preprocessing steps, of which principal component analysis is the last one. All these steps introduce extra inaccuracies in the data, which have to be accounted for. To give just one example, the PCA estimation of eigenvectors can fail if the tensor matrices are not close to being symmetric. Even if the PCA works, fiber tracking needs a strong distinction between the largest eigenvalue and the other two in order to robustly determine the fiber directions. Such a distinction is missing, for example, in areas where two or more fiber bundles cross.
Fiber tracking in DT-MRI datasets is an active area of research. New techniques are being designed for better and easier definition of the seed regions, as well as more robust criteria for stopping the streamlines. New rendering techniques, such as volume rendering with data-driven opacity transfer functions, are also being developed to better convey the complex structures emerging from the tracking process. Although fiber tracking, as a term, is mainly encountered in the medical visualization arena, the techniques presented here can be used for any tensor dataset, once suitable seeding and stopping criteria have been defined.
In terms of toolkits, besides Slicer [Slicer 13] mentioned earlier in this section, the Diffusion Toolkit [Wang et al. 13] offers a powerful suite of utilities for both tracing fibers and visualizing the trace results from a wide range of DT-MRI dataset types. In contrast to Slicer, which is a more general framework for analyzing and visualizing 3D slice-based data volumes, the Diffusion Toolkit focuses on DT-MRI datasets, and thus offers more extensive and easier-to-use options for fiber tracking.
Apart from tracking variations, all visualization methods described in Section 7.6 draw the actual fibers as streamlines or stream tubes. While this approach is simple to implement and gives a “raw” view on the fiber data, it has several problems:
Region structure: Fibers, by definition, are one-dimensional objects. However, as indicated in Section 7.3, DTI datasets contain both regions of linear and planar anisotropy. To better understand the structure of the DTI tensor field, we would like to see the former regions rendered with fibers, and the latter regions rendered as surfaces.
Simplification: Densely-seeded fiber datasets can become highly cluttered. This makes it hard to discern the global structure implied by the fibers. Much like for vector fields (Section 6.7), a simplified visualization of fibers can be useful. In particular, techniques that help in understanding the relative depths of fibers in a rendering are valuable.
Context: Fibers exist in the context of a volumetric DTI scan. As such, showing combined visualizations of fibers and tissue density can provide more insight into the spatial distribution and connectivity patterns implied by fibers.
We next present a set of simple step-by-step techniques that address the above goals.
We start with a DTI volume of 128³ voxels. We next densely seed this volume and trace 150K fibers, using the Diffusion Toolkit software [Wang et al. 13]. Each resulting fiber is represented as a polyline consisting of an ordered set of 3D vertex coordinates. Figure 7.13(a) shows the resulting fibers, colored with a directional colormap. Although the volume is densely covered by fiber tracks, we clearly cannot see much structure in this image, due to occlusion.
Figure 7.13. Illustrative rendering of a fiber dataset (a), using alpha blending (b), anisotropy-based blending (c), sprite textures (d), and depth-dependent halos (e).
One simple step to reduce occlusion and see “inside” the fiber volume is to use additive alpha blending (Section 2.5). However, to get the expected blending results, fibers need to be sorted back-to-front as seen from the viewing angle. One simple way to do this efficiently is to transform all fiber vertices into eye coordinates, i.e., into a coordinate frame where the x- and y-axes match the screen x- and y-axes and the z-axis is parallel to the view vector, and next to sort them based on their z value. Although the sorting has to be implemented on the CPU, e.g., using the standard C++ template library, and needs to be executed every time we change the viewing direction, this solution can deliver interactive frame rates for a few million vertices on a modern PC. Figure 7.13(b) shows the result for an alpha value of 0.05. Compared to the opaque rendering in Figure 7.13(a), we now see the overall and inner fiber structure much better.
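A sketch of the eye-space sort (NumPy; the 4 × 4 modelview matrix is assumed to be available from the rendering code, and OpenGL's convention of looking down the negative z-axis is assumed):

```python
import numpy as np

def back_to_front_order(vertices, modelview):
    """Return the indices of fiber vertices (N x 3) sorted back-to-front:
    transform to eye coordinates and sort by eye-space z, most negative
    (farthest) first."""
    homog = np.c_[vertices, np.ones(len(vertices))]   # N x 4 homogeneous points
    eye_z = (homog @ modelview.T)[:, 2]               # eye-space z per vertex
    return np.argsort(eye_z)                          # ascending z = farthest first
```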
Although alpha blending reduces occlusion and better shows the fiber-set structure, it also acts in a global manner. We, however, are specifically interested in regions of high anisotropy. To emphasize such regions, we next modulate the colors of the drawn fiber points by the value of the combined linear-and-planar anisotropy
where cl, cp, and cs are the linear, planar, and spherical anisotropy metrics given by Equation 7.10. Figure 7.13(c) shows the result, where we render fiber points having ca > 0.2 color-coded by direction, and all other fiber points in gray. This image thus clearly shows the fiber subset that passes through regions of linear and/or planar anisotropy, i.e., it separates interesting from less interesting fibers. Note the difference between this technique, which postprocesses fibers to eliminate uninteresting regions after tracing, and the techniques presented in Section 7.6, where we used anisotropy to select the fiber seed points. The difference is subtle, but important: Using anisotropy to cull seed points will eliminate entire fibers seeded in these regions. Using anisotropy to cull fiber fragments after tracing will eliminate only the fiber fragments passing through these regions. In general, the second option represents a less aggressive culling, and thus offers more chances for meaningful fiber fragments to appear in the final visualization, without having to be very precise in the selection of the anisotropy threshold used.
A next step towards simplifying the rendering of the resulting fiber dataset is to use various existing rendering techniques for streamlines. Figure 7.13(d) shows a first option. Here, we construct stream tube-like structures around the rendered fibers. However, instead of using the 3D stream tube algorithm presented in Section 6.5.2, we proceed differently: We densely sample all fiber polylines, and render each resulting vertex with an OpenGL sprite primitive [Shreiner 04] that uses a small 2D texture. The texture encodes the shading profile of a sphere, i.e., it is bright in the middle and dark at the border (see Figure 7.13, top-right). The size of the texture, which is 10 by 10 pixels in Figure 7.13(d), gives the apparent thickness of the emerging tubes. Compared to stream tubes, the advantage of this technique is that it is much simpler to implement, and also much faster. Indeed, rather than having to construct the complex geometry of a 3D tube surrounding each streamline, and render the resulting set of polygons, we now only render one small 2D texture per fiber vertex. The only disadvantage compared to stream tubes is that this technique works in image space: If we zoom in, sample points will become farther apart in screen space, and thus the discrete nature of the overlapping sphere textures will become apparent. However, this problem can easily be countered by ensuring a dense enough sampling of the visualized fibers, so that the screen-space distance between two consecutive vertices on a fiber never exceeds half of the texture size.
A second option for illustrative (simplified) rendering of fiber tracks entails using the depth-dependent halos method presented for vector field streamlines in Section 6.8. Since this method only requires a set of 3D polylines, it can be directly applied to our fiber tracks. Figure 7.13(e) shows the result on our fiber dataset. As for vector field streamlines, depth-dependent halos effectively merge dense fiber regions into compact black areas, but separate fibers having different depths by a thin white halo border. Together with interactive viewpoint manipulation, this helps users in perceiving the relative depths of different fiber sets. Depth-dependent halos show several subtle differences compared to the sprite-based visualization in Figure 7.13(d). First, halos generate a monochrome image, while sprites generate a grayscale image, the latter showing the tubular fiber structure in a more intuitive manner. However, halos merge close fibers into compact, same-color regions better than sprites, thereby generating an arguably simpler visualization.
Although the illustrative rendering techniques presented above can improve the perception of depth and of the apparent structure present in the fiber set, we still face the problem that we cannot easily distinguish regions of linear anisotropy from regions of planar anisotropy visually. This issue is due to the fact that, despite the dense seed sampling used, the local fiber density is not easily controlled by the user, but determined by the streamline tracing algorithm. A consequence is that we cannot visually classify dense fiber regions as being (a) either thick tubular fiber bundles or (b) planar anisotropy regions covered by fibers.
A first step towards addressing this problem is to geometrically simplify the structure of the fiber set. For this, we apply a clustering algorithm, as follows: Given a set of fibers, we first estimate a 3D fiber density field ρ : ℝ³ → ℝ⁺, by convolving the positions of all fiber vertices, or sample points, with a 3D monotonically decaying kernel, such as a Gaussian or a convex parabolic function. Next, we advect each sample point a small distance along the normalized gradient ∇ρ/||∇ρ|| of the density field, and recompute the density ρ at the new fiber sample points. Iterating this process 10 to 20 times effectively shifts the fibers towards their local density maxima. In other words, this creates compact “fiber bundles” that describe groups of fibers which are locally close to each other. The bundling technique used here is essentially identical to the kernel density estimation edge-bundling (KDEEB) technique presented in Section 11.4.2 for the simplified visualization of graph datasets. The only difference is that, for fibers, we apply the bundling process in 3D space. Figure 7.14(a) shows the bundled fibers, rendered with directional color-coding, superimposed over the original unbundled fibers, rendered in gray. The bundled fibers occupy much less space and thus allow a better perception of the structure of the brain connectivity pattern they imply. We also see how the bundled fibers are positioned close to the local density maxima of the unbundled, gray fibers. An implementation of fiber bundling is presented in [Böttger et al. 14].
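A heavily simplified sketch of this bundling scheme (the grid resolution, kernel width, iteration count, and step size below are illustrative choices, not values from the cited work):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bundle(points, bounds, res=128, sigma=4.0, iters=15, step=1.0):
    """Shift fiber sample points (N x 3) towards their local density maxima.
    bounds = (min_xyz, max_xyz) is the bounding box of the dataset."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    scale = (res - 1) / (hi - lo)                     # world -> grid index scaling
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(iters):
        # splat the points into a grid and smooth -> kernel density estimate
        grid = np.zeros((res, res, res))
        idx = np.clip(((pts - lo) * scale).astype(int), 0, res - 1)
        np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
        rho = gaussian_filter(grid, sigma)
        g = np.array(np.gradient(rho))                # density gradient on the grid
        gp = g[:, idx[:, 0], idx[:, 1], idx[:, 2]].T  # gradient sampled at the points
        norm = np.linalg.norm(gp, axis=1, keepdims=True) + 1e-9
        pts += step * gp / norm / scale               # move towards density maxima
    return pts
```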
Figure 7.14. Bundled visualizations of the fiber dataset in Figure 7.13(a). (a) Isotropic fiber bundles rendered in the context of the original fiber dataset, drawn in gray. (b) Fiber bundles rendered as tubes. (c) Anisotropic fiber bundling. (d) Fiber bundles rendered in the context of a volume-rendered CT scan, drawn in gray. (Data courtesy of T. Isenberg, INRIA, France.)
Fiber bundling can be effectively combined with the rendering techniques presented earlier in this section. Figure 7.14(b) illustrates this by rendering the bundled fibers with a sprite texture to generate tube-like structures. Here, we use a kernel of smaller radius for estimating the fiber density ρ. Compared to the colored fibers in Figure 7.14(a), the bundles now exhibit a more complex, branching structure. Still, the amount of empty space between the bundled fibers and the unbundled ones is quite large, which allows a better view inside the dataset.
However effective in reducing spatial occlusion and thereby simplifying the resulting visualization, fiber bundling suffers from two problems. First, planar anisotropy regions, such as the corpus callosum region that connects the left and right brain hemispheres, are reduced to a few one-dimensional bundles. This conveys the wrong impression that such regions consist of a few tube-like structures, rather than of a 2D surface. Second, and more importantly, bundling effectively changes the positions of fibers. As such, fiber bundles should be interpreted with great care, as they illustrate only connectivity patterns, but have limited geometrical meaning.
To address the first problem mentioned above, we can modify the fiber bundling algorithm as follows. Instead of using an isotropic spherical kernel to estimate the fiber density, we can use an ellipsoidal kernel, whose axes are oriented along the directions of the eigenvectors of the DTI tensor field, and scaled by the reciprocals of the eigenvalues of the same field. The shapes of the kernels used at different spatial points thus resemble the ellipsoid glyphs used for visualizing DTI fields presented in Section 7.5, with the difference that our kernels are narrow along directions of large eigenvalues and wide along directions of small eigenvalues. Besides this change, the remainder of the bundling algorithm stays the same. The effects of this modification are as follows: In linear anisotropy regions, fibers will strongly bundle towards the local density center, but barely shift in their tangent directions. In planar anisotropy regions, fibers will strongly bundle towards the implicit fiber-plane, but barely shift within this plane. Additionally, we use the values of cl and cp (Equation 7.10) to render the above two fiber types differently. For fiber points located in linear anisotropy regions (cl large), we render point sprites using spherical textures, as used earlier to render tubes. For fiber points located in planar anisotropy regions (cp large), we render 2D quads oriented perpendicular to the direction of the eigenvector corresponding to the smallest eigenvalue, i.e., tangent to the underlying fiber-plane. Figure 7.14(c) shows the result. In linear anisotropy regions, located close to the fiber terminations, we see tube-like structures. In planar anisotropy regions, such as the corpus callosum, we effectively see a planar-like structure. Overall, the effect is to simplify the fiber set in tubular regions, but keep the planar regions intact.
Fiber bundling is a promising direction for the generation of simplified structural visualizations of fiber tracts for DTI fields. However, as mentioned, a major objection is that bundles do not show the actual fiber positions, and thereby potentially convey erroneous anatomical insights on the brain structure. As this family of techniques is quite new, further field evaluations are required to analyze the strengths and limitations of such visualizations.
As already outlined, fibers do not exist in a void, but are structures that connect parts of an anatomic structure such as a brain. It thus is natural to augment fiber visualizations by showing the anatomical context in which fibers exist. Figure 7.14(d) illustrates this. Here, we show fiber bundles (shown as blue-white structures) with alpha blending together with a volume rendering of a CT dataset of the same brain (shown in gray). Bundles are now much easier to place in a spatial context, i.e., we can determine which are the actual brain regions that fibers connect by correlating the structures visible in the CT rendering with the displayed fiber bundles. One added value of this technique is that the geometric distortion caused by bundling is now partially compensated by showing the original, undistorted, CT scan data.
In the previous section, we saw how fiber tracking can be used to visualize tensor data. Essentially, the principle of fiber tracking is based on integrating streamlines along the major eigenvector of the tensor field, using various stop criteria determined by other quantities derived from the tensor data, such as the anisotropy measure in the case of DT-MRI datasets. However, fibers do not visualize directional information from the tensor field beyond the major eigenvector. As discussed in Section 7.1, this information is important, as it gives directional insight into how the tensor anisotropy varies in space. In contrast, tensor glyphs do visualize this information, but, just like their vector counterparts, lack the spatial continuity of streamlines. The question arises whether we can enhance the streamline metaphor to visualize this additional information, i.e., combine the advantages of streamlines and tensor glyphs.
Hyperstreamlines provide an answer to this question. Their principle is quite simple. First, we perform principal component analysis as explained in Section 7.1 to decompose the tensor field into three eigenvector fields ei and three corresponding scalar eigenvalue fields λ1 ≥ λ2 ≥ λ3. Next, we construct stream tubes in the major eigenvector field e1, just as done for the fiber tracking method described in Section 7.6. At each point along such a stream tube, we now wish to visualize the medium and minor eigenvectors e2 and e3. For this, instead of using a circular cross section of constant size and shape, as we did for the fiber tracking, we now use an elliptic cross section, whose axes are oriented along the directions of the medium and minor eigenvectors e2 and e3 and scaled by λ2 and λ3, respectively.
Figure 7.15 illustrates this process for a hyperstreamline traced between two points A and B in a tensor field. The local thickness of the hyperstreamlines gives the absolute values of the tensor eigenvalues, whereas the ellipse shape indicates their relative values as well as the orientation of the eigenvector frame along a streamline. Circular cross sections indicate that the medium and minor eigenvalues are equal. If we want to show the value of the major eigenvalue, we can encode it as color.
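A sketch of how one elliptic cross section can be built at a point of the traced streamline (NumPy; successive sections along the curve would then be connected into a tube mesh):

```python
import numpy as np

def hyperstreamline_section(center, H, n_pts=16, scale=1.0):
    """Elliptic cross section at one hyperstreamline point: the ellipse
    lies in the plane of the medium and minor eigenvectors e2, e3 and has
    half-axes scale*lambda_2 and scale*lambda_3. Returns n_pts x 3 points."""
    lam, e = np.linalg.eigh(0.5 * (H + H.T))   # eigenvalues in ascending order
    e3, e2 = e[:, 0], e[:, 1]                  # minor and medium eigenvectors
    l3, l2 = abs(lam[0]), abs(lam[1])
    t = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    return (np.asarray(center, dtype=float)
            + scale * l2 * np.outer(np.cos(t), e2)
            + scale * l3 * np.outer(np.sin(t), e3))
```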
Figure 7.15. Hyperstreamline construction. The major, medium, and minor eigenvectors at the hyperstreamline’s start and end points A and B are depicted in blue, red, and green, respectively. The streamline of the major eigenvector field e1 is drawn dashed.
Figure 7.16 shows the usage of hyperstreamlines in a diffusion tensor imaging (DT-MRI) brain dataset. Several hyperstreamlines are seeded at a number of locations in the dataset, following the techniques described for fiber tracking in Section 7.6. However, instead of tracing stream tubes of circular cross section, we now use hyperstreamlines. Color indicates the local direction of the hyperstreamlines, following the technique discussed in Section 7.4. Tracing the hyperstreamlines is stopped when the local anisotropy falls below a certain threshold. This is visible in the fact that some hyperstreamlines end in large, funnel-like structures. At these points, the eigenvalues corresponding to the medium and minor eigenvectors are relatively large, so the anisotropy is low, denoting a less pronounced fiber structure.
Figure 7.16. DT-MRI brain dataset visualized with hyperstreamlines colored by direction. (Image courtesy of A. Vilanova, TU Delft, the Netherlands.)
Several variations of this construction are possible. Any of the three eigenvectors can be used for the hyperstreamline direction. Besides ellipses, other shapes can be used for the cross section. For example, we can use a cross whose arms are scaled and rotated to represent the medium and minor eigenvectors. In general, hyperstreamlines provide better visualizations than tensor glyphs. However, just as for standard streamlines, suitable seed points and hyperstreamline lengths must be chosen to cover the domain appropriately, which can be a delicate process. Moreover, scaling the cross sections must be done with care, in order to avoid overly thick hyperstreamlines that cause occlusion or even self-intersection. For this, we can use the same size-scaling techniques as for vector and tensor glyphs (see Section 6.2).
In this chapter, we have presented a number of methods for visualizing tensor data. Starting from a 2D or 3D dataset containing 2 × 2 or 3 × 3 tensor matrices at each sample point that typically contains second-order partial derivatives of some quantity, we use principal component analysis (PCA) to extract the eigenvectors and eigenvalues of the tensor data. These describe the directions of extremal variation of the quantity encoded by the tensor. These directions are independent of the coordinate frame in which the partial derivatives contained in the tensor matrix have been computed. In many applications, these directions have a particular meaning, so they are a prime input for the visualization.
Tensor data can be visualized by reducing it to one scalar or vector field, which is then depicted by specific scalar or vector visualization techniques. These scalar or vector fields can be the direct outputs of the PCA (eigenvalues and eigenvectors) or derived quantities, such as various anisotropy metrics. Alternatively, tensors can be visualized by displaying several of the PCA results combined in the same view, as done by tensor glyphs or hyperstreamlines.
Tensor visualization is an active, growing research area. Many visualization methods have emerged that target particular application areas with specific questions, such as clinical investigations of DT-MRI medical datasets. These visualization methods often integrate several datasets in a single view, apart from the tensor information, and also provide sophisticated user interaction mechanisms for exploring the datasets, such as selecting regions of interest, adjusting color transfer functions, and controlling the various parameters of the visualization process. For more detailed information on these tools and techniques, we refer to the documentation of the tools themselves [Kindlmann 06, Slicer 13, Schroeder et al. 06, National Library of Medicine 14].