There are two main ways to construct a geometric fractal, or equivalently, two main classes of geometric fractals [Tél 88]. The first class describes a bounded geometric object, e.g., the middle-third Cantor set or the Sierpiński triangle, for which the distance between points, or the diameter of replicated self-similar patterns, vanishes. There is no smallest scale, and we “zoom in” on the object with a magnification that approaches infinity to view features that grow infinitely small. To calculate a fractal dimension (e.g., the box counting dimension dB, the Hausdorff dimension dH, or the correlation dimension dC), we let the box size or radius approach zero.
The second class of fractals described in [Tél 88] comprises geometric objects whose size increases to infinity. Such objects are often embedded in some regular lattice for which the inter-lattice spacing remains constant while the size of the object increases to infinity. This class characterizes objects such as aggregates constructed by a growth process, e.g., powder packing or DLA (Sect. 10.4). For such objects, there is no largest scale, and we “zoom out” on the object with an ever-increasing window size to see the growing object. This second class also describes networks whose size grows to infinity. A simple example of such a network is the square rectilinear grid studied in Sect. 11.4, where we let the side length approach infinity.
The starting point for all our fractal dimensions has been the scaling relationship bulk ∼ size^d presented at the beginning of Chap. 1. In this chapter we consider this scaling for Tél’s second class, in particular for a sequence of networks {Gt} such that Gt grows infinitely large, where by “Gt grows infinitely large” we mean that the diameter of Gt approaches infinity as t → ∞. For brevity we write Δt to denote the diameter of Gt.
One possible approach to developing a scaling law for {Gt} is to compute a minimal cover of Gt by boxes of size s and study how the number of boxes in the cover scales as s → ∞ and t → ∞. Another possible approach is to study how the correlation sum of Gt, defined by (9.11), scales as r → ∞ and t → ∞. A third and much simpler approach is to study how the mass Nt of Gt, which we define to be the number of nodes in Gt, scales with Δt. Defining dM through the scaling
Nt ∼ (Δt)^dM,  (12.1)
the fractal dimension proposed in [Zhang 08] to characterize {Gt} is given by Definition 12.1 below.1
Definition 12.1 If
dM ≡ lim_{t→∞} ln Nt ∕ ln Δt  (12.2)
exists and is finite, then dM is the mass dimension of the sequence {Gt}, and {Gt} is fractal. If the limit (12.2) is infinite, then {Gt} is non-fractal. □
According to Definition 12.1, the sequence {Gt} is fractal if dM exists and is finite; otherwise, {Gt} is non-fractal. This is a very different meaning of “fractal” than that given by Definition 7.5, which says that a given finite network is fractal if dB exists.
Although the scalings Nt ∼ (Δt)^dM and C(r) ∼ r^dC (see (11.8) in Sect. 11.1) are similar, Definition 12.1 defines dM only for an infinite sequence of networks, not for a single finite network. An advantage of dM over the correlation dimension dC is that it is typically much easier to compute the network diameter than to compute C(n, r) for each n and r, as is required to compute C(r) using (11.6).
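Where a finite prefix G1, …, GT of such a sequence is available, dM can be estimated numerically. The following Python sketch is my own illustration (it is not from [Zhang 08]); it compares the direct ratio ln NT ∕ ln ΔT with a least-squares slope of ln Nt versus ln Δt. The sample data follow Nt = 4^t + 1 and Δt = 3·2^t − 2, which, under my reading of the p = 0 construction of Sect. 12.1, corresponds to dM = 2.

```python
# Minimal sketch: estimating the mass dimension dM of Definition 12.1 from a
# finite prefix of a network sequence.  The (N_t, Delta_t) pairs below follow
# N_t = 4**t + 1 and Delta_t = 3*2**t - 2 (the p = 0 construction of Sect. 12.1,
# as reconstructed there), so the true value is dM = 2.
import math

data = [(5, 4), (17, 10), (65, 22), (257, 46), (1025, 94)]  # (N_t, Delta_t)

# Direct ratio estimate: dM ~ ln N_t / ln Delta_t for the largest available t.
N_T, D_T = data[-1]
print("ratio estimate of dM:", math.log(N_T) / math.log(D_T))

# Least-squares slope of ln N_t versus ln Delta_t, often steadier for small t.
xs = [math.log(D) for _, D in data]
ys = [math.log(N) for N, _ in data]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
print("slope estimate of dM:", slope)
```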
Exercise 12.1 Can Definition 12.1 be adapted to define dM for a finite network? Would such a definition be useful? □
In the next two sections we examine three methods of constructing a sequence of networks {Gt} such that Δt → ∞. The constructions are parameterized by a probability p ∈ [0, 1]. For the first two constructions, studied in Sect. 12.1, {Gt} is (using the terminology of Definition 12.1) fractal. For the third construction, studied in Sect. 12.2, {Gt} is non-fractal, which leads to the definition of the transfractal dimension of the sequence {Gt}. In the remaining sections of this chapter we present the pioneering results of Nowotny and Requardt [Nowotny 88] on the fractal dimension of infinite networks.
12.1 Mass Dimension
A procedure was presented in [Zhang 08] that uses a probability p to construct a sequence {Gt} that exhibits a transition from fractal to non-fractal behavior as p increases from 0 to 1. For p = 0, the sequence does not exhibit the small-world property and has dM = 2, while for p = 1 the sequence does exhibit the small-world property and dM = ∞. The construction begins with G0, which is a single arc, and a given p ∈ [0, 1]. Let Gt be the network after t steps. The network Gt is derived from Gt−1 as follows.
For each arc in Gt−1, with probability p we replace the arc with a path of 3 hops, introducing the two new nodes c and d, as illustrated by the top branch of Fig. 12.1 (the original arc is retained as the center arc of the chain); with probability 1 − p we replace the arc with a path of 4 hops, introducing the three new nodes c, d, and e, as illustrated by the bottom branch of Fig. 12.1 (the endpoints of the original arc are now separated by two hops). For p = 1, the first three generations of this construction yield the networks of Fig. 12.2.
Figure 12.1
Constructing Gt from Gt−1
Figure 12.2
Three generations with p = 1
Colored arcs in the networks for t = 1 and t = 2 show the construction with p = 1: the red arc in the t = 1 network becomes the center arc in a chain of three red arcs in the t = 2 network, the green arc in the t = 1 network becomes the center arc in a chain of three green arcs in the t = 2 network, and similarly for the blue arc in the t = 1 network. For p = 0, the first three generations of this construction yield the networks of Fig. 12.3. This construction builds upon the construction in [Rozenfeld 09] of (u, v) trees.
Figure 12.3
Three generations with p = 0
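The following Python sketch implements one generation of this construction under my reading of Figs. 12.1–12.3: with probability p the arc is kept and becomes the center arc of a 3-arc chain, and with probability 1 − p it is replaced by the middle two hops of a 4-arc chain. The function name and the arc-list representation are illustrative choices, not taken from [Zhang 08].

```python
# A sketch of one generation of the arc-replacement construction, under my
# reading of Figs. 12.1-12.3: with probability p the arc (u, v) is kept and
# becomes the center of a 3-hop chain c-u-v-d; with probability 1 - p the arc
# is replaced by the 4-hop chain d-u-c-v-e (so dist(u, v) becomes 2).
import random

def next_generation(arcs, p, rng=random):
    """Given the arc list of G_{t-1}, return the arc list of G_t."""
    new_arcs = []
    counter = max(max(a) for a in arcs) + 1  # next unused node label
    for (u, v) in arcs:
        if rng.random() < p:                 # 3-hop branch: keep (u, v)
            c, d = counter, counter + 1
            counter += 2
            new_arcs += [(c, u), (u, v), (v, d)]
        else:                                # 4-hop branch: drop (u, v)
            c, d, e = counter, counter + 1, counter + 2
            counter += 3
            new_arcs += [(d, u), (u, c), (c, v), (v, e)]
    return new_arcs

arcs = [(0, 1)]              # G_0 is a single arc
for t in range(1, 6):
    arcs = next_generation(arcs, p=0.0)
nodes = {n for a in arcs for n in a}
print(len(nodes), len(arcs))  # for p = 0 this prints 4**5 + 1 = 1025 and 4**5 = 1024
```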
Let Nt be the expected number of nodes in Gt, let At be the expected number of arcs in Gt, and let Δt be the expected diameter of Gt. The quantities Nt, At, and Δt depend on p, but for notational simplicity we omit that dependence. Since each arc is replaced by three arcs with probability p, and by four arcs with probability 1 − p, for t ≥ 1 we have
At = [3p + 4(1 − p)] At−1 = (4 − p) At−1 = (4 − p)^t,  (12.3)
where the final equality follows since A0 = 1, as G0 consists of a single arc.
Let xt be the number of new nodes created in the generation of Gt. Since each existing arc spawns two new nodes with probability p and spawns three new nodes with probability 1 − p, from (12.3) we have
xt = [2p + 3(1 − p)] At−1 = (3 − p)(4 − p)^{t−1}.  (12.4)
Since G0 has 2 nodes, for t ≥ 1 we have
Nt = 2 + Σ_{i=1}^t xi = 2 + (3 − p) Σ_{i=1}^t (4 − p)^{i−1} = (4 − p)^t + 1.  (12.5)
To compute the diameter Δt of Gt, first consider the case p = 1. For this case, distances between existing node pairs are not altered when new nodes are added. At each time step the network diameter increases by two, so since Δ0 = 1 we have Δt = 2t + 1. Since Nt ∼ (4 − p)^t = 3^t, the network diameter grows as the logarithm of the number of nodes, so {Gt} exhibits the small-world property for p = 1. From (12.2) we have dM = ∞.
Now consider the case 0 ≤ p < 1. For this case, the distances between existing nodes are increased. Consider an arc in the network Gt−1, and the endpoints i and j of this arc. With probability p, the distance between i and j in Gt is 1, and with probability 1 − p, the distance between i and j in Gt is 2. The expected distance between i and j in Gt is therefore p + 2(1 − p) = 2 − p. Since each Gt is a tree, for t ≥ 1 we have Δt ∼ (2 − p)^t, so by (12.2) and (12.5),
dM = lim_{t→∞} ln Nt ∕ ln Δt = ln(4 − p) ∕ ln(2 − p),
so dM is finite, and {Gt} does not exhibit the small-world property. For p = 0 we have dM = ln 4 ∕ ln 2 = 2. Also, dM → ∞ as p → 1.
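A quick numerical check of the reconstructed formula dM = ln(4 − p) ∕ ln(2 − p) at a few values of p (a sketch, not a result quoted from [Zhang 08]):

```python
# dM = ln(4 - p)/ln(2 - p): equals 2 at p = 0 and grows without bound as p -> 1.
import math
for p in (0.0, 0.25, 0.5, 0.75, 0.9, 0.99):
    print(p, math.log(4 - p) / math.log(2 - p))
```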
Next we present the analysis in [Li 14] of the properties of the family of networks introduced in [Gallos 07a]. The parameters used to construct these networks are two positive integers m and x, where x ≤ m, and a parameter α ∈ [0, 1]. The network G0 at time t = 0 consists of a single arc. The network Gt is derived from Gt−1 as follows.
Step 1. For each arc in Gt−1, connect each endpoint of the arc to m new nodes. This step creates a total of 2mAt−1 new nodes, where At−1 is the number of arcs in Gt−1. This is illustrated, for a single arc, in Fig. 12.4 for m = 3. Node u spawns nodes c, e, and g, and node v spawns nodes d, f, and h.
Figure 12.4
Gt−1 and Gt for x = 2 and m = 3
Step 2. Generate a random number p, using the uniform distribution on [0, 1]. (Thus p will vary from generation to generation.)
Step 3. We consider two cases, both illustrated in Fig. 12.4 for m = 3 and x = 2. Case 1. If α ≤ p ≤ 1, none of the arcs in Gt−1 is an arc in Gt. However, for each arc a in Gt−1 we create x new arcs as follows. Let u and v be the endpoints of a. We select x of the m nodes created in Step 1 that are attached to u, and x of the m nodes created in Step 1 that are attached to v, and connect these two sets of x nodes by x arcs. In Fig. 12.4, arc (u, v) is not preserved, and arcs (c, d) and (e, f) are created. Case 2. If 0 ≤ p < α, each arc a in Gt−1 is also an arc in Gt. Additionally, for each such arc a, with endpoints u and v, we create x − 1 new arcs as follows. We select x − 1 of the m nodes created in Step 1 that are attached to u, and x − 1 of the m nodes created in Step 1 that are attached to v, and connect these two sets of x − 1 nodes by x − 1 arcs. In Fig. 12.4, arc (u, v) is preserved, and arc (e, f) is created. Figure 12.5 illustrates three generations when α = 0, m = 2, and x = 2. Since α = 0, we are always in Case 1 of Step 3.
Figure 12.5
Three generations for α = 0, x = 2, and m = 2
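The sketch below implements one generation of Steps 1–3 under my reading of Fig. 12.4; the helper name, the arc-list representation, and the particular choice of which new nodes receive the x (or x − 1) new arcs are illustrative assumptions.

```python
# A sketch of one generation of the [Gallos 07a] construction (Steps 1-3 above).
# Networks are stored as arc lists with integer node labels; the per-generation
# random number p is compared with alpha, as in Step 2.
import random

def next_generation(arcs, m, x, alpha, rng=random):
    new_arcs = []
    counter = max(max(a) for a in arcs) + 1
    p = rng.random()                    # Step 2: one draw per generation
    keep_old_arcs = (p < alpha)         # Case 2 of Step 3
    for (u, v) in arcs:
        u_new = list(range(counter, counter + m)); counter += m
        v_new = list(range(counter, counter + m)); counter += m
        new_arcs += [(u, w) for w in u_new]              # Step 1
        new_arcs += [(v, w) for w in v_new]
        if keep_old_arcs:
            new_arcs.append((u, v))                      # Case 2: keep (u, v) ...
            pairs = zip(u_new[: x - 1], v_new[: x - 1])  # ... plus x - 1 new arcs
        else:
            pairs = zip(u_new[:x], v_new[:x])            # Case 1: x new arcs
        new_arcs += list(pairs)
    return new_arcs

arcs = [(0, 1)]                          # G_0 is a single arc
for t in range(4):
    arcs = next_generation(arcs, m=2, x=2, alpha=0.0)
print(len(arcs), (2 * 2 + 2) ** 4)       # both 1296: A_t = (2m + x)^t
```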
In general, for this construction we have At = (2m + x)At−1, which implies At = (2m + x)^t. As for the number of nodes, each arc in Gt−1 spawns 2m new nodes, so
Nt = Nt−1 + 2mAt−1 = Nt−1 + 2m(2m + x)^{t−1},
which has the solution
Nt = 2 + [2m ∕ (2m + x − 1)] [(2m + x)^t − 1].  (12.8)
Let Ht be the hop count (i.e., the length of a shortest path) between two given nodes of Gt. From Fig. 12.4 we see that with probability 1 − α we have Ht+1 = 3Ht, and with probability α we have Ht+1 = Ht. Thus, in expectation, Ht+1 = (1 − α)3Ht + αHt = (3 − 2α)Ht, which implies Δt+1 = (3 − 2α)Δt. Since Δ0 = 1, then Δt = (3 − 2α)^t. By (12.2), the mass dimension dM of the infinite sequence {Gt} is
dM = lim_{t→∞} ln Nt ∕ ln Δt = ln(2m + x) ∕ ln(3 − 2α).  (12.9)
Next we examine how the node degree changes with each generation. For a given t, let n be any node in Gt and consider its node degree δn. Since n is the endpoint of δn arcs, in Step 1 node n gets mδn new neighbors. In Step 3, if Case 1 occurs, which happens with probability 1 − α, node n loses each of the δn arcs currently incident to n. If Case 2 occurs, which happens with probability α, the x − 1 new arcs created are not incident to n. Thus the expected node degree of n in Gt+1 is δn + mδn − (1 − α)δn = δn(m + α), so the degree of each node increases by the factor m + α. It was shown in [Song 06, Li 14] that {Gt} is scale-free, with node degree distribution pk ∼ k^{−γ}, where
γ = 1 + ln(2m + x) ∕ ln(m + α).  (12.10)
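Evaluating the two expressions (12.9) and (12.10) for a few parameter choices; note that the formula for γ above is my reading of [Li 14] and should be treated as a reconstruction.

```python
# Evaluate the reconstructed formulas (12.9) and (12.10) for several (m, x, alpha).
# Avoid alpha = 1, where ln(3 - 2*alpha) = 0 and dM diverges.
import math
for (m, x, alpha) in [(2, 2, 0.0), (3, 2, 0.5), (3, 3, 0.9)]:
    dM = math.log(2 * m + x) / math.log(3 - 2 * alpha)
    gamma = 1 + math.log(2 * m + x) / math.log(m + alpha)
    print(m, x, alpha, round(dM, 3), round(gamma, 3))
```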
12.2 Transfractal Networks
A deterministic recursive construction can be used to create a sequence of self-similar networks. Each is called a (u, v)-flower, where u and v are positive integers [Rozenfeld 09]. By varying u and v, both fractal and non-fractal sequences can be generated. The construction starts at time t = 1 with a cyclic graph (a ring), with w ≡ u + v arcs and w nodes. At time t + 1, replace each arc of the time t network by two parallel paths, one with u arcs, and one with v arcs. Without loss of generality, assume u ≤ v. Figure 12.6 illustrates three generations of a (1, 3)-flower. The t = 1 network has 4 arcs. To generate the t = 2 network, arc a is replaced by the path {b} with one arc, and also by the path {c, d, e} with three arcs; the other three arcs in subfigure (i) are similarly replaced. To generate the t = 3 network, arc d is replaced by the path {p} with one arc, and also by the path {q, r, s} with three arcs; the other fifteen arcs in subfigure (ii) are similarly replaced. The self-similarity of the (u, v)-flowers follows from an equivalent method of construction: generate the time t + 1 network by making w copies of the time t network, and joining the copies at the hubs.
Figure 12.6
Three generations of a (1, 3)-flower
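A short Python sketch of the (u, v)-flower construction, storing each generation as an arc list; the function name and the representation are illustrative choices, not code from [Rozenfeld 09].

```python
# A sketch of the (u, v)-flower construction: at each generation every arc is
# replaced by two parallel paths, one of u hops and one of v hops.
def flower(u, v, generations):
    w = u + v
    arcs = [(i, (i + 1) % w) for i in range(w)]      # t = 1: a ring of w arcs
    counter = w
    for _ in range(generations - 1):
        new_arcs = []
        for (a, b) in arcs:
            for hops in (u, v):                      # the two parallel paths
                path = [a] + list(range(counter, counter + hops - 1)) + [b]
                counter += hops - 1
                new_arcs += list(zip(path, path[1:]))
        arcs = new_arcs
    return arcs

arcs = flower(1, 3, 3)                               # three generations, as in Fig. 12.6
nodes = {n for a in arcs for n in a}
print(len(arcs), len(nodes))   # 64 arcs; N_3 = (2/3)*4**3 + 4/3 = 44 nodes
```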
Let Gt denote the (u, v)-flower at time t. The number of arcs in Gt is At = w^t = (u + v)^t. The number Nt of nodes in Gt satisfies the recursion Nt = wNt−1 − w. With the boundary condition N1 = w we obtain [Rozenfeld 09]
Nt = [(w − 2) ∕ (w − 1)] w^t + w ∕ (w − 1).  (12.11)
Consider the case u = 1. It can be shown [Rozenfeld 09] that for (1, v)-flowers and odd v we have
Δt = (v − 1)t + (3 − v)∕2,
while in general, for (1, v)-flowers and any v,
Δt ∼ (v − 1)t.  (12.12)
Since Nt ∼ w^t, then Δt ∼ [(v − 1) ∕ ln w] ln Nt, so (1, v)-flowers enjoy the small-world property. By (12.2), (12.11), and (12.12), for (1, v)-flowers we have
dM = lim_{t→∞} ln Nt ∕ ln Δt = lim_{t→∞} t ln w ∕ ln[(v − 1)t] = ∞,  (12.13)
so by Definition 12.1 the sequence of (1, v)-flowers is non-fractal.
We want to define a new type of fractal dimension that is finite for the sequence of (1, v)-flowers and for other non-fractal sequences of networks. For (1, v)-flowers, from (12.11) we have Nt ∼ [(w − 2) ∕ (w − 1)] w^t as t → ∞, and hence ln Nt ∼ t ln w. From (12.12) we have Δt ∼ (v − 1)t as t → ∞. Since both ln Nt and Δt behave like a linear function of t as t → ∞, but with different slopes, let dE be the ratio of the slopes.2 Then
dE ≡ lim_{t→∞} ln Nt ∕ Δt = ln w ∕ (v − 1)  (12.14)
and
Nt ∼ e^{dE Δt},  (12.18)
which says that, for t ≫ 1, when the diameter increases by αt, the number of nodes increases by a factor which is exponential in dE αt. As observed in [Rozenfeld 09], in (12.18) there is some arbitrariness in the selection of e as the base of the exponential term, since from (12.14) the numerical value of dE depends on the logarithm base.
Definition 12.2 [Rozenfeld 09] If (12.18) holds as t → ∞ for a sequence of self-similar graphs {Gt}, then dE is called the transfinite fractal dimension. A sequence of self-similar networks whose mass dimension dM is infinite, but whose transfinite fractal dimension dE is finite, is called a transfinite fractal, or simply a transfractal. □
The transfractal dimension “usefully distinguishes between different graphs of infinite dimensionality” [Rozenfeld 09]. Thus the sequence of (1, v)-flowers is transfractal, with transfinite fractal dimension dE = ln(1 + v) ∕ (v − 1). Finally, consider (u, v)-flowers with u > 1. It can be shown [Rozenfeld 09] that Δt ∼ u^t. Using (12.11), we have Nt ∼ w^t, so
dM = lim_{t→∞} ln Nt ∕ ln Δt = ln(u + v) ∕ ln u.
Since dM is finite, this sequence of networks is fractal, not transfractal, and these networks do not enjoy the small-world property. In [Komjáthy 19], the definition of the transfractal dimension was extended by considering the box counting dimension for infinite sequences of graphs.
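The two dimensions just derived are easy to tabulate; the sketch below evaluates the reconstructed expressions dE = ln(1 + v) ∕ (v − 1) for (1, v)-flowers and dM = ln(u + v) ∕ ln u for u > 1.

```python
# Transfinite dimension dE for (1, v)-flowers and mass dimension dM for
# (u, v)-flowers with u > 1, as reconstructed above from [Rozenfeld 09].
import math
for v in (2, 3, 4, 5):
    print("(1,%d)-flower  dE =" % v, round(math.log(1 + v) / (v - 1), 4))
for (u, v) in [(2, 2), (2, 3), (3, 3)]:
    print("(%d,%d)-flower dM =" % (u, v), round(math.log(u + v) / math.log(u), 4))
```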
12.3 Volume and Surface Dimensions
In this section we study two very early definitions in [Nowotny 88] of the dimension of an infinite network G. Unlike the previous two sections of this chapter, in this section we are not studying a sequence {Gt} of networks such that Δt → ∞; rather, we are studying a single network G whose diameter is infinite. It might be that G can be viewed as the limit of a sequence of networks, e.g., an infinite rectilinear grid network embedded in ℝ² can be viewed as the limit, as K → ∞, of a K × K rectilinear grid (as studied in Sect. 11.4), but we do not assume that G is the limit of some sequence of networks. Since the two network dimensions of [Nowotny 88] are defined in terms of a limit as r → ∞, these dimensions are related to the mass dimension dM.
The pioneering paper [Nowotny 88] is not well known, and indeed has barely been mentioned in the bibliographies and reference lists of the bibliographic entries of this book.3 The motivation for the two dimensions proposed in [Nowotny 88] is a working philosophy that both physics and the corresponding mathematics are discrete, not continuous, at the “primordial” (i.e., Planck) scale. This discrete primordial substratum consists of elementary cells or modules, interacting with each other over a network of elementary interactions. The term “discrete” in this context does not necessarily imply a lattice or some other countable structure, but rather implies the absence of the usual continuum concepts [Requardt 06]. Restricting the analysis to countably infinite graphs is advantageous, since it facilitates modelling and technical analysis. The network studied in [Nowotny 88] was assumed to be dynamic and highly erratic, where even the distribution of links is dynamical. These networks could be represented using cellular automata models; however, such models typically are embedded in fixed and regular geometric arrays. In contrast, the approach of [Nowotny 88] ignored the embedding space and considered an irregular array of nodes and links which dynamically arrange themselves according to some given evolution law. The hope is that, under certain favorable conditions, the system will undergo a series of phase transitions from a disordered chaotic initial state to some kind of macroscopically ordered extended pattern, which can be associated with classical spacetime. From the discrete model, the continuum concepts of ordinary spacetime physics arise, via a kind of geometric renormalization group process, on the much coarser scale corresponding to the comparatively small energies of present high-energy physics. Spacetime possibly emerges via a dynamical process, perhaps resembling a phase transition, from a more chaotic and violent initial phase. The continuum limit is a fixed point of this process, and this idea has some “vague similarities” to the renormalization group process (Chap. 21) of statistical mechanics [Requardt 06]. Computing a dimension for this initial phase, which is discrete but perhaps with complicated interactions, appears to be extremely useful. A “collective geometric” property like dimension suggests that, after some coarse graining and/or scaling, the network displays global smoothness properties which may indicate the transition into a continuum-like macro state [Nowotny 07]. Nowotny and Requardt observed that their work is related to other research in quantum gravity, e.g., to “prophetic” work by Penrose, who introduced spin networks, which can be viewed as a type of dynamic graph.
As additional motivation for their definitions of dimension, [Nowotny 07] noted that in the renormalization and coarse graining of the discrete primordial network, more and more nodes and arcs are absorbed into the infinitesimal neighborhoods of the emerging points of the continuum. Hence a local definition of dimension in the continuum is actually a large scale concept in the network, involving practically infinitely many nodes and their interconnections. They were motivated by wondering what intrinsic global property, independent of some dimension in which the network is embedded, is relevant for the occurrence of such phenomena as phase transitions and critical behavior. They wound up with the answer that the relevant property is the number of new nodes or degrees of freedom seen as the distance from a given node increases.
Assume that G is an unweighted and undirected network whose node set N and arc set A are both countably infinite, and assume that the degree of each node is finite. For n ∈ N and for each nonnegative integer r, define
B(n, r) ≡ {x ∈ N | dist(n, x) ≤ r},  (12.19)
so B(n, r) is the set of nodes whose distance from n does not exceed r. Define
M(n, r) ≡ |B(n, r)|,  (12.20)
dV⁻(n) ≡ liminf_{r→∞} ln M(n, r) ∕ ln r,  (12.21)
dV⁺(n) ≡ limsup_{r→∞} ln M(n, r) ∕ ln r.  (12.22)
Definition 12.3 If for some n ∈ N we have dV⁻(n) = dV⁺(n), we say that G has local volume dimension dV(n) at n, where dV(n) ≡ dV⁻(n) = dV⁺(n). If also dV(n) = dV for each n in N, then we say that the volume dimension of G is dV. □
The dimension dV was called the “internal scaling dimension” in [Nowotny 88], but we prefer the term “volume dimension”. If dV exists, then for each n ∈ N we have M(n, r) ∼ r^{dV} as r → ∞. Note the similarity between the definition of dV(n) and definition (9.3) of the pointwise mass dimension of a geometric object; the difference is that (12.21) and (12.22) use a limit as r → ∞, while (9.3) uses a limit as r → 0. The volume dimension dV(n) somewhat resembles the correlation dimension dC defined by (11.8); the chief difference is that dC is defined in terms of the correlation sum (9.11), which is an average fraction of mass, where the average is over all nodes in a finite network, while dV is defined in terms of an infimum and supremum over all nodes. The volume dimension dV also somewhat resembles the mass dimension dM defined by (12.2); the chief difference is that dM is defined in terms of the network diameter, while dV is defined in terms of the number of nodes within distance r of node n.
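In practice, dV(n) and dU(n) can only be probed on a finite piece of an infinite network. The following Python sketch (my own illustration, not code from [Nowotny 88]) computes |S(n, r)| by breadth-first search and prints the finite-r quantities ln M(n, r) ∕ ln r and 1 + ln |S(n, r)| ∕ ln r, which approximate dV(n) and dU(n) only for radii well inside the truncated network; a usage example appears after Example 12.3 below.

```python
# A sketch for estimating dV(n) and dU(n) on a finite piece of an infinite
# network: breadth-first search from n gives |S(n, r)| for each r, and M(n, r)
# is the running sum.  Only radii well inside the finite piece are meaningful.
import math
from collections import deque

def sphere_sizes(adj, n, r_max):
    """adj: dict node -> list of neighbors.  Returns [|S(n,0)|, ..., |S(n,r_max)|]."""
    dist = {n: 0}
    sizes = [0] * (r_max + 1)
    sizes[0] = 1
    queue = deque([n])
    while queue:
        x = queue.popleft()
        if dist[x] == r_max:
            continue
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                sizes[dist[y]] += 1
                queue.append(y)
    return sizes

def dimension_estimates(adj, n, r_max):
    sizes = sphere_sizes(adj, n, r_max)
    M = 0
    for r, s in enumerate(sizes):
        M += s
        if r >= 2:
            dV = math.log(M) / math.log(r)               # ln M(n,r)/ln r
            dU = 1 + math.log(max(s, 1)) / math.log(r)   # 1 + ln|S(n,r)|/ln r
            print(r, M, s, round(dV, 3), round(dU, 3))
```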
The second definition of a network dimension proposed in [Nowotny 88] is based on the boundary (i.e., the surface) of B(n, r), rather than on the entire neighborhood B(n, r). For n ∈ N and for each nonnegative integer r, define
S(n, r) ≡ {x ∈ N | dist(n, x) = r},  (12.23)
so S(n, r) is the set of nodes whose distance from n is exactly r. For each n we have
B(n, r) = ∪_{j=0}^r S(n, j) and M(n, r) = Σ_{j=0}^r |S(n, j)|. Define
dU⁻(n) ≡ 1 + liminf_{r→∞} ln |S(n, r)| ∕ ln r,  (12.24)
dU⁺(n) ≡ 1 + limsup_{r→∞} ln |S(n, r)| ∕ ln r.  (12.25)
Definition 12.4 If dU⁻(n) = dU⁺(n), we say that G has surface dimension dU(n) at n, where dU(n) ≡ dU⁻(n) = dU⁺(n). If for each n in N we have dU(n) = dU, then we say that the surface dimension of G is dU. □
This dimension was called the “connectivity dimension” in [Nowotny 88], since “it reflects to some extent the way the node states are interacting with each other over larger distances via the various bond sequences connecting them”, but we prefer the term “surface dimension”.4 If dU exists, then for each n ∈ N we have |S(n, r)| ∼ r^{dU−1} as r → ∞. The volume dimension dV is “rather a mathematical concept and is related to well-known dimensional concepts in fractal geometry”, while the surface dimension dU “seems to be a more physical concept as it measures more precisely how the graph is connected and how nodes can influence each other”. The values of dV and dU are identical for “generic” networks, but differ on certain “exceptional” networks [Nowotny 88].
Example 12.1 Turning momentarily to Euclidean geometry in ℝ², for a circle with center x and radius r the quantity analogous to M(n, r) is area(x, r) = πr². We have lim_{r→∞} ln area(x, r) ∕ ln r = 2. The quantity analogous to |S(n, r)| is area(r) − area(r − 𝜖), where 𝜖 is a small positive number. Since area(r) − area(r − 𝜖) ≈ 2πr𝜖, we have lim_{r→∞} ln[area(r) − area(r − 𝜖)] ∕ ln r = 1, so this limit and the limit defining the volume dimension differ by 1, as reflected by the constant 1 added in (12.24). □
Example 12.2 Let G be the infinite graph whose nodes lie on a straight line at locations ⋯ , − 3, − 2, − 1, 0, 1, 2, 3, ⋯ , with an arc between each pair of consecutive integers. Then M(n, r) = 2r + 1 for each n and r, so dV = lim_{r→∞} ln(2r + 1) ∕ ln r = 1. As for the surface dimension, we have |S(n, r)| = 2 for each n and each r ≥ 1, so dU = 1 + lim_{r→∞} ln 2 ∕ ln r = 1. □
Example 12.3 Let G be the infinite 2-dimensional rectilinear lattice whose nodes have integer coordinates, and where node (i, j) is adjacent to (i − 1, j), (i + 1, j), (i, j − 1), and (i, j + 1). Using the L1 (i.e., Manhattan) metric, the distance from the origin to node n = (n1, n2) is |n1| + |n2|. For integer r ≥ 1, the number of nodes at a distance r from a given node n is 4r [Shanker 07b], so by (12.20) we have
M(n, r) = 1 + Σ_{k=1}^r 4k = 2r² + 2r + 1.
Hence
dV(n) = lim_{r→∞} ln(2r² + 2r + 1) ∕ ln r = 2 and dU(n) = 1 + lim_{r→∞} ln(4r) ∕ ln r = 2.
□
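As a usage example, the BFS sketch given earlier in this section can be applied to a finite (2K + 1) × (2K + 1) piece of the lattice of Example 12.3; the truncation size K and radius limit are arbitrary illustrative choices.

```python
# Applying the earlier BFS sketch to a (2K+1) x (2K+1) piece of the infinite
# 2-dimensional lattice of Example 12.3, centered at the origin.  For r <= K
# the printed M(n, r) equals 2r**2 + 2r + 1, and both finite-r estimates
# decrease slowly toward 2 as r grows.
K = 60
adj = {}
for i in range(-K, K + 1):
    for j in range(-K, K + 1):
        adj[(i, j)] = [(i + di, j + dj)
                       for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if abs(i + di) <= K and abs(j + dj) <= K]
dimension_estimates(adj, (0, 0), r_max=40)
```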
Exercise 12.2 Let G be an infinite rooted tree such that each node spawns k child nodes. Calculate dV(n) and dU(n). □
A construction and corresponding analysis in [Nowotny 88] generates an infinite network for which dV exists but dU does not exist. Thus the existence of the volume dimension dV, and its numerical value, do not provide much information about the behavior of the surface sizes |S(n, r)|, although the inequality
dU⁻(n) ≤ dV⁻(n) ≤ dV⁺(n) ≤ dU⁺(n)
is valid for all n. On the other hand, the following result proves that the existence of the surface dimension at n does imply the existence of the volume dimension at n, and the equality of these values.
Theorem 12.1 ([Nowotny 88], Lemma 4.11) If dU(n) exists and if dU(n) > 1, then dV(n) exists and dV(n) = dU(n).
Proof Let d = dU(n), and let 𝜖 > 0 satisfy d − 1 > 𝜖. Since dU(n) exists, there exists a value r0 such that for r ≥ r0 we have
r^{d−1−𝜖} ≤ |S(n, r)| ≤ r^{d−1+𝜖}.
Defining
C ≡ M(n, r0 − 1),
for r ≥ r0 we have
C + Σ_{j=r0}^r j^{d−1−𝜖} ≤ M(n, r),  (12.26)
M(n, r) ≤ C + Σ_{j=r0}^r j^{d−1+𝜖}.  (12.27)
Now we provide a lower bound for the sum on the left-hand side and an upper bound for the sum on the right-hand side by replacing them with integrals.
We now have
C + [r^{d−𝜖} − (r0 − 1)^{d−𝜖}] ∕ (d − 𝜖) ≤ M(n, r) ≤ C + [(r + 1)^{d+𝜖} − (r0)^{d+𝜖}] ∕ (d + 𝜖),
which implies
(d − 𝜖) ln r − ln(d − 𝜖) + ln(1 + c1 ∕ r^{d−𝜖}) ≤ ln M(n, r) ≤ (d + 𝜖) ln(r + 1) − ln(d + 𝜖) + ln(1 + c2 ∕ (r + 1)^{d+𝜖}),
where c1 and c2 are constants that do not depend on r. Since the arguments of the second logarithm on each side are uniformly bounded for each r, and since
lim_{r→∞} ln(r + 1) ∕ ln r = 1,
we can find a value r1, where r1 ≥ r0, such that whenever r ≥ r1 we have
ln M(n, r) ∕ ln r ≥ d − 2𝜖
and
ln M(n, r) ∕ ln r ≤ d + 2𝜖.
Therefore for r ≥ r1 we have
|ln M(n, r) ∕ ln r − d| ≤ 2𝜖,
which establishes the result. □
The following lemma shows that, under a suitable condition, the convergence of a subsequence of {ln M(n, r) ∕ ln r} implies convergence of the entire sequence.
Lemma 12.1 ([Nowotny 88], Lemma 4.9) Suppose that for some node n ∈ N and for some number c ∈ (0, 1), the increasing subsequence {rm} of positive integers satisfies the condition rm ∕ rm+1 ≥ c for all sufficiently large m. Then
liminf_{r→∞} ln M(n, r) ∕ ln r = liminf_{m→∞} ln M(n, rm) ∕ ln rm,  (12.28)
and the same equality holds with liminf replaced by limsup, and with M(n, r) replaced by |S(n, r)|.
Proof Let {rm} and c ∈ (0, 1) satisfy the conditions of the lemma. Let r be an arbitrary positive integer, and let m be such that rm ≤ r ≤ rm+1. Then M(n, rm) ≤ M(n, r) ≤ M(n, rm+1). Since rm ∕ rm+1 ≥ c, then (r ∕ rm) ≤ (rm+1 ∕ rm) ≤ 1 ∕ c and (r ∕ rm+1) ≥ (rm ∕ rm+1) ≥ c. Hence
[ln M(n, rm) ∕ ln rm] · [ln rm ∕ ln r] ≤ ln M(n, r) ∕ ln r ≤ [ln M(n, rm+1) ∕ ln rm+1] · [ln rm+1 ∕ ln r],
and since rm ≤ r ≤ rm+1 ≤ rm ∕ c, both ln rm ∕ ln r and ln rm+1 ∕ ln r approach 1 as r → ∞. Taking the liminf as r → ∞ yields (12.28).
The same reasoning proves the assertion for the limsup, and for |S(n, r)| in place of M(n, r). □
The following theorem shows that the volume dimension is “pretty much stable under any local changes” [Nowotny 88].
Theorem 12.2 ([Nowotny 88], Lemma 4.10) Let n be a node in G and let h be a positive number. Then the insertion of arcs between arbitrarily many pairs of nodes (x, y), subject to the constraint that dist(x, y) ≤ h, does not change dV⁻(n) and dV⁺(n).
Proof Let G be the original network, with distances dist(x, y) between nodes x and y. Let G⁺ be the modified network, with arcs added between arbitrarily many node pairs (x, y) such that dist(x, y) ≤ h. Let dist⁺(x, y) be the distance in G⁺ between nodes x and y, and define M⁺(n, r) ≡ |{x ∈ N | dist⁺(n, x) ≤ r}|. If for some x ∈ N we have dist(n, x) ≤ ⌊r∕h⌋, then the extra arcs in G⁺ imply dist⁺(n, x) ≤ ⌊r∕h⌋. Thus
M(n, ⌊r∕h⌋) ≤ M⁺(n, ⌊r∕h⌋).  (12.29)
On the other hand, suppose that for some x ∈ N we have dist⁺(n, x) ≤ ⌊r∕h⌋. Then dist(n, x) ≤ h⌊r∕h⌋, since the extra arcs in G⁺ mean that a single arc in G⁺ on the shortest path between nodes n and x corresponds to at most h arcs in G on a path between n and x. Since h⌊r∕h⌋ ≤ r we obtain
M⁺(n, ⌊r∕h⌋) ≤ M(n, r).
For all sufficiently large r we have ⌊r∕h⌋ > r∕(2h), so ln ⌊r∕h⌋ ∕ ln r → 1 as r → ∞, and thus
liminf_{r→∞} ln M(n, r) ∕ ln r = liminf_{r→∞} ln M⁺(n, ⌊r∕h⌋) ∕ ln ⌊r∕h⌋ = liminf_{r→∞} ln M⁺(n, r) ∕ ln r,
where the final equality follows from Lemma 12.1. Hence dV⁻(n) is the same in G and in G⁺. The proof for dV⁺(n) is the same. □
Exercise 12.3 How can we reconcile Theorem 12.2 with the Watts–Strogatz results (Sect. 2.2) on the impact of adding some shortcut arcs to a network? □
12.4 Conical Graphs with Prescribed Dimension
Let d be an arbitrary number such that 1 < d ≤ 2. A method to construct a conical graph such that dV(n⋆) = d, where n⋆ is a specially chosen node, was provided in [Nowotny 88]. Graphs with higher volume dimension can be constructed using a nearly identical method. The construction is illustrated in Fig. 12.7 for the choice d = 5∕3. There are two types of nodes in this figure, small black circles and larger blue squares with a black border. Both types of nodes are nodes of the conical graph, but we distinguish these two types to facilitate calculating dV(n⋆). From the figure we see that boxes connect level m to level m + 1, where the lower left and lower right corner of each box is a blue square, and the upper left and upper right corner of each box is a black circle. (These boxes have nothing to do with the boxes used to compute the box counting dimension.) For m ≥ 2, there are ⌊(2m − 1)^{d−1}⌋ boxes connecting level m to level m + 1, so the number of blue squares at level m is 1 + ⌊(2m − 1)^{d−1}⌋. Each blue square at level m has distance 2m − 1 to node n⋆. Thus, for m ≥ 2, the set of blue squares at level m is the set S(n⋆, 2m − 1), and
|S(n⋆, 2m − 1)| = 1 + ⌊(2m − 1)^{d−1}⌋.
To compute dV(n⋆), we first determine dU(n⋆), the surface dimension at n⋆. Since
(2m − 1)^{d−1} ≤ 1 + ⌊(2m − 1)^{d−1}⌋ ≤ 2(2m − 1)^{d−1},
then
ln (2m − 1)^{d−1} ∕ ln(2m − 1) ≤ ln |S(n⋆, 2m − 1)| ∕ ln(2m − 1) ≤ ln [2(2m − 1)^{d−1}] ∕ ln(2m − 1).
Since the first and third limits (as m → ∞) are both equal to d − 1, then so is the second limit. That is,
lim_{m→∞} ln |S(n⋆, 2m − 1)| ∕ ln(2m − 1) = d − 1.  (12.31)
At this point we have shown that the limit exists along the subsequence of distances {rm}, where rm ≡ 2m − 1. Since convergence of a subsequence does not imply convergence of the entire sequence, it must be shown that the limit exists for all sufficiently large distances, not just along a subsequence of distances. We require a lemma.
Figure 12.7
Construction of a conical graph for which dV(n⋆) = 5∕3
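A numerical check of the surface counts used in this construction; this evaluates the counts |S(n⋆, 2m − 1)| = 1 + ⌊(2m − 1)^{d−1}⌋ directly and does not build the graph itself.

```python
# With rm = 2m - 1 and |S(n*, rm)| = 1 + floor(rm**(d - 1)), the ratio
# ln|S|/ln(rm) approaches d - 1, so dU(n*) = 1 + (d - 1) = d.
# (A numerical check of (12.31), not a construction of the conical graph.)
import math
d = 5 / 3
for m in (10, 100, 1000, 10000):
    rm = 2 * m - 1
    S = 1 + math.floor(rm ** (d - 1))
    print(rm, round(math.log(S) / math.log(rm), 4), "target", round(d - 1, 4))
```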
Lemma 12.2 ([Nowotny 88], Lemma 4.8) If for some x, y ∈ N we have dist(x, y) < ∞, then dV⁻(x) = dV⁻(y) and dV⁺(x) = dV⁺(y).
Proof Let h = dist(x, y). If for some node z we have dist(y, z) ≤ r − h, then dist(x, z) ≤ dist(x, y) + dist(y, z) ≤ h + (r − h) = r, so B(y, r − h) ⊆ B(x, r). Similarly, if for some node z we have dist(x, z) ≤ r, then dist(y, z) ≤ dist(y, x) + dist(x, z) ≤ h + r, so B(x, r) ⊆ B(y, r + h). Hence
M(y, r − h) ≤ M(x, r) ≤ M(y, r + h),
and since ln(r − h) ∕ ln r → 1 and ln(r + h) ∕ ln r → 1 as r → ∞, we obtain dV⁻(x) = dV⁻(y). Similarly, we obtain dV⁺(x) = dV⁺(y). □
Theorem 12.3 ([Nowotny 88], Section 5.1) For the above conical graph construction, we have dV = d.
Proof For the subsequence rm = 2m − 1 the conditions of Lemma 12.1 are satisfied with c = 1∕2, since rm ∕ rm+1 = (2m − 1) ∕ (2m + 1) ≥ 1∕2 for m ≥ 2. By (12.31) and Lemma 12.1 we have
lim_{r→∞} ln |S(n⋆, r)| ∕ ln r = d − 1,  (12.32)
so by Definition 12.4 we have dU(n⋆) = 1 + (d − 1) = d. Since d > 1, it follows from Theorem 12.1 that dV(n⋆) = d. Since the distance between each pair of nodes in the graph is finite, it follows from Lemma 12.2 that for each node n we have dV(n) = d. Thus dV = d. □
Exercise 12.4 Draw the conical graph for d = 1.01 and for d = 1.99. □
Connections between the above results and some results by Gromov on geometric group theory were explored in [Requardt 06]. The application of these techniques provides preliminary answers to such questions as characterizing the set of graphs having the same dimension.
12.5 Dimension of a Percolating Network
A percolating network (also called a percolation network) is a fundamental model for describing geometrical features of random systems. There are two main types of percolating networks: site and bond. Consider an infinite lattice in ℝ^E. The lattice need not be rectilinear; other possible geometries are triangular, honeycomb, and diamond. In a site-percolating network (SPN), each lattice point (called a site) is occupied at random with probability p. A site is not a node; a site is a position on a lattice, which may be occupied (populated) or empty. Two sites are connected by a bond (i.e., an arc) if they are adjacent along a principal direction (e.g., for a rectilinear lattice in ℝ³, if they are adjacent along the x, y, or z axis). In a bond-percolating network (BPN), all lattice sites are populated, and bonds between adjacent sites are occupied randomly with probability p. The occupation probability p for sites in an SPN, or for bonds in a BPN, has the same role as the temperature in thermal critical phenomena [Nakayama 94].
As p increases from 0, connected clusters of sites form. As p continues to increase, a critical value pc is reached. For p < pc, only finite clusters (i.e., clusters of finite size) exist. For p > pc, an infinite cluster (i.e., a cluster of infinite size) is present, as well as finite clusters. The critical value pc for the BPN is always smaller than the critical value for the SPN, due to the difference in the short-range geometrical structure of the two types of lattices: a bond has more nearest neighbors than a site. For example, for a 2-dimensional rectilinear lattice, a given site has 4 adjacent sites, while a given bond is attached to 6 other bonds (3 at each end).
A 1976 paper of de Gennes was credited by Nakayama et al. [Nakayama 94] with sparking interest in the dynamics of a percolating network. That paper posed the following question: an ant parachutes down onto a site on the percolating network and executes a random walk. What is the mean square distance the ant traverses as a function of time?5 Stanley, in 1977, was the first to recognize that percolating networks exhibit self-similarity and could be characterized by a fractal dimension. Percolating networks have been used to describe a large variety of physical and chemical phenomena, such as gelation processes and conduction in semiconductors. Percolation theory also forms the basis for studies of the flow of liquids or gases through porous media, and has been applied to the stability of computer networks under random failures and under denial of service attacks, to the spread of viruses in computer networks and epidemics in populations, and to the immunization of these networks [Cohen 04].
Let m denote the size of a cluster, i.e., the number of occupied sites in the cluster. (Here “m” suggests “mass”.) Although m depends on p, for brevity we write m rather than m(p). Let sm be the average Euclidean distance between two sites in a cluster of size m. At p = pc we have, for some exponent dpn (where “pn” denotes “percolating network”),
m ∼ (sm)^{dpn}.
The exponent dpn is the “fractal dimension” or “Hausdorff dimension” of the percolating network [Nakayama 94]. For a percolating network in ℝ^E, for E = 2 we have dpn = 91∕48, for E = 4 we have dpn ≈ 3.2, for E = 5 we have dpn ≈ 3.7, and for E ≥ 6 we have dpn = 4. For E = 3, the computed values of dpn, as reported in [Nakayama 94], span the range 0.875–2.48.
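For readers who want to see the scaling m ∼ (sm)^{dpn} emerge numerically, the following rough Monte Carlo sketch builds site-percolation clusters on a finite square lattice at p ≈ pc ≈ 0.5927 and fits the log–log slope of cluster mass against radius of gyration. The lattice size, cluster-size cutoff, and the use of the radius of gyration as a proxy for sm are illustrative assumptions, and finite-size effects make the estimate rough.

```python
# Rough sketch of the mass-radius scaling for 2-D site percolation near p_c
# (about 0.5927 on the square lattice): record each cluster's size m and
# radius of gyration s, then fit the log-log slope.  Expect a value in the
# vicinity of 91/48 = 1.896, with sizable finite-size noise.
import math, random
from collections import deque

def cluster_stats(L=400, p=0.5927, seed=1):
    rng = random.Random(seed)
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    seen = [[False] * L for _ in range(L)]
    stats = []                                   # (size, radius of gyration)
    for i in range(L):
        for j in range(L):
            if not occ[i][j] or seen[i][j]:
                continue
            sites, queue = [], deque([(i, j)])
            seen[i][j] = True
            while queue:                         # flood fill one cluster
                x, y = queue.popleft()
                sites.append((x, y))
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    u, v = x + dx, y + dy
                    if 0 <= u < L and 0 <= v < L and occ[u][v] and not seen[u][v]:
                        seen[u][v] = True
                        queue.append((u, v))
            m = len(sites)
            if m < 50:                           # ignore very small clusters
                continue
            cx = sum(x for x, _ in sites) / m
            cy = sum(y for _, y in sites) / m
            rg = math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in sites) / m)
            stats.append((m, rg))
    return stats

stats = cluster_stats()
xs = [math.log(rg) for _, rg in stats]
ys = [math.log(m) for m, _ in stats]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
print("estimated d_pn:", round(slope, 3))
```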
The chemical distance between two sites u and v in a percolating network is the minimal number of hops between u and v, subject to the constraint that each hop must be an existing bond between connected sites. The chemical dimension of a percolating network is dch if the mass m within chemical distance sc obeys the scaling
m ∼ (sc)^{dch}.
For 2-dimensional percolating networks, dch ≈ 1.768; for 3-dimensional percolating networks, dch ≈ 1.885 [Nakayama 94]. Finally, let chem(L) denote the chemical distance between two sites that are separated by the Euclidean distance L. Then
chem(L) ∼ L^{dpn ∕ dch},
so dpn ∕ dch is the fractal dimension that relates the chemical distance between two sites in a lattice to the Euclidean distance between the two sites [Nakayama 94]. The fractal dimensions of percolating networks were also studied in [Cohen 04, Cohen 08].