Prologue. "…Thus logics and mathematics in the central nervous system, when viewed as languages, must structurally be essentially different from those languages to which our common experience refers. … whatever the system is, it cannot fail to differ considerably from what we consciously and explicitly considered as mathematics." (von Neumann [1])
Von Neumann emphasized the need for new mathematical disciplines in order to understand and interpret the language of the brain [1]. He indicated the potential directions of such new mathematics through his work on self-reproducing cellular automata and finite mathematics. Owing to his early death in 1957, he was unable to participate in the development of relevant new theories, including morphogenesis, pioneered by Turing [2] and summarized by Katchalsky [3]. Prigogine [4] developed this approach further by modeling the emergence of structure in open chemical systems operating far from thermodynamic equilibrium. He called the resulting patterns dissipative structures, because they emerge through the interactions of particles feeding on energy, with a local reduction in entropy. Haken [5] developed the field of synergetics, the study of pattern formation in lasers by particles whose interactions create order parameters by which the particles govern themselves in circular causality.
Principles of self-organization and metastability have been introduced to model cognition and brain dynamics [6–8]. Recently, the concept of self-organized criticality (SOC) has captured the attention of neuroscientists [9]. There is ample empirical evidence of the cortex conforming to the self-stabilized, scale-free dynamics of the sandpile during the existence of quasi-stable states [10–12]. However, the model cannot produce the emergence of orderly patterns within a domain of criticality [13, 14]. In [15] SOC is described as “pseudocritical” and it is suggested that self-organization be complemented with more elaborate, adaptive approaches.
Extending previous studies, we propose to treat cortices as dissipative thermodynamic systems that, by homeostasis, hold themselves near a critical level of activity that is far from equilibrium but steady: a pseudoequilibrium. We utilize Freeman's neuroscience insights manifested in the hierarchical brain model: the K (Katchalsky) sets [3, 16]. Originally, K-sets were described mathematically using ordinary differential equations (ODEs) with distributed parameters and with stochastic differential equations (SDEs) using additive and multiplicative noise [17, 18]. This approach has produced significant results, but certain shortcomings have been pointed out as well [19–21]. Calculating stable solutions for large matrices of nonlinear ODEs and SDEs closely corresponding to chaotic electrocorticography (ECoG) activity is prohibitively difficult and time consuming on both digital and analog platforms [22]. In addition, there are unsolved theoretical issues in constructing solid mathematics with which to bridge the difference across spatial and temporal scales between microscopic properties of single neurons and macroscopic properties of vast populations of neurons [23, 24].
In the past decade, the neuropercolation approach has proved to be an efficient tool to address these shortcomings by implementing K-sets using concepts of discrete mathematics and random graph theory (RGT) [25]. Neuropercolation is a thermodynamics-based random cellular neural network model, which is closely related to cellular automata (CA), the field pioneered by von Neumann, who anticipated the significance of CA in the context of brain-like computing [26]. This study is based on applying neuropercolation to systematically implement Freeman's principles of neurodynamics. Brain dynamics is viewed as a sequence of intermittent phase transitions in an open system, with synchronization–desynchronization effects demonstrating symmetry breaking demarcated by spatiotemporal singularities [27–30].
This work starts with the description of the basic building blocks of neurodynamics. Next, we develop a hierarchy of neuropercolation models with increasing complexity in structure and dynamics. Finally, we employ neuropercolation to describe critical behavior of the brain and to interpret experimentally observed ECoG/EEG (electroencephalography) dynamics manifesting learning and higher cognitive functions. We conclude that criticality is a key aspect of the operation of the brain and a basic attribute of intelligence in animals and in man-made devices.
We propose a hierarchical approach to spatiotemporal neurodynamics, based on K-sets. Low-level K-sets were introduced by Freeman in the 1970s, named in honor of Aharon Katchalsky, an early pioneer of neural dynamics [16]. K-sets are multiscale models, describing the increasing complexity of structure and dynamical behavior. K-sets are mesoscopic models, representing an intermediate level between microscopic neurons and macroscopic brain structures. The basic K0 set describes the dynamics of a cortical microcolumn with about $10^4$ neurons. K-sets are topological specifications of the hierarchy of connectivity in neuron populations. When first introduced, K-sets were modeled using systems of nonlinear ODEs; see [16, 21]. K-dynamics predict the oscillatory waveforms that are generated by neural populations. K-sets describe the spatial patterns of phase and amplitude of the oscillations. They model observable fields of neural activity comprising EEG, local field potential (LFP), and magnetoencephalography (MEG). K-sets form a hierarchy of cell assemblies with the following components [31]: K0 sets, noninteracting populations of neurons; KI sets, interacting K0 sets that are either all excitatory or all inhibitory; KII sets, interacting excitatory and inhibitory KI sets, capable of oscillations; KIII sets, several interacting KII sets, modeling individual sensory cortices; KIV sets, interacting KIII sets, modeling hemisphere-wide cooperation among cortical areas; and KV sets, proposed to model the highest-level neocortical integration underlying cognition.
K-sets are complex dynamical systems capable of modeling classification in various cortical areas, typically having hundreds or thousands of degrees of freedom. KIII sets have been applied to solve various classification and pattern recognition problems [21, 32, 33]. In early applications, KIII sets exhibited extreme sensitivity to model parameters, which prevented their broad use in practice [32]. In the past decade, systematic analysis has identified regions of robust performance, and stability properties of K-sets have been derived [34, 35]. Today, K-sets are used in a wide range of applications, including detection of chemicals [36], classification [32, 37], time series prediction [38], and robot navigation [39, 40].
The hierarchical K-model-based approach is summarized in the 10 building blocks of neurodynamics; see [16, 23, 24]:

1. Nonzero point attractor generated by a state transition of an excitatory population starting from a point attractor with zero activity.
2. Emergence of damped oscillations through negative feedback between excitatory and inhibitory neural populations.
3. State transition from a point attractor to a limit cycle attractor that regulates the steady-state oscillation of a mixed excitatory-inhibitory cortical population.
4. Genesis of broadband, chaotic background activity by combined negative and positive feedback among three or more mixed populations.
5. Distributed waves of chaotic dendritic activity that carry spatial patterns of amplitude modulation (AM).
6. Increase in nonlinear feedback gain, driven by sensory input, that destabilizes the background activity and leads to the emergence of an AM pattern as the first step of perception.
7. Embodiment of meaning in AM patterns of neural activity, shaped by synaptic interactions that have been modified through learning.
8. Attenuation of microscopic sensory-driven noise and enhancement of macroscopic AM patterns by divergent-convergent cortical projections.
9. Gestalt formation through the convergence of multisensory AM patterns, supporting preafference and basic intentionality.
10. Global integration of frames at theta rates through neocortical phase transitions, representing high-level intentionality and, ultimately, consciousness.
Principles 1 through 7 describe the steps toward basic sensory processing, including pattern recognition, classification, and prediction, which are the functions of KIII models. Principles 8 and 9 reflect the generation of basic intentionality using KIV sets. Principle 10 expresses the route to high-level intentionality and, ultimately, consciousness. The greatest challenge in modeling cortical dynamics is the need to meet two seemingly irreconcilable requirements. One is to model the specificity of neural action, even to the level at which a single neuron can be shown to be capable of influencing brain output. The other is to model the generality by which neural activity is synchronized and coordinated throughout the brain during intentional behavior. Various sensory cortices exhibit great similarity in their temporal dynamics, including the presence of spontaneous background activity, power-law distributions of spatial and temporal power spectral densities, the repeated formation of AM spatial patterns with carrier frequencies in the beta and gamma ranges, and frame recurrence rates in the theta range.
Models based on ODEs and SDEs have been used successfully to describe the mesoscopic dynamics of cortical populations for autonomous robotics [40, 41]. However, such approaches suffer from inherent shortcomings. They falter in attempts to model the entire temporal and spatial range, including the transitions between levels, which appear to take place very near to criticality. Neuropercolation and RGT offer a new approach to describing critical behavior and the related phase transitions in the brain. It is shown that criticality in the neuropil is characterized by a critical region instead of a singular critical point, and that the trajectory of the brain as a dynamical system crosses the critical region from a less organized (gaseous) phase to a more organized (liquid) phase during input-induced destabilization, and vice versa [42–44]. Neuropercolation is able to simulate these important results from mesoscopic ECoG/EEG recordings across large spatial and temporal scales, as introduced in this chapter.
We utilize the powerful tools of RGT developed over the past 50 years [45–47] to establish rigorous models of brain networks. Our model incorporates CA and percolation theory in random graphs that are structured in accordance with cortical architectures. We use the hierarchy of interactive populations in networks, as developed in Freeman K-models [16], but replace the differential equations with probability distributions over the graph that evolve in time. The corresponding mathematical object is called neuropercolation [25]. Neuropercolation theory provides a suitable mathematical approach to describe phase transitions and critical phenomena in large-scale networks. It has potential advantages over ODEs and partial differential equations (PDEs) when modeling spatiotemporal transitions in the cortex, as differential equations assume a degree of smoothness in the described phenomena that is not well suited to the observed sudden changes in neurodynamics. Neuropercolation is a natural mathematical domain for modeling collective properties of brain networks, especially near critical states, when the behavior of the system changes abruptly with the variation of some control parameter.
Neuropercolation considers populations of cortical neurons that sustain their metastable state by mutual excitation, with stability guaranteed by the neural refractory periods. Nuclear physicists use the concept of criticality to denote the threshold for the ignition of a sustained nuclear chain reaction, that is, fission. The critical state of a nuclear chain reaction is achieved by a delicate balance between the material composition of the reactor and its geometrical properties; the criticality condition is expressed as the equality of the geometric buckling and the material buckling. Chain reactions in nuclear processes are designed to satisfy strongly linear operational conditions, in order to assure the stability of the underlying chain reaction. That usage of criticality fails to include the self-regulatory processes of systems with nonlinear homeostatic feedback that characterize cerebral cortices.
A key question is how the cortex transits between gas-like randomness and liquid-like order near the critical state. We have developed thermodynamic models of the cortex [43, 44] that postulate two phases of neural activity: vapor-like and liquid-like. In the vapor-like phase, the neurons are uncoupled and maximally individuated, which is the optimal condition for processing microscopic sensory information at low density. In the liquid-like phase, the neurons are strongly coupled and thereby locked into briefly stable macroscopic activity patterns at high density, such that every neuron transmits to and receives from all other neurons by virtue of small-world effects [48, 49]. Local 1/f fluctuations have the form of phase modulation (PM) patterns that resemble droplets in vapor. Large-scale, spatially coherent AM patterns emerge from and dissolve into this random background activity, but only on receiving a conditioned stimulus (CS). They do so by spontaneous symmetry breaking [42], in a phase transition that resembles the condensation of a raindrop, in that it requires a large distribution of components, a source of transition energy, a singularity in the dynamics, and a connectivity that can sustain interactions over relatively immense correlation distances. We conclude that the background activity at the pseudoequilibrium state conforms to SOC, that the fractal distribution of phase patterns corresponds to that of avalanches, and that the formation of a percept from sensory input occurs through a phase transition from a gas-like, disorganized, low-density phase to a liquid-like, high-density phase [43].
In large networks, such as the cortex, organized dynamics emerges from the noisy and chaotic behavior of a large number of microscopic components. Such systems can be modeled as graphs, in which neurons become vertices. The activity of every vertex evolves in time depending on its own state, the states of its neighbors, and possibly some random influence. This leads to the general formulation of random CA. In a basic two-state cellular automaton, the state of any lattice point is either active or inactive. The lattice is initialized with some (deterministic or random) configuration. The states of the lattice points are updated (usually synchronously) on the basis of some (deterministic or probabilistic) rule that depends on the activations of their neighborhoods. For related general concepts, see CA such as Conway's Game of Life and cellular neural networks, as well as thermodynamic systems like the Ising model [50–53]. Neuropercolation models develop neurobiologically motivated generalizations of CA models and incorporate the following major conditions, each elaborated below: (i) the update rules are probabilistic (noisy) rather than deterministic; (ii) beyond local neighborhoods, the graphs contain randomly rewired long-range connections, reflecting long axonal pathways; and (iii) the vertex populations include both excitatory and inhibitory interactions.
A vertex $v_i$ of a graph is in one of two states, $\{0, 1\}$, and is influenced via its edges by its neighbors. An edge from $v_j$ to $v_i$, $e_{ji}$, either excites or inhibits. Excitatory edges project the states of neighbors, and inhibitory edges project the opposite states of neighbors: 0 if the neighbor's state is 1, and 1 if it is 0. The state of the vertex, influenced through its edges, is determined by the majority rule: the more neighbors are active, the higher the chance for the vertex to be active, and the more neighbors are inactive, the higher the chance for the vertex to be inactive. At time $t = 0$, every vertex is randomly set to 0 or 1. Then, for $t \geq 1$, the majority rule is applied simultaneously over all vertices. A vertex is influenced by the actual state of its neighbor $v_j$ whenever a random variable $r$ is less than the strength $w_{ji}$ of the influencing excitatory edge; otherwise, the vertex is influenced by the opposite of the neighbor's state. If the edge is inhibitory, the projection is inverted: the vertex receives 0 when the neighbor's state is 1, and 1 when it is 0. Then, a vertex takes the state of the most common influence, if there is one; otherwise, the vertex state is randomly set to 0 or 1 (Figure 7.1a).
Formula for the majority rule:
$$a_i(t+1) = \begin{cases} 1, & \text{if } \sum_{j} s_{ji}(t) > d_i/2, \\ 0, & \text{if } \sum_{j} s_{ji}(t) < d_i/2, \\ \xi, & \text{otherwise,} \end{cases}$$
where $a_i(t)$ is the state of vertex $v_i$ at time $t$, $s_{ji}(t) \in \{0, 1\}$ is the influence projected to $v_i$ by its neighbor $v_j$ as described above, $d_i$ is the number of incoming edges of $v_i$, and $\xi$ is a random binary value that breaks ties.
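To make the rule concrete, the following Python fragment gives a minimal sketch of a single-vertex update. It is an illustration rather than the implementation used in the reported simulations; the function names and the uniform edge strength parameter w are our assumptions.

```python
import random

def influence(neighbor_state, excitatory, w, rng=random):
    """Project a neighbor's state along an edge of strength w.

    With probability w the edge transmits the neighbor's actual state;
    otherwise it transmits the opposite state. Inhibitory edges invert
    whatever they transmit.
    """
    s = neighbor_state if rng.random() < w else 1 - neighbor_state
    return s if excitatory else 1 - s

def update_vertex(states, neighbors, edge_types, w=0.9, rng=random):
    """Probabilistic majority rule for one vertex.

    states     : dict mapping vertex id -> 0/1 at the previous step
    neighbors  : ids of vertices with edges into this vertex
    edge_types : one boolean per incoming edge (True = excitatory)
    """
    votes = [influence(states[n], exc, w, rng)
             for n, exc in zip(neighbors, edge_types)]
    ones = sum(votes)
    zeros = len(votes) - ones
    if ones > zeros:
        return 1
    if zeros > ones:
        return 0
    return rng.randint(0, 1)  # tie: choose at random
```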
Lattice-like graphs are built by randomly reconnecting some edges of graphs set on a regular two-dimensional grid folded into a torus. In the basic random rewiring strategy applied here, directed edges are unplugged from randomly chosen vertices. At that point, the graph has vertices lacking an incoming edge and vertices lacking an outgoing edge. To preserve the vertex degrees of the original graph, the set of unplugged edges is returned to the vertices with missing edges in random order: each such edge is pointed to a vertex missing an incoming edge and projected from a vertex missing an outgoing edge (Figure 7.1b). Such a two-dimensional graph models a KI set with a homogeneous population of excitatory or inhibitory neurons.
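The unplug-and-replug strategy described above can be sketched in a few lines. This is an illustrative reconstruction, with edges stored as (source, target) pairs; note that the random replugging may occasionally create self-loops or parallel edges.

```python
import random

def rewire(edges, fraction, rng=None):
    """Randomly rewire a fraction of directed edges while preserving
    every vertex's in-degree and out-degree.

    edges : list of (source, target) directed edges, modified in place
    """
    rng = rng or random.Random()
    k = int(fraction * len(edges))
    picked = rng.sample(range(len(edges)), k)   # edges to unplug
    sources = [edges[i][0] for i in picked]     # vertices losing an outgoing edge
    targets = [edges[i][1] for i in picked]     # vertices losing an incoming edge
    rng.shuffle(targets)                        # replug in random order
    for i, idx in enumerate(picked):
        edges[idx] = (sources[i], targets[i])
    return edges
```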
It will be useful to reformulate the basic subgraph on a regular grid, as shown in Figure 7.2. We list all vertices along a circle and connect the corresponding vertices to obtain the circular representation in Figure 7.2b. This circle is unfolded into a linear band by cutting it at one location, as shown in Figure 7.2c. Such a representation is very convenient for the design of the rewiring algorithm. Note that this circular representation is not completely identical to the original lattice, owing to the different edges created when folding the lattice into a torus. We have conducted experiments to compare the two representations and found that the difference is minor and does not affect the basic conclusions. An important advantage of this band representation is that it can be generalized to higher dimensions by simply adding suitable new edges between the vertices. For example, in 2D we have four direct neighbors and thus four connecting edges; in 3D, we have six connecting edges, and so on. Generalization to higher dimensions is not used in this work, as the 2D model is a reasonable approximation of the layered cortex.
We construct a double-layered graph by connecting two lattices (see Figure 7.3(a)). The top layer, $G_e$, is excitatory, while the bottom layer, $G_i$, is inhibitory. The excitatory subgraph $G_e$ projects to the inhibitory subgraph $G_i$ through edges that excite, while the inhibitory layer projects toward $G_e$ with edges that inhibit. An excitatory edge from $G_e$ influences a vertex of $G_i$ with 1 when active and with 0 when inactive. Conversely, an inhibitory edge from $G_i$ influences a vertex of $G_e$ with 0 when active and with 1 when inactive.
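A sketch of this double-layer construction is shown below. It is a minimal reconstruction under two assumptions of ours: the layers are square n x n tori, and an edge is excitatory exactly when its source vertex lies in the excitatory layer.

```python
import itertools

def torus_edges(n):
    """Directed nearest-neighbor edges of an n x n lattice folded into a torus."""
    edges = []
    for x, y in itertools.product(range(n), repeat=2):
        v = x * n + y
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u = ((x + dx) % n) * n + (y + dy) % n
            edges.append((v, u))
    return edges

def double_layer(n):
    """Two coupled n x n tori: vertices 0..m-1 form the excitatory top
    layer, m..2m-1 the inhibitory bottom layer (m = n * n). Returns a
    list of ((source, target), is_excitatory) pairs, where the edge
    sign is inherited from the source vertex's layer."""
    m = n * n
    edges = list(torus_edges(n))                          # within top layer
    edges += [(u + m, v + m) for u, v in torus_edges(n)]  # within bottom layer
    edges += [(v, v + m) for v in range(m)]               # top -> bottom projections
    edges += [(v + m, v) for v in range(m)]               # bottom -> top projections
    return [((u, v), u < m) for u, v in edges]
```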
By coupling two double-layered graphs, we obtain four subgraphs interconnected to form a graph $G$. Subgraphs $G_e^1$ and $G_i^1$ are coupled into one double layer, and $G_e^2$ and $G_i^2$ into the other. Several excitatory edges run between the subgraphs of the two double layers, and these are responsible for coupling the two double-layered lattices (Figure 7.3(b)).
The most fundamental parameter describing the state of a finite graph at time $t$ is its overall activation level, defined as
$$S(t) = \sum_{i=1}^{N} a_i(t),$$
where $a_i(t) \in \{0, 1\}$ is the state of vertex $v_i$ and $N$ is the number of vertices.
It is useful to introduce the normalized activation per vertex, $\bar{S}(t) = S(t)/N$, which is zero if all sites are inactive and 1 if all sites are active. In general, $\bar{S}(t)$ lies between 0 and 1.
The average normalized activation over a time interval $T$ is given as
$$\langle \bar{S} \rangle = \frac{1}{T} \sum_{t=1}^{T} \bar{S}(t).$$
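In code, these measures are straightforward; a minimal sketch, assuming a lattice state is stored as a NumPy array of 0s and 1s and a run history as a T x N array with one row per time step:

```python
import numpy as np

def activation(state):
    """Overall activation S(t): the number of active vertices."""
    return int(state.sum())

def normalized_activation(state):
    """Activation per vertex, 0 if all sites are inactive and 1 if all are active."""
    return activation(state) / state.size

def mean_activation(history):
    """Average normalized activation over the whole run (T x N array)."""
    return float(history.mean())
```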
The latter quantity can be interpreted as the average magnetization in terms of statistical physics. Depending on their structure, connectivity, update rules, and initial conditions, probabilistic automata can display very complex behavior as they evolve in time. Their dynamics may converge to a fixed point, to limit cycles, or to chaotic attractors. In some specific models, it is rigorously proved that the system is bistable and exhibits critical behavior. For instance, it spends a long time in either low- or high-density configurations, with mostly 0 or 1 activation values at the nodes, before a very rapid transition to the other state [54, 55]. In some cases, the existence of phase transitions can be shown mathematically. For example, the magnitude of the probabilistic component of the random majority update rule in Figure 7.1a is shown to have a critical value, which separates the region with a stable fixed point from the bistable regime [25].
In the absence of exact mathematical proofs, Monte Carlo computer simulations and the methods of statistical physics provide detailed information on the system dynamics. The critical state of the system is studied using statistical moments. On the basis of finite-size scaling theory, there are statistical measures that are invariant to system size at the state of criticality [56]; by evaluating these measures, the critical state can be located. It has been shown that, in addition to the system noise, which acts as a temperature measure in the model, there are other suitable control parameters that can drive the system to critical regimes, including the extent of rewiring and the inhibition strength [57].
Denote the control parameter by $\lambda$; it can be the noise, the rewiring fraction, the connection strength, or another suitable quantity. In this work, $\lambda$ describes noise effects and is related to the system noise level $\varepsilon$. The distributions of $S(t)$ and $\bar{S}(t)$ depend on $\lambda$. At a critical value $\lambda_c$, the probability distribution functions of these quantities change suddenly. Critical values are traditionally determined by evaluating the fourth-order moment (kurtosis, a peakedness measure) $u_4$ [56, 58–61], but they can also be measured using the third-order moment (skewness) $u_3$:
$$u_3 = \frac{\langle (\bar{S} - \langle \bar{S} \rangle)^3 \rangle}{\langle (\bar{S} - \langle \bar{S} \rangle)^2 \rangle^{3/2}}, \qquad u_4 = \frac{\langle (\bar{S} - \langle \bar{S} \rangle)^4 \rangle}{\langle (\bar{S} - \langle \bar{S} \rangle)^2 \rangle^{2}}.$$
According to finite-size scaling theory, for two structurally equivalent finite graphs $A$ and $B$ of different sizes, $u_4(A) = u_4(B)$ and $u_3(A) = u_3(B)$ at the critical point $\lambda = \lambda_c$. However, for $\lambda \neq \lambda_c$, the moments of the two graphs differ. In this study, we use both the third- and fourth-order moments to determine the critical points of the system.
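The size-invariance criterion lends itself to a simple numerical recipe, sketched below. The driver run_model(size, eps) is a hypothetical stand-in that should return the time series of normalized activation for a given lattice size and noise level; only the moment calculations follow the definitions above.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def moments(sbar):
    """Third- and fourth-order moments (skewness, kurtosis) of the
    normalized-activation time series."""
    return skew(sbar), kurtosis(sbar, fisher=False)

def find_critical(run_model, size_a, size_b, eps_grid):
    """Estimate the critical noise level as the point where the
    fourth-order moment is (approximately) the same for two
    structurally equivalent graphs of different sizes."""
    gaps = []
    for eps in eps_grid:
        u4_a = moments(run_model(size_a, eps))[1]
        u4_b = moments(run_model(size_b, eps))[1]
        gaps.append(abs(u4_a - u4_b))
    return eps_grid[int(np.argmin(gaps))]
```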
Two-dimensional lattices folded into tori have been thoroughly analyzed in [55]; the main results are summarized here. In the absence of rewiring, the lattice is in one of two possible states, ferromagnetic or paramagnetic, depending on the noise strength $\varepsilon$: for $\varepsilon$ below a critical value $\varepsilon_c$, the lattice is ferromagnetic (ordered), while above $\varepsilon_c$ it is paramagnetic (disordered).
For $\varepsilon < \varepsilon_c$, the vertices cease to act individually and become part of a group; their activity level is determined by the population. This regime is defined by the condition that the vertices project more common activation than they receive, in the probabilistic sense. This transition is the first building block of neurodynamics, and it corresponds to KI-level dynamics. Figure 7.4 illustrates the critical behavior of a 2D torus with no rewiring. Calculations show that for the same torus having 5.00% of its edges rewired, the critical probability shifts to a different value. Rewiring leads to a behavior of paramount importance for the following discussion: the critical condition is manifested as a critical region, instead of a singular critical point. We will show various manifestations of this principle in the brain. In the case of the rewiring discussed in this section, the emergence of a critical region is related to the fact that the same amount of rewiring can be achieved in multiple ways. For example, it is possible to rewire all edges with equal probability, or to select some vertices that are more likely to be rewired, and so on. In real life, as in the brain, all these conditions occur simultaneously. This behavior is illustrated in Figure 7.5, and it has been suggested as a key condition guiding the evolution of the neuropil in the infant's brain in the months following birth [54, 55, 62], giving rise to robust, broadband background activity as a necessary condition for the formation of complex spatiotemporal patterns during cognition.
Inhibitory connections in the coupled subgraph can generate oscillations and multimodal activation states [57]. In an oscillator, there are three possible regimes, demarcated by two critical points $\varepsilon_1$ and $\varepsilon_2$, as described below.
Under appropriately selected graph parameters, the distribution of $\bar{S}$ is unimodal when $\varepsilon < \varepsilon_1$, bimodal when $\varepsilon_1 < \varepsilon < \varepsilon_2$, and quadromodal when $\varepsilon > \varepsilon_2$ (Figure 7.6). Each subgraph has 2304 (5%) edges rewired within the layer and 1728 (3.75%) edges rewired toward the other subgraph. Just below $\varepsilon_2$, vertices may oscillate if they are excited: the $\bar{S}$ values of a temporarily excited graph oscillate, but return to the basal level in the absence of excitation. This form of induced oscillation is the second building block of neuron-inspired dynamics. When $\varepsilon$ is increased further, that is, $\varepsilon > \varepsilon_2$, $\bar{S}$ exhibits sustained oscillations even in the absence of external perturbation. When the double layer is temporarily excited, it returns to the basal oscillatory behavior. Self-sustained oscillation is the third building block of neuron-inspired dynamics, and it corresponds to the KII hierarchy.
Two coupled layers with inhibitory and excitatory nodes exhibit narrow-band oscillations of the activity. Such coupled lattices with appropriate topology can model any frequency range, with oscillations occurring between the critical points $\varepsilon_1$ and $\varepsilon_2$. For example, one oscillator, $O_1$, has 2.5% of its edges rewired across the two layers and 5% of its edges rewired within each layer, with corresponding critical points $\varepsilon_1^{(1)}$ and $\varepsilon_2^{(1)}$ (Figure 7.7b). Similarly, oscillator $O_2$ has its own critical points $\varepsilon_1^{(2)}$ and $\varepsilon_2^{(2)}$. Each of the coupled lattices in this example is of the same size. When coupling two such oscillators together, we have a total of four layers (tori) coupled into a unified oscillatory system. Clearly, the two oscillators coupled together may or may not agree on a common frequency. We will show that under specific conditions various operational regimes may occur, including narrow-band oscillations, broadband (chaotic) oscillations, intermittent oscillations, and various combinations of these. One of the multiple ways to couple two oscillators is to rewire excitatory edges between the corresponding subgraphs of the two oscillators. Two coupled oscillators have additional behavioral regimes. When oscillators $O_1$ and $O_2$ are coupled with 0.625% of edges into a new graph $G$, the critical point is shifted to a lower level (Figure 7.8). Every parameter describing a graph influences the critical behavior under appropriate conditions. If two connected oscillators with different frequencies cannot agree on a common mode, together they can generate large-scale synchronization or chaotic background activity.
In the graph $G$ made of two oscillators, there are four critical points, $\varepsilon_1 < \varepsilon_2 < \varepsilon_3 < \varepsilon_4$, separating five possible major operational regimes.
These regimes correspond to the fourth building block of neurodynamics, as described by KIII sets.
Finally, we study a system made of three coupled oscillators, each with its own inherent frequency. Such a system produces broadband oscillations with intermittent synchronization–desynchronization behavior. In order to quantitatively analyze the synchronization features of this complex system, with over 60 000 nodes in six layers, we merge the channels into a single two-dimensional array at the readout. These channels are used to measure the synchrony among the graph components. First, we find the dominant frequency of the entire graph using the Fourier transform, after ensemble averaging across the channels for the duration of the numerical experiment. We set a time window $W$ spanning two periods of the dominant frequency. At each time step, each channel is correlated with the ensemble average over the window $W$. Finally, we determine the dominant frequency of each channel and of the ensemble average at time $t$, and calculate the phase shifts. The difference in the phases at the dominant frequencies measures the phase lag between a channel and the ensemble.
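The measurement pipeline can be sketched as follows. The sampling step dt, the trailing window of two dominant periods, and the phase estimation by projection onto a complex exponential are our own implementation assumptions.

```python
import numpy as np

def dominant_frequency(signal, dt=1.0):
    """Frequency with the largest power in the FFT spectrum (DC excluded)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=dt)
    return freqs[1:][np.argmax(spectrum[1:])]

def phase_lags(channels, dt=1.0):
    """Phase lag of each channel relative to the ensemble average,
    measured at the ensemble's dominant frequency over a trailing
    window of two periods. channels: (n_channels, n_samples) array."""
    ensemble = channels.mean(axis=0)
    f0 = dominant_frequency(ensemble, dt)
    window = int(round(2.0 / (f0 * dt)))   # two periods, in samples
    probe = np.exp(-2j * np.pi * f0 * dt * np.arange(window))
    ens_phase = np.angle(np.sum(ensemble[-window:] * probe))
    ch_phase = np.angle(np.sum(channels[:, -window:] * probe, axis=1))
    return ch_phase - ens_phase
```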
We plot the phase lags across time and space to visualize the synchrony between graph components (see Figure 7.9). We observe intermittent spatiotemporal synchronization patterns as a function of $\varepsilon$, which represents the noise component of the evolution rule. For low $\varepsilon$ values, the phase pattern is unstructured (see Figure 7.9(a)). At a threshold value, intermittent synchronization occurs (Figure 7.9b). Further increasing $\varepsilon$, we reach a regime with highly synchronized channels. Intermittent synchronization across large cortical areas has been observed in ECoG and EEG experiments as well [27, 30]. The correspondence between our theoretical and computational results and the experimental data is encouraging. Further detailed studies are needed to establish the significance of these findings.
According to the principles of neurodynamics, learning reinforces Hebbian cell assemblies, which respond to known stimuli [16, 23, 24] by destabilizing the broadband chaotic dynamics and leading to an attractor basin through a narrow-band oscillation with a gamma-band carrier frequency. To test this model, we implemented learning in the neuropercolation model. We use Hebbian learning; that is, the weight from node $i$ to node $j$ is incrementally increased if the two nodes are in the same state, $(1, 1)$ or $(0, 0)$, and incrementally decreased if the two nodes have different activations, $(1, 0)$ or $(0, 1)$. Learning is maintained during the 20-step periods when input is introduced.
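A minimal sketch of this rule; the learning increment delta is a hypothetical parameter, not a value taken from the reported simulations.

```python
def hebbian_update(w_ij, state_i, state_j, delta=0.01):
    """Incremental Hebbian rule applied while an input is presented.

    The weight from node i to node j grows by delta when the two
    nodes are in the same state, (1, 1) or (0, 0), and shrinks by
    delta when their states differ, (1, 0) or (0, 1).
    """
    return w_ij + delta if state_i == state_j else w_ij - delta
```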
In the basal mode without learning, the three double layers can generate broadband chaotic oscillations; see Figure 7.10a, where the lower row is a zoomed-in version of the top row. The spikes in Figure 7.10a indicate the presence of input signals. Inputs are implemented by flipping a small percentage of the input-layer nodes to state "1" (active) for the duration of 20 iteration steps. During the learning stage, inputs are introduced 40 times (20 steps each), at regular intervals of 500 iteration steps. Without learning, the activity returns to the low-level chaotic state soon after the input ceases. Figure 7.10b shows that a narrow-band oscillation becomes prominent during learning, when a specific input is presented. After learning, the oscillatory behavior of the lattice dynamics is more prominent even without input, but the learnt input elicits much more significant oscillations. This is the manifestation of the sixth and seventh principles of Freeman's neurodynamics, and it can be used to implement classification and control tasks with the neuropercolation model.
Further steps, toward the eighth and ninth principles, involve the analysis of the AM patterns, which are converted to feature vectors of dimension $N$. Here, $N$ is the number of nodes in our lattice which, after some coarse graining, correspond to the EEG/ECoG electrodes. They provide our observation window into the operation of the cortex or of the cortex simulation. The feature vector is used to describe or label the cortical state at any moment. The AM pattern is formed by a low-dimensional, quasi-limit-cycle attractor after synaptic changes with learning. The synaptic weights are described in a matrix, and the combination of different modalities to form Gestalts is done by concatenation of feature vectors. Research in this direction is in progress.
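A sketch of the feature-vector construction and the concatenation step; the choice of the per-node mean amplitude over a short trailing window as the AM measure is our assumption.

```python
import numpy as np

def am_feature_vector(history, window):
    """Feature vector of dimension N: per-node mean activation over
    the last `window` steps of a (T, N) run history, one component
    per (coarse-grained) lattice node, i.e., per model 'electrode'."""
    return history[-window:].mean(axis=0)

def gestalt_vector(*modality_features):
    """Combine feature vectors of different sensory modalities by
    concatenation, as described above."""
    return np.concatenate(modality_features)
```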
We employed the neuropercolation model to demonstrate basic principles of neurodynamics. Neuropercolation is a family of stochastic models based on the mathematical theory of probabilistic CA on lattices and random graphs and motivated by structural and dynamical properties of neural populations. The existence of phase transitions has been demonstrated in probabilistic CA and percolation models. Neuropercolation extends the concept of phase transitions to large interactive populations of nerve cells near criticality. Criticality in the neuropil is characterized by a critical region instead of a singular critical point. The trajectory of the brain as a dynamical system crosses the critical region from a less organized (gaseous) phase to a more organized (liquid) phase during input-induced destabilization and vice versa. Neuropercolation simulates these important results from mesoscopic ECoG/EEG recording across large spatial and temporal scales.
We showed that neuropercolation is able to naturally model input-induced destabilization of the cortex and the consequent phase transitions due to sensory input and learning effects. Broadband activity with scale-free power spectra is observed for unlearnt conditions. On the other hand, Hebbian learning results in the onset of narrow-band oscillations, indicating the selection of specific learnt inputs. These observations demonstrate the operation of the first seven building blocks of neurodynamics. Our results indicate that the introduced Hebbian learning effect can be used to identify and classify inputs. Considering that our model has six coupled lattices, each with up to 10 000 nodes, massive computational resources have been involved in the simulations. Parallel chip implementation may be a natural way to expand the research in the future, especially when extending the work for multisensory Gestalt formation in the cortex.