What does it mean to be in a resting state? This is one of the central questions in neuroscience, where the brain’s spontaneous activity has been the subject of intense debate (see, for instance, Cabral et al., 2013; Northoff, 2014a,b; Raichle, 2001, 2010; Shulman et al., 2014). This question has also become a subject for philosophy (see the recent excellent paper by Klein, 2014). Because its exact meaning, role, and purpose all remain unclear, the brain’s spontaneous activity is typically defined in purely operational terms by the absence of specific external stimuli (see Logothetis et al., 2009). The brain’s spontaneous activity, then, often acts as a baseline, especially in functional brain imaging (such as fMRI) (see Klein, 2014; Morcom & Fletcher, 2007a,b). In other words, the brain’s spontaneous activity serves as a reference for determining the contours of task-evoked or stimulus-induced activity (these terms are used interchangeably throughout this volume). Whether it is feasible for spontaneous activity to serve as such a baseline has been debated in both neuroscience (Morcom & Fletcher, 2007a,b) and philosophy (Klein, 2014).
One reason for doubt about the use of spontaneous activity as a reference for task-evoked activity is its dynamic character. Specifically, the fact that spontaneous activity appears to change in ways that trace back to task-evoked activity calls into question the use of the former as a way of demarcating the latter. The stimuli or task may impact spontaneous activity by changing its level, degree of functional connectivity, or variability, which has been called stimulus–rest interaction (see Northoff et al., 2010; Schneider et al., 2008). Additionally, data for the converse scenario, spontaneous activity impacting the subsequent stimulus-induced or task-evoked activity, that is, rest–stimulus interaction, have been reported (see He, 2013; Northoff et al., 2010; Sadaghiani, Hesselmann, et al., 2010).
The two kinds of interaction between spontaneous and stimulus-induced activity just discussed provide reason for skepticism regarding whether the spontaneous can serve as an absolute and independent reference for stimulus-induced activity. The apparent impossibility of a clean-cut segregation of spontaneous and stimulus-induced activity suggests that shifting focus to their relation could be heuristically valuable. Klein (2014), for instance, suggested that the two types of neural activity involve different temporal dimensions: spontaneous activity can operate on long-term time scales across hours, days, and months if not years, whereas stimulus-induced activity is limited to the very short-term time scales in which particular stimuli are processed. This is a promising hypothesis, but Klein does not explain the exact nature of their relation, that is, how long- and short-term time scales interact and are integrated with each other.
I suggest that there are at least two plausible ways that spontaneous and task-evoked activity could be related to one another. It could be that they operate in parallel or that they interact with one another. Parallelism is the view that spontaneous and task-evoked activities are decidedly independent neural phenomena. An interactionist view, in contrast, claims either that stimulus-induced activity is unilaterally dependent on the spontaneous or that there is a mutual dependence between them.
Importantly, one could distinguish between strong and weak forms of both parallelism and interactionism. Strong parallelism would not allow for any kind of relation such as spatial or temporal overlap between spontaneous and task-evoked activity. In contrast, weak parallelism may posit a spatial or temporal overlap between spontaneous and task-evoked activity but without the latter changing the ongoing levels or features of the former, or vice versa. In other words, weak parallelism posits independence in the levels (and features) of both forms of activity: spontaneous activity remains the same irrespective of whether task-evoked activity is present or not, and task-evoked activity remains the same independent of the level of spontaneous activity.
There could also be weak and strong forms of interactionism. Weak interactionism could be signified by a relation such as additive interaction, that is, mere superposition between spontaneous and task-evoked activity without mutual change. Taken in this sense, weak interactionism may overlap to a significant degree with weak parallelism. For this reason I will focus my discussion only on the former (while neglecting the latter). Strong interactionism, in contrast, not only allows for mere superposition, such as spatiotemporal overlap and additive interaction, but goes further by postulating reciprocal dependence and change in the levels of spontaneous and task-evoked activity.
Focusing on the discussion of parallelism versus interactionism, the present chapter can be regarded as an extension and specification of the first chapter. There I endorsed a spectrum model of the brain that considers stimulus-induced activity to result from a continuum or balance between spontaneous activity and stimuli. As pointed out, this presupposes direct interaction and reciprocal modulation between spontaneous activity and stimuli. The exact nature of this interaction, however, was left open. Clarifying the character of the interaction between spontaneous and stimulus-induced activity as well as their underlying mechanisms and principles is the overarching goal of the present chapter.
The aims of this chapter are to discuss these two models, parallelism and interactionism, and to provide arguments for and against each based on available empirical data and theoretical accounts of scientific reasoning. The first part focuses on parallelism in its strong form, whereas the second part investigates interactionism in both forms, weak and strong. To do this, I must set aside worries about how exactly to determine what counts as spontaneous activity. The empirical data that are most probative with respect to the relation between spontaneous and task-evoked activity are concerned purely with neural activity.
I therefore need to clarify a number of potentially important ways in which the pertinent phenomena can be characterized. For discussion about the viability of studying spontaneous activity in metabolic, biochemical, spatial, temporal, or psychological terms, see Northoff (2014a). Focusing on the purely neuronal level allows me to target the relation between spontaneous and stimulus-induced activity as indexed by spatial (i.e., functional connectivity) and temporal (i.e., fluctuations in different frequency ranges) measures.
In addition to empirical evidence, I also discuss theoretical evidence as stemming from philosophy of science (third section in this chapter: “Fundamental Principle of Brain Activity—Difference-Based Coding”). Relying on the philosopher of science R. Giere and his concept of fundamental principle, I propose that a particular coding strategy by the brain, namely difference-based coding, allows for interaction between spontaneous activity and stimuli. Therefore, I argue that difference-based coding can be regarded as a fundamental principle (or bridge principle) in the sense of Giere.
Before we go ahead, we should make a couple of clarifications. The concept of spontaneous activity is usually understood in an operational sense that denotes a behavioral state—both eyes closed and eyes open with a visual fixation cross are common examples used in neuroimaging (Logothetis et al., 2009; Northoff, 2014a,b; Raichle, 2015a,b). Psychologically, the spontaneous activity may be characterized by mind wandering, random thoughts, or stimulus-unrelated thoughts (Fox et al., 2015; Smallwood & Schooler, 2015). In contrast, I use the concept of spontaneous activity to refer to neuronal activity irrespective of any operational, psychological, or behavioral concerns (Raichle, 2015a,b). It is the spontaneous activity of the brain that Raichle termed “default-mode function” of the brain (Buckner et al., 2008; Llinás, 2001; Northoff, 2014a,b; Raichle, 2009; Raichle et al., 2001); it is this sense of the term “spontaneous activity” that I presuppose here.
It is also worth noting that one might judge the claim about strong interaction between spontaneous and stimulus-induced activity to be almost trivially true. The baseline state against which deviations are measured, whether a true neuronal resting state or a particular cognitive state, can be assumed to impact the subsequent processing of and behavior resulting from any stimuli or task. On this reading, interactionism is almost trivially true, which could be interpreted to mean that there is nothing special about the brain’s spontaneous activity.
However, the main focus here is not on these psychological and behavioral implications. Rather than focusing on behavioral or psychological states, I exclusively focus on the neuronal mechanisms that underlie interaction at the behavioral and psychological level. For that purpose I discuss different neural models of interaction between spontaneous activity and stimulus-induced activity.
One of those models, specifically the parallel model, may be considered a straw man from an empirical perspective, given the observed behavioral and psychological interactions. However, taken in a purely logical context in terms of conceivability, parallelism must nevertheless be considered an option; my aim is thus to show that it is simply not in accordance with the empirical data. It is useful to learn what it is about the design of the brain that makes strong parallelism untenable from an empirical perspective.
Roughly, parallelism is the view that spontaneous activity and stimulus-induced activity operate in parallel without any direct interaction. In order for empirical evidence about neural activity to be relevant to this claim, it needs to be made more precise. One way to do this is to consider parallelism to entail that spontaneous activity and task-evoked activity are neurally segregated from one another. This segregation could occur in one of two ways. Spontaneous activity and task-evoked activity might be spatially segregated, in which case they would transpire in distinct neuronal systems, or they could be temporally segregated, in which case they would be constituted respectively by forms of neural activity that have distinct profiles in terms of amplitude fluctuation across different frequency ranges.
The default-mode network (DMN) includes medial regions in the brain such as the anterior and posterior cingulate cortex and the medial prefrontal cortex as well as the inferior parietal cortex. The DMN got its name from its high levels of spontaneous activity (Buckner et al., 2008; Raichle, 2015a,b; Raichle et al., 2001) and is contrasted with other neural regions/networks such as the sensory or lateral prefrontal cortices and their respective sensorimotor and control-executive networks (SMN, CEN). According to this rubric, regions outside the DMN are not related to spontaneous activity. This view ascribes spatial segregation to spontaneous activity and stimulus-induced activity because each is taken to transpire in distinct neural neighborhoods. Klein (2014) describes this as the “standard thesis,” which I rephrase as the “standard view.”
Still, there are reasons to question the significance of the data in support of these claims for spatial segregation. Already in some of the early work on spontaneous activity, Simpson, Drevets, Snyder, Gusnard, and Raichle (2001) and Gusnard and Raichle (2001) showed that the DMN underwent deactivation during task-evoked activity whenever the task involved either self-referential (personally relevant stimuli such as the subject’s own name) or cognitive-attentional elements. Such deactivation indicates responsiveness to the stimulus and can therefore be taken as evidence for the claim that the DMN can in fact be operative during stimulus-induced or task-evoked activity, even if it provides little indication as to what sorts of operations it is performing.
In addition, functional connectivity within the DMN (roughly, the degree to which activity changes across time in different parts of the DMN can be said to correlate with one another) has been shown to change during exposure to tasks or stimuli. This phenomenon has been described as “background functional connectivity” (Smith et al., 2009) and serves as strong indication that spontaneous activity in the DMN is preserved during and, at the same time, modulated by stimulus-induced or task-evoked activity. If the two were entirely independent, one would expect that task-evoked activity would fail to disturb the DMN. Unless one maintains that changes in the DMN’s functional connectivity during the performance of tasks constitute a coincidence, this finding is reason to doubt the parallelism thesis even if one accepts the spatial segregation hypothesis of the “standard view.”
Additionally, the parallelist view of resting state/task-evoked activity can be undermined by resisting the spatial segregation hypothesis. If it is the case that spontaneous activity occurs outside the DMN, then it would no longer be viable to endorse parallelism on the basis of claims about spatial segregation between spontaneous activity and task-evoked activity. A number of studies have begun to illuminate the presence of spontaneous activity in neural regions outside the DMN. In fact, it has been shown that regions of the brain often thought to be dedicated to stimulus-induced and task-evoked activity, like the CEN and SMN, can themselves be characterized as involving spontaneous activity (see Klein 2014; Northoff, 2014a; Shulman et al., 2014).
Note, however, that the findings considered so far speak against spatial parallelism in particular but not parallelism in general. There could still be parallel processing between spontaneous activity and stimulus-induced activity within one and the same region or network. It is conceivable that both forms of activity occur in various regions/networks but that they remain completely independent of each other in each region. In order to evaluate the prospects for this form of parallelism, the temporal features of spontaneous activity and task-evoked activity need to be investigated.
Parallelism is not necessarily ruled out by the above arguments against the spatial segregation hypothesis. If it could be shown that spontaneous activity and task-evoked activity are constituted by fluctuations in completely different frequency ranges, parallelism might still be vindicated. For instance, it could be that infraslow frequency fluctuations occur only in spontaneous activity, whereas higher frequency fluctuations only occur during stimulus-induced activity. This would provide some evidence for parallelism.
To evaluate the hypothesis that spontaneous activity and task-evoked activity are temporally segregated along these lines, a brief recap of the temporal features of neural activity is needed. The brain’s neural activity can be characterized by fluctuations in different frequency ranges. Infraslow frequency fluctuations lie between 0.001 and 0.1 Hz (as measured with fMRI) and are complemented by slow (0.1 to 4 Hz: slow and delta) and faster frequency ranges between 5 and 8 Hz (theta), 8–12 Hz (alpha), 12–30 Hz (beta), and 30–180 Hz (gamma) (as measured with EEG) (Buzsáki, 2006; Engel, Gerloff, Hilgetag, & Nolte, 2013; Northoff, 2014a). Importantly, these different frequency ranges occur throughout the whole brain in various regions and networks, although there are some differences that result from the degree of spatial extension in the networks. Due to their longer phase durations, the infraslow frequency fluctuations are spatially more extended, that is, spread over more regions, than the more localized higher frequency fluctuations such as gamma (Buzsáki, 2006; Northoff, 2014a).
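For orientation, the taxonomy of frequency bands just listed can be summarized in a small lookup sketch. The boundaries follow the text, lightly regularized so that the ranges are contiguous (e.g., theta is taken as 4–8 Hz); the helper function and band labels are mine, for illustration only.

```python
# Frequency bands (in Hz) as given in the text, with edges made
# contiguous for the sake of the sketch; upper bounds are exclusive.
BANDS = [
    ("infraslow", 0.001, 0.1),    # fMRI range
    ("slow/delta", 0.1, 4.0),     # EEG ranges from here on
    ("theta", 4.0, 8.0),
    ("alpha", 8.0, 12.0),
    ("beta", 12.0, 30.0),
    ("gamma", 30.0, 180.0),
]

def band_of(freq_hz: float) -> str:
    """Return the name of the band a given frequency falls into."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return "outside measured range"

print(band_of(0.05))  # infraslow (typical resting-state fMRI fluctuation)
print(band_of(40.0))  # gamma (typical fast EEG fluctuation)
```

The point of the taxonomy for the argument is simply that all of these bands can, in principle, be observed in both spontaneous and stimulus-induced activity.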
Spontaneous activity in the DMN shows infraslow frequency fluctuations (0.001 to 0.1 Hz) that are slower, stronger in their power, and more variable than in other networks such as SMN and CEN (Lee, Northoff, & Wu, 2014). This provides some reason to suspect that infraslow frequency fluctuations are specific to spontaneous activity, but this claim does not withstand empirical scrutiny. As demonstrated by Smith et al. (2009), infraslow frequency fluctuations in DMN are preserved and modulated during task-evoked activity as manifest in “background functional connectivity.”
In addition to the spatial features already discussed, functional connectivity also includes a strong temporal component in that it is calculated on the basis of statistical correlation, that is, synchronization of signal changes from different regions across different time points (see Fingelkurts et al., 2004a–c). The data by Smith et al. (2009) suggest that infraslow frequency fluctuations occur not only in the spontaneous activity but also during stimulus-induced activity. Hence, infraslow frequency fluctuations overlap between spontaneous activity and stimulus-induced activity, which weakens the case for temporal segregation.
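The temporal character of functional connectivity can be made concrete with a toy sketch: connectivity between two regions is just the correlation of their signal time courses across time points. The signals, sampling rate, and shared infraslow component below are invented for illustration; this is not an analysis pipeline.

```python
import math
import random

random.seed(0)

# Two hypothetical regional time courses sampled every 2 s (fMRI-like).
# A shared infraslow (~0.05 Hz) component drives both regions, and
# independent noise is superimposed on each.
n = 150
shared = [math.sin(2 * math.pi * 0.05 * (2.0 * i)) for i in range(n)]
region_a = [s + random.gauss(0.0, 0.5) for s in shared]
region_b = [s + random.gauss(0.0, 0.5) for s in shared]

def pearson(x, y):
    """Pearson correlation of two equally long signal time courses."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Functional connectivity between the two regions across time points:
print(f"functional connectivity r = {pearson(region_a, region_b):.2f}")
```

Because the measure is computed across time points, anything that alters the temporal structure of a region’s signal, including a stimulus, can alter connectivity, which is why task-related changes in “background functional connectivity” count as evidence against strict independence.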
So far, I have demonstrated that empirical evidence speaks against the hypothesis of infraslow frequency fluctuations being involved in spontaneous activity but not in stimulus-induced activity. However, temporal segregation could still be viable if it were shown that high-frequency fluctuations such as gamma do not occur in the spontaneous activity but only during stimulus-induced activity. Once again however, this is not supported by empirical evidence. Even in the spontaneous activity, high-frequency fluctuations such as gamma can be observed (see Northoff, 2014a, for details).
To be sure, different regions and networks show different profiles or patterns in the relations between infraslow (0.01–0.1 Hz), slow (0.1–1 Hz), and fast (1–180 Hz) frequency fluctuations. The sensory regions such as the visual cortex may show rather strong higher frequency fluctuations (such as gamma), whereas their infraslow frequency fluctuations may not be as strong (Engel et al., 2013; Lee et al., 2014). This pattern is reversed in, for instance, the DMN, where infraslow frequency fluctuations are rather strong and higher frequency ranges are relatively weak (Buzsáki, 2006). However, the case for parallelism as a model of the relation between spontaneous activity and stimulus-induced activity requires more than this. The temporal segregation hypothesis would require that certain forms of fluctuation are present only during spontaneous activity and others only during stimulus-induced activity, a possibility that the findings reviewed in this section have ruled out.
However, the refutation of the temporal segregation hypothesis does not fully clinch the case against parallelism. Although it has been shown that spontaneous and stimulus-induced activity cannot be inferred to be independent on the basis of broad spatial or temporal features, it could still be the case that each form of neural activity has a kind of cerebral autonomy. The different frequency fluctuations may take place in multiple neural regions but still run in parallel in the sense that they do not influence one another.
The argumentative burden on this hypothesis is severe, however. It would be difficult to conclusively show that spontaneous and stimulus-induced activity have no influence on one another, especially considering that the two forms of neural activity overlap in both spatial and temporal ways. Thus, the final refutation of parallelism must await the vindication of its rival, interactionism. Fortunately, there is ample empirical evidence in support of interactionism.
Having discarded the spatial and temporal segregation hypotheses, our investigation of the relation between spontaneous and stimulus-induced activity must now explore the possibility that, despite their spatial and temporal overlap, these forms of neural activity are independent. If it can be shown that one of these is predictive of the other, or that one modulates the other (see chapter 1 for empirical support), the fate of parallelism will be sealed, and focus should be shifted to the nature and significance of their interaction.
This part of the investigation will be concerned with whether spontaneous activity and stimulus-induced activity are related to one another in an additive or nonadditive way. In a nutshell, additive interaction entails that stimulus-induced activity is merely added to the ongoing spontaneous activity without there being changes in either one that can be traced to the other. There would be nonadditive interaction, on the other hand, if it could be shown that features of the spontaneous activity are explanatory with respect to some features of stimulus-induced activity, or that there are features of stimulus-induced activity that explain changes in subsequent spontaneous activity. We will see that although there is some empirical evidence for additive interaction, the case for nonadditive interaction is stronger.
From the previous sections, we know that the only remaining way for parallelism to be considered viable as a model of the relation between spontaneous and stimulus-induced activity is for there to be only additive interaction between the two. This would require that, even though both recruit the same spatial and temporal features of neural activity, they nevertheless do not directly impact or modulate each other. Because spontaneous activity is ongoing in the brain and stimulus-induced activity occurs only when prompted by particular sensory episodes, the prospects for their interaction being merely additive can be illuminated by investigating whether the degree of stimulus-induced activity depends completely and exclusively on the stimulus alone. Unless this can be shown, parallelism must be discarded in favor of interactionism.
There have been studies on both cellular (Arieli, Sterkin, Grinvald, & Aertsen, 1996; Azouz & Gray, 1999) and regional (Becker, Reinacher, Freyer, Villringer, & Ritter, 2011; Fox et al., 2006) features of neural activity that have provided evidence for a stimulus-related signal being merely superimposed on ongoing spontaneous activity. For instance, Fox et al. (2006) showed that signal changes in motor cortex induced by a movement remained independent of the ongoing spontaneous activity in the very same region, the motor cortex. More specifically, the activity level in the motor cortex at stimulus onset, which signifies the spontaneous activity, did not exert any impact on subsequent stimulus-induced activity in the motor cortex. Hence, in this case the stimulus-induced activity seems to be added to or superimposed on top of the spontaneous activity independent of the amplitude of the latter (see left part in figures 2.1A and 2.1B).
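The additive (superposition) reading of the Fox et al. (2006) observation can be sketched in a few lines: the evoked response is a fixed increment, so subtracting the prestimulus baseline recovers the same evoked amplitude whatever the spontaneous level at onset. All numbers are invented for illustration.

```python
# Toy model of purely additive interaction: total activity is just the
# ongoing spontaneous level plus a fixed stimulus-induced increment.
EVOKED = 2.0  # hypothetical stimulus-induced amplitude

def measured_response(spontaneous_at_onset: float) -> float:
    """Total activity under the additive (superposition) model."""
    return spontaneous_at_onset + EVOKED

for baseline in (0.5, 1.0, 3.0):  # low, medium, high spontaneous activity
    evoked = measured_response(baseline) - baseline  # baseline-corrected
    print(f"baseline {baseline:.1f} -> evoked {evoked:.1f}")
# Under pure superposition the baseline-corrected evoked amplitude
# comes out identical (2.0) at every spontaneous level.
```

This is exactly the pattern the additive studies report: the spontaneous level at stimulus onset leaves no trace in the evoked amplitude.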
Figure 2.1 Nonadditive interaction (A) at three different levels of resting state (or ongoing) activity (B).
Since the degree of stimulus-evoked activity in these studies is not disturbed by differences in the amount of spontaneous activity occurring at stimulus onset, the interaction between the two is additive. Thus, in some cases at least, spontaneous and stimulus-induced activities are processed independently.
A similar superposition of stimulus-induced activity on spontaneous activity was demonstrated by Engel et al. (2013). They showed that stimulus-induced activity can be simply added to spontaneous activity by elevating the power of high-frequency fluctuations such as gamma. Importantly, in these studies spontaneous gamma power did not predict stimulus-induced gamma power. Thus, there is reason to believe that stimulus-induced activity can run parallel to spontaneous activity, independently recruiting similar spatial (regions) and temporal (amplitude of frequency fluctuations) features of neural activity.
The studies reviewed in this section provide some hope for the weak interactionist or parallelist model, but they are far from decisive. As mentioned earlier, if we find instances of dependence, for example, interaction between spontaneous and stimulus-induced activity, that is enough to shed doubt on parallelism (in at least its strong version; see the introduction of this chapter). The findings of Fox et al. (2006) and Engel et al. (2013) do not rule out such dependence; they merely indicate that sometimes stimulus-induced activity is superimposed on spontaneous activity. Parallelism (in at least its strong version) remains vulnerable to evidence for any nonadditive interactions between the two. The next section reviews studies that provide such evidence.
One measure often used to indicate stimulus-induced activity is trial-to-trial variability (TTV) which, roughly described, refers to the differences in amplitude of neural activity between different trials related to the repeated presentation of one and the same stimulus or task (Churchland et al., 2010). Importantly, TTV is measured in reference to the degree of variability at the onset of the stimulus or task, which reflects the variability of the spontaneous activity at the time of stimulus onset. This means that TTV is not a purely stimulus-related measure but one where the trial-based effects of the stimuli on variability are measured against the resting state’s level of ongoing variability.
Nor can TTV be regarded as mere noise stemming from technical artifacts rather than from physiology, that is, from neural activity itself: the spontaneous activity continuously changes its level, as indexed by its temporal variance (He, 2013). The incoming stimulus impinges on the spontaneous activity by transiently reducing this ongoing temporal variance, which we measure as TTV at both cellular and regional levels of neural activity (Churchland et al., 2010; He, 2013).
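The logic of TTV as a measure can be made concrete in a short sketch: activity is recorded across many repeated trials, variance across trials is computed before and after stimulus onset, and the poststimulus drop in that variance is the stimulus-related TTV reduction ("quenching"). All parameters below are invented for illustration.

```python
import random
import statistics

random.seed(1)

# Each simulated trial yields one prestimulus sample (resting
# variability) and one poststimulus sample (evoked response with
# reduced across-trial variability).
def run_trial(pre_sd=1.0, post_sd=0.4, evoked=2.0):
    pre = random.gauss(0.0, pre_sd)             # spontaneous activity at onset
    post = evoked + random.gauss(0.0, post_sd)  # stimulus-induced activity
    return pre, post

trials = [run_trial() for _ in range(200)]

# Trial-to-trial variability = variance across trials at a time point.
ttv_pre = statistics.variance(p for p, _ in trials)
ttv_post = statistics.variance(q for _, q in trials)

print(f"TTV before stimulus onset: {ttv_pre:.2f}")
print(f"TTV after stimulus onset:  {ttv_post:.2f}  (quenched)")
```

The key point for the argument is that TTV is defined relative to the resting state’s ongoing variability at onset, so it is a rest-referenced measure, not a purely stimulus-related one.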
The use of TTV as a measure of stimulus-induced activity carries implications regarding the relation between spontaneous activity and stimulus-induced activity. The data from Fox et al. (2006) and Engel et al. (2013) reviewed above addressed stimulus-induced activity only in terms of its amplitude without considering TTV. When stimulus-induced activity is investigated in terms of TTV, it becomes more difficult to maintain that it fails to interact with spontaneous activity.
Many studies on cellular and regional features of neural activity have shown reduction in the degree of ongoing variability in neural activity related to repeated stimuli or tasks, that is, reduction in TTV (see Churchland et al., 2010; He, 2013; White, Abbot, & Fiser, 2012). Recently, Huang, Zhang, Longtin, et al. (2017) demonstrated that the degree of stimulus-related reduction in TTV depends on the level of spontaneous activity at stimulus onset: higher levels of spontaneous activity at stimulus onset lead to higher reduction in stimulus-induced TTV, whereas lower levels of spontaneous activity at stimulus onset result in lower TTV reduction (see also Ponce-Alvarez et al., 2015, for confirmation from computational modeling). This is strong evidence that the degree of TTV is dependent on the resting state, which speaks against parallelism and in favor of interactionism.
Huang, Zhang, Longtin, et al. (2017) also performed studies on the saturation effect, which refers to the maximum possible level of neural activity the brain can generate (in a particular region or network or the whole brain) independent of whether that activity is related to spontaneous activity or stimulus-induced activity. If, for instance, the level of spontaneous activity is already high by itself, it may be close to the saturation level and hence will not leave much room for additional increases in the level of neural activity due to stimulus-induced activity. The stimulus can then no longer induce the degree of activity it would if the spontaneous activity were further from the saturation point (see also Ponce-Alvarez et al., 2015, for support of these claims from computational modeling).
Thus, the brain’s biophysical limits on the degree of activity it can generate create a link between spontaneous activity and stimulus-induced activity. Since there is a finite amount of neural activity that the brain can generate, the degree of spontaneous activity affects stimulus-induced activity by leaving more or less room for a stimulus to induce activity. This is further evidence in favor of interactionism.
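The saturation effect amounts to a simple ceiling model, which the following sketch illustrates: the evoked response is capped by whatever headroom the spontaneous level leaves below the maximum. The ceiling, drive, and spontaneous levels are invented numbers, not empirical estimates.

```python
# Toy ceiling model of the saturation effect.
CEILING = 10.0        # maximum possible regional activity (hypothetical)
STIMULUS_DRIVE = 4.0  # activity the stimulus would induce absent any ceiling

def evoked_amplitude(spontaneous: float) -> float:
    """Evoked response limited by the headroom left under the ceiling."""
    headroom = CEILING - spontaneous
    return min(STIMULUS_DRIVE, max(headroom, 0.0))

for level in (2.0, 7.0, 9.5):  # low, high, near-saturation spontaneous activity
    print(f"spontaneous {level:.1f} -> evoked {evoked_amplitude(level):.1f}")
# Near saturation (9.5), only 0.5 units of headroom remain, so the
# stimulus can no longer induce its full 4.0-unit response.
```

Even in this minimal form, the evoked amplitude is a function of the spontaneous level, which is precisely what additive superposition denies.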
The saturation effect is just one way that spontaneous activity can have an impact on subsequent stimulus-induced activity. Another stream of research has shown that different levels of spontaneous activity can have considerable impact on subsequent stimulus-induced activity without the saturation effect being a factor (He, 2013; Hesselmann, Kell, Eger, et al., 2008; Hesselmann, Kell, & Kleinschmidt, 2008; Huang, Zhang, Longtin, et al., 2017; Sadaghiani et al., 2009; Sadaghiani, Hesselmann, et al., 2010; see Northoff, Qin, & Nakao, 2010, and Northoff, Duncan, & Hayes, 2010, for review). For instance, Hesselmann, Kell, Eger, et al. (2008) showed that when the level of prestimulus activity was low in the fusiform face area (FFA), a region that is strongly implicated in processing faces, subsequent stimulus-induced activity was rather high in the same region, and this even had clear behavioral consequences: subjects with low spontaneous activity were more likely to subsequently see an ambiguous stimulus as a face (rather than a vase).
Analogous results were observed in the neural structures involved in other sensory modalities, such as the auditory cortex (see Sadaghiani et al., 2009). That study showed that certain auditory tones could be detected only when the stimulus-induced activity was preceded by high amplitude levels of prestimulus spontaneous activity in the auditory cortex. Higher prestimulus spontaneous activity levels correlated with both higher subsequent stimulus-induced activity and subjects being more likely to detect the tones. In another study (Hesselmann et al., 2008), low prestimulus activity levels in the fusiform face area led to high poststimulus amplitudes together with a high rate of face recognition. Based on these findings, the authors assume nonadditive interaction between ongoing spontaneous activity and stimulus-induced activity (Sadaghiani, Hesselmann, et al., 2010).
The likelihood of such nonadditive interaction was further bolstered by He (2013), who observed that both amplitude and TTV during stimulus-induced activity were inversely proportional to prestimulus levels of spontaneous activity. This means that lower levels of prestimulus activity predicted higher amplitudes and higher reduction in TTV during exposure to the stimulus.
That finding was further extended by Huang, Zhang, Longtin, et al. (2017), who showed that the interaction between rest and stimulus-related neural activity is affected by the phase of the ongoing infraslow frequency fluctuation. If the ongoing infraslow fluctuation is in its positive phase (corresponding to low excitability in response to external stimuli), subsequent stimulus-related amplitude and TTV reduction will be low. Conversely, if it is in its negative phase (corresponding to high excitability in response to external stimuli), subsequent stimulus-related amplitude and TTV reduction will be high. The degree of rest–stimulus interaction in a particular region or network is thus directly dependent on the phase of the prestimulus spontaneous activity at stimulus onset.
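Phase-dependent rest–stimulus interaction can be sketched as a gain that varies with the phase of the ongoing infraslow fluctuation at stimulus onset. Following the text, the negative phase corresponds to high excitability and the positive phase to low excitability; the particular gain function below is an illustrative assumption, not a fitted model.

```python
import math

def response(phase_rad: float, drive: float = 1.0) -> float:
    """Stimulus response scaled by phase-dependent excitability.

    Gain exceeds 1 in the negative (high-excitability) phase of the
    infraslow fluctuation and falls below 1 in the positive
    (low-excitability) phase.
    """
    gain = 1.0 - 0.5 * math.sin(phase_rad)
    return gain * drive

# Identical stimulus, different phases at onset:
high = response(-math.pi / 2)  # trough: high excitability, larger response
low = response(math.pi / 2)    # peak: low excitability, smaller response
print(f"negative phase -> {high:.2f}, positive phase -> {low:.2f}")
```

The sketch captures the structural point: the very same stimulus yields different stimulus-related activity depending solely on the state of the spontaneous fluctuation it encounters.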
Taken together, these results suggest that stimulus-related phenomena such as amplitude and degree of TTV are directly dependent on the level of spontaneous activity at stimulus onset or prestimulus. These data speak in favor of nonadditive (rather than additive) interaction between spontaneous and stimulus-induced activity and thus form the beginnings of a positive case for strong interactionism (see middle and right parts in figure 2.1A).
Contrary to what some of the studies explored earlier might suggest, stimulus-induced activity is not merely superimposed on spontaneous activity. Instead, it is clear that there is at least one direction of influence between them: the findings reviewed in this section establish that spontaneous activity shapes stimulus-induced activity in many ways. This is known as rest–stimulus interaction (Northoff et al., 2010), and although it alone would be sufficient for claiming that parallelism is flawed, there is still more to be said in support of an interactionist model. The next section shows that there is also the reverse relation, that is, stimulus–rest interaction in the brain.
So far, I have only explored one-half of interactionism, the influence of spontaneous activity on subsequent stimulus-induced activity, that is, the rest–stimulus interaction. The other half, the influence of stimulus-induced activity on subsequent spontaneous activity, that is, stimulus–rest interaction, remains to be addressed. The findings reviewed so far are compatible with there being nonadditive rest–stimulus interaction but only additive stimulus–rest interaction. This could support claims for a hybrid model in which interactionism characterizes the influence of spontaneous on stimulus-induced activity, whereas parallelism characterizes the influence of stimulus-induced on spontaneous activity.
If this were the case, spontaneous activity could be ascribed a degree of neural autonomy because its essential features would not change throughout stimulus-induced activity. This possibility is significant because it would mean that despite there being interaction between spontaneous and stimulus-induced activity, spontaneous activity could still serve as a reference against which stimulus-induced activity is defined. That would resolve the aforementioned controversy concerning the use of spontaneous activity as a baseline for demarcating task-evoked activity (see Klein, 2014; Morcom & Fletcher, 2007a,b).
However, empirical evidence speaks against such a scenario. Several studies have demonstrated that stimuli or tasks and their related stimulus-induced or task-evoked activities do have an impact on subsequent spontaneous activity (see Northoff, Qin, & Nakao, 2010, for a review). For instance, highly self-related or personally relevant stimuli induced higher activity levels in the midline default-mode network (DMN) regions associated with spontaneous activity during the subsequent period (the intertrial interval) when compared to weakly self-related or personally irrelevant stimuli (Schneider et al., 2008). Additionally, emotional stimulation and working memory tasks have been observed to change subsequent spontaneous activity, in the amygdala after emotional stimuli and in the dorsolateral prefrontal cortex after working memory tasks (see Northoff, Duncan, & Hayes, 2010, for a review).
Taken together, these findings suggest that spontaneous activity is just as sensitive to preceding stimulus-induced activity as the latter is sensitive to the former. It can be concluded that the rest–stimulus interaction established in the previous section is complemented by stimulus–rest interaction and that both are nonadditive. Although many empirical details still need to be worked out, the evidence currently available strongly suggests that spontaneous and stimulus-induced activity are mutually dependent on each other in several ways. It is therefore reasonable to reject all forms of parallelism and embrace interactionism.
So far I have described different models of the brain, parallelism versus interactionism, with empirical evidence tilting the balance in favor of the latter. This leaves open, however, how such interaction, especially nonadditive interaction, takes place. Answering this question requires a deeper look into the mechanisms that operate behind our observations. Specifically, it makes it necessary to investigate the brain’s coding strategy and the fundamental principles underlying the constitution and generation of its neural activity.
How is the nonadditive interaction between spontaneous activity and the stimulus possible? There must be direct interaction between the two, since otherwise they could not contribute in varying degrees to one and the same neural activity, that is, stimulus-induced activity. Additionally, both must be able to reciprocally modulate each other: strong spontaneous activity might weaken the impact of the stimulus on stimulus-induced activity, whereas a strong stimulus would weaken the influence of ongoing spontaneous activity on ensuing stimulus-induced activity.
At a glance, one may think that spontaneous activity and stimuli are too different to allow for the sort of direct interaction described above. A stimulus can be characterized as a particular event or object at a specific point in time and space, entailing a small spatiotemporal range or scale. In contrast, the spatiotemporal scale of spontaneous activity is much larger than that of typical stimuli, ranging from infraslow (0.01–1 Hz) to ultrafast gamma (up to 180 Hz) fluctuations. These differences in spatiotemporal range or scale between spontaneous and stimulus-induced activity pose a challenge for explaining how the two can directly interact with one another.
The interaction between spontaneous activity and stimuli can occur because the two share something like a common code or “common currency” that underlies their differences. One way to construct the needed bridge would be to code stimuli and spontaneous activity in direct relation to each other on the basis of their different statistical frequency distribution across time and space, that is, in terms of spatiotemporal structure. Spontaneous activity shows continuous change, which results in a certain statistical frequency distribution that I describe as “neuronal statistics” (Northoff, 2014a). The stimuli themselves follow and occur in a certain statistical frequency distribution, that is, their “natural statistics” (Barlow, 2001).
What exactly is meant by “natural statistics”? Rather than coding each stimulus by itself, Barlow suggests, the brain codes and represents “chunks of stimuli” and their details together. He calls the results of this process “gathered details” (Barlow, 2001, p. 603). Let us take the example of a complex scene: a breakfast table covered with various items of food, plates, and so on. Our glance first falls on the big teapot in the middle; then it wanders to the bread basket, and from there to the cheese plate, the jams, and the various other plates. All items are located at different spatial positions on the table and are not perceived by us simultaneously; rather, we perceive them sequentially by letting our glance wander around the table and its various items.
If one were to encode each single stimulus by itself, one would not connect all the items together and consider them to belong to one and the same table, the breakfast table. Moreover, one would not draw the connection that categorizes each item as relevant for breakfast. Despite their spatial and temporal differences, the different stimuli (and hence the different items) must be encoded in conjunction. Once they are put together during encoding, they come to constitute what Barlow describes as “chunks of stimuli” and “gathered details.”
Yet another example is the perception of a melody. We do not hear any single tone in isolation but perceive the present tone in relation to the previous one and often make predictions about the next forthcoming tone. This is only possible if we encode the present tone in relation to the previous one, thus putting both together as “chunks of tones” with “gathered details” (see Northoff, 2014b, chs. 13–15). According to Barlow, this requires that our brain encode the occurrence of the tones (and stimuli in general) in terms of their statistical occurrence in time and space. The closer in time a tone follows the preceding one, the more likely the two are to be encoded and processed together as “chunks of tones.” The same principle obviously holds for the spatial dimension: in the case of the breakfast table, the various items are spatially near to one another and are therefore highly likely to be encoded together as “chunks of stimuli.”
How can we specify the encoding strategy that results in gathered details? Let us start with what is not encoded into neural activity, since that will make it easier for us to better understand the brain’s actual encoding strategy. When perceiving a melody, for example, Barlow proposes that the sensory cortex does not encode each tone by itself. Instead of encoding single stimuli by themselves, the brain seems to encode the distribution of the stimulus.
Within a bird’s song, for example, the bird’s brain will encode the distribution of a particular tone across discrete points in physical time. And the brain may also encode the spatial position of the bird’s tone relative to, for instance, a nearby rustling of leaves. What is encoded into neural activity is thus the statistical frequency distribution of stimuli across different discrete points in physical time and space. This is what Barlow describes as the encoding of the stimuli’s “natural statistics,” the statistical frequency distribution of a stimulus across discrete positions in time and space.
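As a toy illustration of what encoding “natural statistics” might amount to computationally, one can summarize a tone’s occurrences not as single events but as a frequency distribution over temporal positions. The event times below are invented for the example, not taken from any actual recording.

```python
import numpy as np

# Hypothetical times (in seconds) at which a particular tone occurs
# within a recording -- illustrative values only.
event_times = np.array([0.12, 0.31, 0.52, 0.70, 0.91, 1.33, 1.52, 1.71])

# On this toy reading, the tone's "natural statistics" is not any
# single occurrence but the distribution of its occurrences in time,
# summarized here by the inter-event intervals.
intervals = np.diff(event_times)

# A normalized histogram approximates the statistical frequency
# distribution across discrete temporal positions.
counts, edges = np.histogram(intervals, bins=[0.0, 0.25, 0.5])
probs = counts / counts.sum()

print("inter-event intervals:", np.round(intervals, 2))
print("estimated P(interval < 0.25 s):", round(float(probs[0]), 3))
```

The same construction extends to space: replacing event times with event positions yields a spatial frequency distribution, which is what, on Barlow’s account, the brain encodes rather than each stimulus by itself.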
Having described natural statistics, it is now imperative to clarify the nature of “neuronal statistics.” Externally generated events in the environment are encoded in terms of their statistical frequency distributions, or natural statistics, into the brain’s neural activity, the result of which is stimulus-induced activity. The same holds, analogously, for the brain’s spontaneous activity itself. Internally generated events within the brain are encoded in terms of their statistical frequency distributions, or neuronal statistics, the result of which is spontaneous activity.
This phenomenon has major implications. The encoding of the external stimuli’s natural statistics into neural activity is only possible through interaction with the neuronal statistics that characterize spontaneous activity. The interaction between external stimulus and spontaneous activity can consequently be sketched as an interaction between two different statistics, natural and neuronal.
Let us reconstruct the interaction between stimulus and spontaneous activity and their respective statistics in more detail. The brain’s spontaneous activity, through its neuronal statistics, encodes stimuli as statistical frequency distributions across different points in time and space. The resulting neural activity, the stimulus-induced activity, then reflects the statistically based differences between the spontaneous activity’s neuronal statistics and the stimuli’s natural statistics; this amounts to difference-based coding (see Northoff, 2014a, for empirical detail). Thus, statistically based differences provide the “common currency” between the spontaneous activity’s neuronal statistics and the stimuli’s natural statistics. This common currency, I contend, constitutes the relation that brains bear to the wider world in which they exist, which amounts to what I describe in chapter 3 as the world–brain relation (see figures 2.2A and 2.2B).
Figure 2.2 Different models of neural coding. The figure depicts two different models of neural coding: difference-based coding (A) and stimulus-based coding (B). The upper part of each panel illustrates the occurrence of stimuli across time and space, as indicated by the vertical lines. The lower part of each panel, with the bars, stands for the action potentials elicited by the stimuli, with the blue arrow indicating the link between stimuli and neural activity. (A) In the case of difference-based coding, the stimuli and their respective temporal and spatial positions are compared, matched, and integrated with each other. In other terms, the differences between the different stimuli across space and time are computed, as indicated by the dotted lines. The degree of difference between the stimuli’s spatial and temporal positions in turn determines the resulting neural activity. The different stimuli are thus dependent on each other when encoded into neural activity. Hence, there is no longer a one-to-one matching between stimulus and neural activity. (B) This is different in the case of stimulus-based coding. Here each stimulus, including its respective discrete position in space and time, is encoded into the brain’s neural activity. Most important, in contrast to difference-based coding, each stimulus is encoded by itself, independent of the other stimuli. This results in a one-to-one matching between stimuli and neural activity.
We can now explain how difference-based coding makes the nonadditive interaction between spontaneous and stimulus-induced activity possible. Nonadditive interaction is possible only if the spontaneous activity can directly interact with the stimulus and impact the degree to which it elicits stimulus-induced activity in the brain. Different degrees of nonadditive interaction are mediated by different degrees of statistically based matching between the spontaneous activity’s neuronal statistics and the stimuli’s natural statistics. This means that the better the two statistics match in their statistically based differences, the more strongly the spontaneous activity’s neuronal statistics can impact the stimulus and its natural statistics, and the higher the degree of nonadditive interaction.
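The claimed dependence of interaction strength on statistical matching can be sketched in a toy computation. The distributions below and the use of the Bhattacharyya coefficient as a matching measure are my illustrative choices, not part of the empirical literature reviewed here.

```python
import numpy as np

# Toy frequency distributions over four discrete spatiotemporal
# positions: the spontaneous activity's "neuronal statistics" and two
# candidate stimuli's "natural statistics" (all values hypothetical).
neuronal = np.array([0.1, 0.4, 0.4, 0.1])
natural_close = np.array([0.15, 0.35, 0.35, 0.15])  # similar statistics
natural_far = np.array([0.45, 0.05, 0.05, 0.45])    # dissimilar statistics

def overlap(p, q):
    """Bhattacharyya coefficient: 1 for identical distributions,
    0 for disjoint ones. A simple stand-in for statistical matching."""
    return float(np.sum(np.sqrt(p * q)))

# On the difference-based reading sketched in the text, the degree of
# nonadditive interaction scales with how well the statistics match.
print("match, similar stimulus:   ", round(overlap(neuronal, natural_close), 3))
print("match, dissimilar stimulus:", round(overlap(neuronal, natural_far), 3))
```

A stimulus whose natural statistics resemble the spontaneous activity’s neuronal statistics yields a higher matching score and, on this sketch, a stronger nonadditive interaction.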
Let us consider a thought experiment. Imagine there were stimulus- rather than difference-based coding. In such a case, the stimulus would be encoded only in an isolated way, at its discrete point in time and space, untethered by any statistically based relation to other stimuli or to the brain’s spontaneous activity. This would make any direct interaction (e.g., reciprocal modulation) between spontaneous activity and stimuli less likely. Stimulus-induced activity would supervene on the ongoing spontaneous activity in a merely additive way. In short, stimulus-based coding precludes nonadditive interaction. This suggests that difference-based coding may be what underlies nonadditive interaction.
How would such stimulus-based coding affect a person’s perception and cognition of external events in the environment? Temporally separate stimuli could no longer be integrated and linked. Consider face perception: one starts by looking at the eyes of a person and then continues to the nose and the mouth. Difference-based coding allows for encoding the statistically based temporal differences among eyes, nose, and mouth as instances of natural statistics, which makes possible their integration and relation, as when we perceive them as parts of one face. Stimulus-based coding, by contrast, would leave eyes, nose, and mouth encoded as isolated stimuli.
In the previous section I argued that the interaction model requires a particular coding strategy, namely, difference-based coding. There is still more to say about the relation between the interaction model and difference-based coding. To clarify the nature of this relation, we may look into the philosopher of science Ronald Giere’s thoughts on the determination of models and fundamental principles.
What are models?
Models posit particular relations between different events or features we observe. The human sensory system is far better at observing certain events or features than it is at clarifying the relation that holds between things perceived. For understanding and capturing the relation between the different observed events or features, we construct models. These models can then, in turn, be tested experimentally.
Let us consider the relation between the Earth and the sun as a paradigmatic example. The Ptolemaic geocentric model took the Earth as the center of the universe, around which the sun revolves. Copernicus, with the Copernican revolution (see chapter 15 for the application of that revolution to philosophy), reversed that relation and suggested a different model, a heliocentric model: now the Earth revolves around the sun, which is the center of the universe. Slowly, investigators elaborated ever more precise ways to test the relative empirical plausibility of the two models. As we all know, the brilliant scientific observations of Galileo and Newton tipped the balance toward the Copernican model.
Let us apply this to our interaction model. The interaction model establishes a relation between spontaneous activity and stimuli and addresses the question of how they interact with each other. As discussed above, the interaction model is supported by empirical investigation. For instance, investigators have directly compared additive and nonadditive interaction models by checking which model made better predictions about the prestimulus amplitude and phase dependence of stimulus-induced activity (He, 2013; Huang, Zhang, Longtin, et al., 2017).
A close relation to observed reality distinguishes models from fundamental principles. Following Giere (1999, 2004, 2008a, 2008b), fundamental principles refer to “abstract entities or objects” that structure and provide templates for subsequent development of models that target more concrete and specific features. Importantly, unlike laws, principles do not result from empirical universalization, nor can they be traced to (or subsumed under) some logicolinguistic structure or formalism (which distinguishes them from the propositions of logic and mathematics). Instead, principles must be conceived as constructions developed by the scientist to explain her or his models and data. Taken in this way, principles may be regarded as “vehicles for making empirical claims” (Giere, 2004, p. 745).
Fundamental principles are highly abstract in that they are far removed from any specific aspect or feature in the world itself (Cartwright and Giere also distinguish between fundamental and bridge principles; see below). Examples of fundamental principles include the principles of mechanics (Newton), of electromagnetism (Maxwell), of relativity (Einstein), of uncertainty and quantum mechanics (Bohr, Heisenberg), of thermodynamics (Prigogine), of natural selection (Darwin), and of genetics (Mendel) (Giere, 1999, p. 7, 2004, pp. 744–745). These are fundamental principles that guide our scientific investigation of the world and its nature in physics, chemistry, and biology. Each fundamental principle is posited to make sense of observations; the principles thus remain abstract insofar as they are distinct from the observations themselves.
As indicated above, Giere (1999, 2004, 2008a) characterizes principles by (1) their abstract objects or entities and (2) their high degree of abstraction, with no direct physical realization and no specific values of the supposed variables (like “drawings of an architect that were never built”) (Giere, 2004, p. 745, 2008a, p. 5). Both criteria are met by difference-based coding. Difference-based coding makes an ontological commitment to abstract objects, namely, statistically based differences. These underlie the events or objects we perceive, but we do not directly perceive them. Statistically based differences can thus be compared to the force of gravity, which we do not observe as such but which is inferred from the effects we do observe.
The same holds for the differences implicated in difference-based coding. We can observe only stimuli: single, isolated stimuli separated from each other in terms of their location or position in time and space. In contrast, we do not directly perceive the statistically based differences between the different stimuli, that is, their temporal and spatial differences. On a more general level, this amounts to an inability to perceive the statistically based differences that constitute spatiotemporal relations between different stimuli, that is, their natural statistics. Our perceptual inability has an empirical analogue in neuroscientists’ inability to link and relate spontaneous and stimulus-induced activity through direct observation of neural activity.
We are here focusing on the first inability, our principal inability to perceive the statistically based spatiotemporal differences between different stimuli. I postulate that this inability has major repercussions for how we conceive of difference-based coding. The best we can do is to grasp the statistically based differences indirectly, for instance, through computational modeling (and mathematical formalization). Difference-based coding thus refers to an abstraction: the process of comparing and matching the difference-based statistical frequency distributions of the spontaneous activity’s neuronal statistics and the stimuli’s natural statistics.
There is no direct physical realization of differences; they are just statistical relations, which compare well to other examples of fundamental principles provided by Giere, such as numerical relations, geometrical figures, or square roots (Giere, 2008a, p. 5). Moreover, the differences are spatiotemporal. The differences are temporal: at various points in time they could refer to statistical differences between the occurrence of dynamic changes in spontaneous activity or stimuli. Additionally, the differences are also spatial: they refer to statistical differences between the occurrence of stimuli on the one hand and spontaneous activity’s dynamic changes on the other at different points in space.
Accordingly, difference-based coding is intrinsically spatiotemporal: it consists of the detection of spatiotemporal differences across events and objects through their influence on ongoing spatiotemporal differences in the brain’s spontaneous activity. Since we cannot directly access or observe statistically based spatiotemporal differences, the concept of difference presupposed therein is highly abstract and thus an ideal candidate for a fundamental principle (in the sense of Giere).
What is the role and function of fundamental principles? Giere argues that fundamental principles serve as a “general template” for organizing and structuring the features or aspects in a model including their relations (Giere, 2004, p. 745, 2008a, p. 5). Difference-based coding can serve to structure and organize models of the brain’s neural activity such as the interaction model. I have made a case for the claim that difference-based coding provides insight into the neuronal mechanisms underlying the nonadditive interaction between spontaneous activity and stimuli.
The idea, essentially, is that this nonadditive interaction results from statistically based spatiotemporal differences between the spontaneous activity’s neuronal statistics and the stimuli’s natural statistics. This analysis is not a generalization from empirical data but, rather, an attempt to infer how the relevant empirical data could accumulate in support of the interaction model. As Giere might describe it, difference-based coding results from tracing our model of the data, the interaction model, to some underlying principle that can serve as a general umbrella and “vehicle for making empirical claims” (Giere, 2004, p. 745).
Despite the case I have presented, it could still be that difference-based coding is not a fundamental principle of the brain’s neural activity. To secure that status, one would need to demonstrate that all neural activity in the brain, including nonadditive interaction, is constituted by difference-based coding. Moreover, one would need to demonstrate, on either empirical or theoretical grounds, that without difference-based coding there would be no nonadditive interaction at all and, in the most extreme case, no neural activity at all. Only if all this were shown to be the case would it be assured that difference-based coding is a fundamental principle of the brain’s neural activity.
Future investigation may demonstrate that difference-based coding underlies all forms of stimulus-induced activity and spontaneous activity. That would lend further empirical support to the supposition that difference-based coding really underlies neural activity in general and can therefore be conceived as a fundamental principle of the brain’s neural activity (see chapter 4 and especially Northoff, 2014a, for additional support in this direction).
I have discussed different models of the relation between the brain’s spontaneous and stimulus-induced activity. I distinguished between parallelist and interactionist accounts and considered empirical evidence for and against each. On several points, the evidence is clear. First, I have shown that despite the appeal of the standard view, which localizes spontaneous activity entirely within the slower fluctuations that occur in the DMN’s neural activity, spontaneous and stimulus-induced activity are neither spatially nor temporally segregated from one another. Second, I have shown that spontaneous and stimulus-induced activity do have effects on one another and that these transpire in a nonadditive fashion.
Third, venturing into the philosophy of science, I opted for difference-based coding as a fundamental principle (or bridge principle) to underlie the interaction model. Although this is sufficient to accept interactionism and reject parallelism, much more empirical investigation is required to illuminate the ways that these forms of neural activity interact. There are also important conceptual issues that the foregoing argument barely touches on.
For example, it is not clear what implications the empirical case for interactionism has for a proposal such as the one advanced by Klein (2014), who argues that spontaneous and stimulus-induced activity may involve very different time scales: spontaneous activity may cover a much larger, long-term time scale than stimulus-induced activity. If Klein is right, one could conceive of the interactionist model of spontaneous and stimulus-induced activity in temporal terms. This raises the possibility that the nonadditive interaction between them serves to integrate the information contained in short-term stimulus-induced activity into the longer-term spontaneous activity. This is an empirically tractable possibility.
Further investigation could, for example, investigate whether the nonadditive interaction between spontaneous and stimulus-induced activity is related to the coupling of long-term infraslow and short-term high-frequency fluctuations, that is, cross-frequency coupling. By coupling different frequencies, the spontaneous activity constructs a certain temporal structure, a sort of grid of temporal continuities in neural activity across different time scales, that is, in the different coupled frequencies. This temporal structure may be central for processing stimuli and providing the kind of nonadditive interaction effects discussed in the first two sections of this chapter. However, the exact relation between cross-frequency coupling and rest–stimulus interaction remains to be explored.
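Cross-frequency coupling of the phase–amplitude kind can be illustrated with a small synthetic example. The frequencies below are chosen purely for convenience (real infraslow fluctuations are far slower than the 0.5 Hz used here), and the coupling is built into the toy signal by construction.

```python
import numpy as np

fs = 200.0                      # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)    # 60 s of toy signal

# A slow oscillation (illustratively 0.5 Hz rather than truly
# infraslow) modulates the amplitude of a 40 Hz fast oscillation,
# i.e., built-in phase-amplitude coupling.
slow_phase = 2 * np.pi * 0.5 * t
fast = (1.0 + 0.8 * np.cos(slow_phase)) * np.sin(2 * np.pi * 40.0 * t)

# Crude coupling index: compare average fast power near the slow
# cycle's peaks versus near its troughs.
power = fast ** 2
near_peak = np.cos(slow_phase) > 0.95
near_trough = np.cos(slow_phase) < -0.95

print(f"mean fast power near slow peaks:   {power[near_peak].mean():.2f}")
print(f"mean fast power near slow troughs: {power[near_trough].mean():.3f}")
```

The fast oscillation carries far more power at one phase of the slow cycle than at the other, which is the signature a phase–amplitude coupling analysis would detect; how such coupling relates to rest–stimulus interaction is, as noted above, an open empirical question.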
Future investigation may also reveal whether the integration of information contained in different temporal scales has behavioral or phenomenal significance. As indicated in the first two sections of this chapter, the nonadditive interaction may strengthen stimulus-induced activity, which in turn may make it more likely that we detect the respective stimulus. Furthermore, the integration of different time scales, that is, long- and short-term scales, may be particularly relevant for subjective consciousness, wherein we experience fleeting short-term contents that appear to depend on a contrast with a relatively stable long-term background in order to reach awareness.
One would consequently expect the degree of nonadditive rest–stimulus interaction to be directly proportional to the degree of consciousness associated with that respective stimulus and its contents (see Northoff, 2014b). Thus, although it is important to have set the record straight regarding the problems of parallelism and the promise of interactionism, the fact that empirical evidence comes down on the side of interactionism ought to be seen as just a small early step toward understanding how the brain’s spontaneous and stimulus-induced activities conspire to manifest human mindedness.
Finally, the interaction model of the brain raises the question of its underlying fundamental principle. Based on empirical evidence, I proposed that a particular coding strategy, namely, difference-based coding, may be such an abstract fundamental principle. Difference-based coding is an empirically plausible, statistically based coding strategy that allows for direct interaction between the spontaneous activity’s neuronal statistics and the stimuli’s natural statistics.