In the previous chapter, I discussed the problems of learning and self-propagation as they apply both to machines and, at least by analogy, to living systems. Here I shall repeat certain comments I made in the Preface, which I intend to put to immediate use. As I have pointed out, these two phenomena are closely related to each other, for the first is the basis for the adaptation of the individual to its environment by means of experience, which is what we may call ontogenetic learning, while the second, as it furnishes the material on which variation and natural selection may operate, is the basis of phylogenetic learning. As I have already mentioned, the mammals, in particular man, do a large part of their adjustment to their environment by ontogenetic learning, whereas the birds, with their highly varied patterns of behavior which are not learned in the life of the individual, have devoted themselves much more to phylogenetic learning.
We have seen the importance of non-linear feedbacks in the origination of both processes. The present chapter is devoted to the study of a specific self-organizing system in which non-linear phenomena play a large part. What I here describe is what I believe to be happening in the self-organization of electroencephalograms or brain waves.
Before we can discuss this matter intelligently, I must say something of what brain waves are and how their structure may be subjected to precise mathematical treatment. It has been known for many years that activity of the nervous system is accompanied by certain electrical potentials. The first observations in this field go back to the end of the eighteenth and the beginning of the nineteenth century, and were made by Volta and Galvani in neuromuscular preparations from the leg of the frog. This was the birth of the science of electrophysiology. This science, however, advanced rather slowly until the end of the first quarter of the present century.
It is well worth reflecting why the development of this branch of physiology was so slow. The original apparatus used for the study of physiological electric potential consisted of galvanometers. These had two weaknesses. The first was that the entire energy involved in moving the coil or needle of the galvanometer came from the nerve itself and was excessively minute. The second difficulty was that the galvanometer of those times was an instrument whose mobile parts had quite appreciable inertia, and a very definite restoring force was necessary to bring the needle to a well-defined position; that is, in the nature of the case, the galvanometer was not only a recording instrument but a distorting instrument. The best of the early physiological galvanometers was the string galvanometer of Einthoven, where the moving parts were reduced to a single wire. Excellent as this instrument was by the standards of its own time, it was not good enough to record small electrical potentials without heavy distortions.
Thus electrophysiology had to wait for a new technique. This technique was that of electronics, and took two forms. One of these was based on Edison's discovery of certain phenomena pertaining to the conduction of electricity through rarefied gases and vacua, from which arose the use of the vacuum tube or electric valve for amplification. This made it possible to give a reasonably faithful transformation of weak potentials into strong potentials, and so permitted us to move the final elements of the recording device by the use of energy not emanating from the nerve but controlled by it.
The second invention also involved the conduction of electricity in vacuo, and is known as the cathode-ray oscillograph. This made it possible to use as the moving part of the instrument a much lighter armature than that of any previous galvanometer, namely, a stream of electrons. With the aid of these two devices, separately or together, the physiologists of this century have been able to follow faithfully the time course of small potentials which would have been completely beyond the range of accurate instrumentation possible in the nineteenth century.
With these means, we have been able to obtain accurate records of the time course of the minute potentials arising between two electrodes placed on the scalp or implanted in the brain. While these potentials had already been observed in the nineteenth century, the availability of the new accurate records excited great hopes among the physiologists of twenty or thirty years ago. In the use of these devices for the direct study of brain activity, the leaders were Berger in Germany, Adrian and Matthews in England, and Jasper, Davis, and the Gibbses (husband and wife) in the United States.
It must be admitted that the later development of electroencephalography has up to now been unable to fulfill the rosy hopes entertained by the early workers in the field. The data which they obtained were recorded by an ink-writer. The resulting curves are very complicated and irregular; and although it was possible to discern certain predominating frequencies, such as the alpha rhythm of about 10 oscillations per second, the ink record was not in a suitable form for further mathematical manipulation. The result is that electroencephalography became more an art than a science, and depended on the ability of the trained observer to recognize certain properties of the ink record on the basis of a large experience. This had the very fundamental objection of making the interpretation of the electroencephalograms a largely subjective matter.
In the late twenties and the early thirties, I had become interested in the harmonic analysis of continuing processes. While the physicists had previously considered such processes, the mathematics of harmonic analysis had been almost entirely confined to the study of either periodic processes or those which in some sense tended to zero as the time became large, positively or negatively. My work was the earliest attempt to put the harmonic analysis of continuing processes on a firm mathematical basis. In this, I found that the fundamental notion was that of autocorrelation, which had already been used by G. I. Taylor (now Sir Geoffrey Taylor) in the study of turbulence.1
The autocorrelation of a time function f(t) is the time average of the product of f(t + τ) and f(t). It is advantageous to introduce complex functions of the time, even though in the actual cases studied we are dealing with real functions. The autocorrelation then becomes the time average of the product of f(t + τ) and the conjugate of f(t). Whether we are working with real or complex functions, the power spectrum of f(t) is given by the Fourier transform of the autocorrelation.
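In modern terms, the statement that the power spectrum is the Fourier transform of the autocorrelation is the Wiener–Khinchin relation, and it is easily tried numerically. The following Python sketch is not part of the original text; the function names, the sampling rate, and the test signal are all illustrative choices.

```python
import numpy as np

def autocorrelation(f, max_lag):
    """Time average of f(t + tau) * conj(f(t)) for tau = 0 .. max_lag - 1."""
    f = np.asarray(f)
    n = len(f)
    return np.array([np.mean(f[k:] * np.conj(f[:n - k])) for k in range(max_lag)])

def power_spectrum(corr, dt):
    """Power spectrum as the Fourier transform of the autocorrelation."""
    # Extend to negative lags by symmetry, as for a real signal.
    full = np.concatenate([np.conj(corr[:0:-1]), corr])
    # The magnitude discards the phase ramp that comes from the lag offset.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(full))) * dt
    freqs = np.fft.fftshift(np.fft.fftfreq(len(full), d=dt))
    return freqs, spectrum

# A noisy 10-cycle-per-second rhythm, sampled at 200 samples per second.
rng = np.random.default_rng(0)
dt = 1.0 / 200.0
t = np.arange(0.0, 60.0, dt)
f = np.sin(2 * np.pi * 10 * t) + rng.normal(0.0, 1.0, len(t))
corr = autocorrelation(f, max_lag=400)     # lags out to 2 seconds
freqs, spec = power_spectrum(corr, dt)     # peak appears near 10 c/s
```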
I have already spoken of the lack of suitability of ink records for further mathematical manipulations. Before much could come of the idea of autocorrelation, it was necessary to replace these ink records by other records better adapted to instrumentation.
One of the best ways of recording small fluctuating electric potentials for further manipulation is the use of magnetic tape. This allows the storage of the fluctuating electric potential in a permanent form which can be used later whenever convenient. One such instrument was devised about a decade ago in the Research Laboratory of Electronics of the Massachusetts Institute of Technology, under the guidance of Professor Walter A. Rosenblith and Dr. Mary A. B. Brazier.2
In this apparatus, magnetic tape is used in its frequency-modulation form. The reason for this is that the reading of magnetic tape always involves a certain amount of erasure. With amplitude-modulation tape, this erasure gives rise to a change in the message carried, so that in successive readings of the tape we are actually following a changing message.
In frequency modulation there is also a certain amount of erasure, but the instruments by which we read the tape are relatively insensitive to amplitude, and read frequency only. Until the tape is so badly erased that it is completely unreadable, the partial erasure of the tape does not distort appreciably the message which it carries. The result is that the tape can be read very many times with substantially the same accuracy with which it was first read.
As will be seen from the nature of the autocorrelation, one of the tools which we need is a mechanism which will delay the reading of tape by an adjustable amount. If a length of the magnetic tape record having time-duration A is played on an apparatus having two playback heads, one following the other, two signals are generated which are the same except for a relative displacement in time. The time displacement depends on the distance between the playback heads and on the tape speed, and can be varied at will. We can call one of these f(t) and the other f(t + τ), where τ is the time displacement. The product of the two can be formed, for example, by using square-law rectifiers and linear mixers, and taking advantage of the identity

$$uv = \tfrac{1}{4}\left[(u + v)^{2} - (u - v)^{2}\right] \tag{10.01}$$
The product can be averaged approximately by integrating with a resistor-capacitor network having a time constant long compared with the duration A of the sample. The resulting average is proportional to the value of the autocorrelation function for delay τ. Repetition of the process for various values of τ yields a set of values of the autocorrelation (or rather, the sampled autocorrelation over a large time base A). The accompanying graph, Fig. 9, shows a plot of an actual autocorrelation of this sort.3 Let us note that we have shown only half the curve, for the autocorrelation for negative times would be the same as that for positive times, at least if the curve of which we are taking the autocorrelation is real.
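The two-playback-head autocorrelator can be imitated in a few lines. The sketch below is a loose software analogue, not a description of the actual M.I.T. instrument: the product is formed only from sums and squares, as the square-law rectifiers and linear mixers of the text would form it (Eq. 10.01), and the averaging is a one-pole RC-style integrator. All names and constants are illustrative.

```python
import numpy as np

def product_via_squares(a, b):
    """Form a*b from squares and sums only, as square-law rectifiers and
    linear mixers would: ab = ((a + b)**2 - (a - b)**2) / 4  (Eq. 10.01)."""
    return ((a + b) ** 2 - (a - b) ** 2) / 4.0

def two_head_autocorr(f, lag_samples, rc_samples):
    """Two-playback-head scheme: multiply the signal by a delayed copy of
    itself, then integrate with a one-pole RC-style network whose time
    constant (rc_samples) is long compared with the record. The settled
    output is proportional to C(tau) at tau = lag_samples * dt."""
    delayed, direct = f[lag_samples:], f[:len(f) - lag_samples]
    prod = product_via_squares(delayed, direct)
    out = 0.0
    for p in prod:                         # discrete RC integrator
        out += (p - out) / rc_samples
    return out

# Illustrative use on a synthetic 10 c/s signal sampled at 200 samples/second.
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / 200)
f = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=len(t))
curve = [two_head_autocorr(f, k, rc_samples=4000) for k in range(0, 80, 2)]
```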
Note that similar autocorrelation curves have been used for many years in optics, and that the instrument by which they have been obtained is the Michelson interferometer, Fig. 10. By a system of mirrors and lenses, the Michelson interferometer divides a beam of light into two parts which are sent on paths of different length and then reunited into one beam. Different path lengths result in different time delays, and the resultant beam is the sum of two replicas of the incoming beam, which may once more be termed f(t) and f(t + τ). When the beam intensity is measured with a power-sensitive photometer, the reading of the photometer is proportional to the square of the sum f(t) + f(t + τ), and hence contains a term proportional to the autocorrelation. In other words, the intensity of the interferometer fringes (except for a linear transformation) will give us the autocorrelation.
All of this was implicit in Michelson’s work. It will be seen that, by carrying out a Fourier transformation on the fringes, the interferometer yields us the power spectrum of the light, and is in fact a spectrometer. It is indeed the most accurate type of spectrometer known to us.
This type of spectrometer has only come into its own in recent years. I am told that it is now accepted as an important tool for precision measurements. The significance of this is that the techniques which I shall now present for the working up of autocorrelation records are equally applicable in spectroscopy and offer methods of pushing to the limit the information which can be yielded by a spectrometer.
Let us discuss the technique of obtaining the spectrum of a brain wave from an autocorrelation. Let C(t) be an autocorrelation of f(t). Then C(t) can be written in the form
$$C(t) = \int_{-\infty}^{\infty} e^{i\omega t}\, dF(\omega) \tag{10.02}$$
Here F is always an increasing or at least a non-decreasing function of ω, and we shall term it the integrated spectrum of f. In general, this integrated spectrum is made up of three parts, combined additively. The line part of the spectrum increases only at a denumerable set of points. Take this away, and we are left with a continuous spectrum. This continuous spectrum itself is the sum of two parts, one of which increases only over a set of measure zero, while the other part is absolutely continuous and is the integral of a positive integrable function.
From now on let us suppose that the first two parts of the spectrum—the discrete part and the continuous part which increases over a set of measure zero—are missing. In this case, we can write
$$C(t) = \int_{-\infty}^{\infty} e^{i\omega t}\, \phi(\omega)\, d\omega \tag{10.03}$$
where ϕ(ω) is the spectral density. If ϕ(ω) is of Lebesgue class L², we can write
$$\phi(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\omega t}\, C(t)\, dt \tag{10.04}$$
As will be seen by looking at the autocorrelation of the brain waves, the predominating part of the power of the spectrum is in the neighborhood of 10 cycles per second. In such a case, ϕ(ω) will have a shape similar to the following diagram.
The two peaks near 10 and −10 are mirror images of each other. The ways of performing a Fourier analysis numerically are various, including the use of integrating instruments and numerical computing processes. In both cases, it is an inconvenience to the work that the principal peaks are near 10 and −10 and not near 0. However, there are modes of transferring the harmonic analysis to the neighborhood of zero frequency which greatly cut down the work to be performed. Notice that
$$C(t)\, e^{20\pi i t} = \int_{-\infty}^{\infty} e^{i(\omega + 20\pi)t}\, \phi(\omega)\, d\omega = \int_{-\infty}^{\infty} e^{i\omega t}\, \phi(\omega - 20\pi)\, d\omega \tag{10.05}$$
In other words, if we multiply C(t) by e^{20πit}, our new harmonic analysis will give us a band in the neighborhood of zero frequency and another band in the neighborhood of frequency +20. If we then perform such a multiplication and remove the +20 band by averaging methods equivalent to the use of a wave filter, we shall have reduced our harmonic analysis to one in the neighborhood of zero frequency.
Now
$$e^{20\pi i t} = \cos 20\pi t + i \sin 20\pi t \tag{10.06}$$
Therefore, the real and imaginary parts of C(t)·e^{20πit} are given, respectively, by C(t) cos 20πt and C(t) sin 20πt. The removal of the frequencies in the neighborhood of +20 can be performed by putting these two functions through a low-pass filter, which is equivalent to averaging them over an interval of a twentieth of a second or greater.
Suppose that we have a curve where most of the power is nearly at a frequency of 10 cycles per second. When we multiply this by the cosine or sine of 20πt, we shall get a curve which is the sum of two parts, one of them varying slowly, at the difference of the two frequencies, near zero, and the other oscillating rapidly, at the sum of the two frequencies, in the neighborhood of 20 cycles per second. When we average the rapidly oscillating part over a length of a tenth of a second, we get zero. When we average the slowly varying part, we get half of the maximum height. The result is that, by the smoothing of C(t) cos 20πt and C(t) sin 20πt, we get, respectively, good approximations to the real and imaginary parts of a function having all of its frequencies in the neighborhood of zero, and this function will have the distribution of frequency around zero that one part of the spectrum of C(t) has around 10. Now let K1(t) be the result of smoothing C(t) cos 20πt and K2(t) the result of smoothing C(t) sin 20πt. We wish to obtain
$$\int_{-\infty}^{\infty} e^{-i\omega t}\left[K_{1}(t) + iK_{2}(t)\right] dt \tag{10.07}$$
This expression must be real, since it is a spectrum. Therefore, it will equal
$$\int_{-\infty}^{\infty} \left[K_{1}(t)\cos \omega t + K_{2}(t)\sin \omega t\right] dt \tag{10.08}$$
In other words, if we make a cosine analysis of K1 and a sine analysis of K2, and add them together, we shall have the displaced spectrum of f. It can be shown that K1 will be even and K2 will be odd. This means that if we do a cosine analysis of K1 and add or subtract the sine analysis of K2, we shall obtain, respectively, the spectrum to the right and to the left of the central frequency at the distance ω. This method for obtaining the spectrum we shall describe as the method of heterodyning.
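A compact numerical rendering of the method of heterodyning, under the same conventions as the text (power near 10 cycles per second, smoothing over a twentieth of a second or more). This sketch is an illustration, not the original computation; the function names and parameters are assumptions.

```python
import numpy as np

def heterodyne(C, dt, f0=10.0, avg_time=0.1):
    """Multiply the autocorrelation by cos and sin of 2*pi*f0*t (here
    20*pi*t) and smooth over avg_time seconds or more, leaving K1 and K2
    with their power in the neighborhood of zero frequency."""
    t = np.arange(len(C)) * dt
    k = max(1, int(round(avg_time / dt)))
    box = np.ones(k) / k                      # boxcar low-pass filter
    K1 = np.convolve(C * np.cos(2 * np.pi * f0 * t), box, mode="same")
    K2 = np.convolve(C * np.sin(2 * np.pi * f0 * t), box, mode="same")
    return K1, K2

def displaced_spectrum(K1, K2, dt, offsets):
    """Cosine analysis of K1 plus sine analysis of K2 (Eq. 10.08); the
    result is the spectrum displaced so that f0 sits at zero frequency.
    One-sided sums are used, which only changes the overall scale."""
    t = np.arange(len(K1)) * dt
    return np.array([dt * np.sum(K1 * np.cos(2 * np.pi * v * t)
                                 + K2 * np.sin(2 * np.pi * v * t))
                     for v in offsets])
```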
In the case of autocorrelations which are locally nearly sinusoidal of period, say, 0.1 second (such as that which appears in the brain-wave autocorrelation of Fig. 9), the computation involved in this method of heterodyning may be simplified. We take our autocorrelation at intervals of a fortieth of a second. We then take the sequence of values at 0, 1/20 second, 2/20 second, 3/20 second, and so on, and change the sign of the values at those fractions with odd numerators. We average these consecutively for a suitable length of run and get a quantity nearly equal to K1(t). If we work similarly with the values at 1/40 second, 3/40 second, 5/40 second, and so on, changing the sign of alternate quantities, and perform the same averaging process as before, we get an approximation to K2(t). From this stage on the procedure is clear.
The justification for this procedure is that the distribution of mass which is

$$+1 \text{ at the points } t = n, \qquad -1 \text{ at the points } t = n + \tfrac{1}{2} \qquad (n = 0, \pm 1, \pm 2, \ldots)$$

(time being measured in units of the period), while it is zero elsewhere, when it is subject to a harmonic analysis, will contain a cosine component of frequency 1 and no sine component. Similarly, a distribution of mass which is

$$+1 \text{ at the points } t = n + \tfrac{1}{4}$$

and

$$-1 \text{ at the points } t = n + \tfrac{3}{4}$$

will contain the sine component of frequency 1 and no cosine component. Both distributions will also contain components of the higher odd frequencies N; but since the original curve which we are analyzing is wanting or nearly wanting at these frequencies, these terms will produce no effect. This greatly simplifies our heterodyning, because the only factors which we have to multiply by are +1 or −1.
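The ±1 shortcut can be stated in a few lines of code. The sketch below assumes an autocorrelation already sampled every 1/40 second; the run length used for the consecutive averaging is an arbitrary choice of mine.

```python
import numpy as np

def heterodyne_by_signs(C40, run=8):
    """C40: autocorrelation sampled every 1/40 second. At t = k/20,
    cos(20*pi*t) = (-1)**k; at t = (2k + 1)/40, sin(20*pi*t) = (-1)**k.
    The heterodyne multipliers are therefore +1 or -1 at the sample
    points. 'run' is the consecutive-averaging length, an arbitrary choice."""
    even = C40[0::2]                     # values at 0, 1/20, 2/20, ...
    odd = C40[1::2]                      # values at 1/40, 3/40, 5/40, ...
    K1 = np.convolve(even * (-1.0) ** np.arange(len(even)),
                     np.ones(run) / run, mode="valid")
    K2 = np.convolve(odd * (-1.0) ** np.arange(len(odd)),
                     np.ones(run) / run, mode="valid")
    return K1, K2
```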
We have found this method of heterodyning very useful in the harmonic analysis of brain waves when we have only manual means at our disposal, and when the bulk of the work becomes overwhelming if we carry through all the details of the harmonic analysis without the use of heterodyning. All of our earlier work with the harmonic analysis of brain spectra has been done by the heterodyning method. Since, however, it later proved possible to obtain the use of a digital computer for which reducing the bulk of the work is not such a serious consideration, much of our later work in harmonic analysis has been done directly without the use of heterodyning. There will still be much work to be done in places where digital computers are not available, so that I do not consider the heterodyning method obsolete in practice.
I am presenting here portions of a specific autocorrelation which we have obtained in our work. Since the autocorrelation covers a great length of data, it is not suitable for reproduction as a whole here, and we give merely the beginning, in the neighborhood of τ = 0, and a portion of it further out.
Figure 11 represents the results of a harmonic analysis of the autocorrelation of which part is exhibited in Fig. 9. In this case, our result was obtained with a high-speed digital computer,4 but we have found a very good concordance between this spectrum and the one we obtained earlier through heterodyning methods by hand, at least in the neighborhood of the strong part of the spectrum.
When we inspect the curve, we find a remarkable drop in power in the neighborhood of frequency 9.05 cycles per second. The point at which the spectrum substantially fades out is very sharp and gives an objective quantity which can be verified with much greater accuracy than any quantity so far occurring in electroencephalography. There is a certain amount of indication that in other curves which we have obtained, but which are of somewhat questionable reliability in their details, this sudden fall-off in power is followed quite shortly by a sudden rise, so that between them we have a dip in the curve. Whether this be the case or not, there is a strong suggestion that the power in the peak corresponds to a pulling of the power away from the region where the curve is low.
In the spectrum which we have obtained, it is worth noting that the overwhelming part of the peak lies within a range of about a third of a cycle. An interesting thing is that with another electroencephalogram of the same subject, recorded four days later, this approximate width of the peak is retained, and there is more than a suggestion that the form is retained in some detail. There is also reason to believe that with other subjects the width of the peak will be different and perhaps narrower. A thoroughly satisfactory verification of this awaits investigations yet to be made.
It is highly desirable that the sort of work which we have mentioned in these suggestions be followed up by more accurate work with better instruments, so that the suggestions which we here make can be definitely verified or definitely rejected.
I now wish to take up the sampling problem. For this I shall have to introduce some ideas from my previous work on integration in function space.5 With the aid of this tool, we shall be able to construct a statistical model of a continuing process with a given spectrum. While this model is not an exact replica of the process that generates brain waves, it is near enough to it to yield statistically significant information concerning the root-mean-square error to be expected in brain-wave spectra such as the one already presented in this chapter.
I here state without proof some properties of a certain real function x(t, α) already stated in my paper on generalized harmonic analysis and elsewhere.1 The real function x(t, α) is dependent on a variable t running from −∞ to ∞ and a variable α running from 0 to 1. It represents one space variable of a Brownian motion dependent on the time t and the parameter α of a statistical distribution. The expression
$$\int_{-\infty}^{\infty} \phi(t)\, dx(t, \alpha) \tag{10.09}$$
is defined for all functions ϕ(t) of Lebesgue class L² from −∞ to ∞. If ϕ(t) has a derivative belonging to L², Expression 10.09 is defined as
$$-\int_{-\infty}^{\infty} \phi'(t)\, x(t, \alpha)\, dt \tag{10.10}$$
and is then defined for all functions ϕ(t) belonging to L² by a certain well-defined limit process. Other integrals

$$\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} K(\sigma_{1}, \ldots, \sigma_{n})\, dx(\sigma_{1}, \alpha) \cdots dx(\sigma_{n}, \alpha) \tag{10.11}$$

are defined in a similar manner.
are defined in a similar manner. The fundamental theorem of which we make use is that
$$\int_{0}^{1} d\alpha \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} K(\sigma_{1}, \ldots, \sigma_{n})\, dx(\sigma_{1}, \alpha) \cdots dx(\sigma_{n}, \alpha) \tag{10.12}$$

is obtained by putting

$$\sigma_{j_{1}} = \sigma_{k_{1}} = \tau_{1}, \quad \sigma_{j_{2}} = \sigma_{k_{2}} = \tau_{2}, \quad \ldots, \quad \sigma_{j_{n/2}} = \sigma_{k_{n/2}} = \tau_{n/2} \tag{10.13}$$

where the τk are formed in all possible ways by identifying all pairs of the σk with each other (if n is even), and forming

$$\sum \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} K(\sigma_{1}, \ldots, \sigma_{n})\, d\tau_{1} \cdots d\tau_{n/2} \tag{10.14}$$

the sum being taken over all such identifications. If n is odd,

$$\int_{0}^{1} d\alpha \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} K(\sigma_{1}, \ldots, \sigma_{n})\, dx(\sigma_{1}, \alpha) \cdots dx(\sigma_{n}, \alpha) = 0 \tag{10.15}$$
Another important theorem concerning these stochastic integrals is that if 𝔉{g(t)} is a functional of the function g(t), such that 𝔉{x(t, α)} is a function belonging to L in α and depending only on the differences x(t2, α) − x(t1, α), then for each t1 for almost all values of α

$$\lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \mathfrak{F}\{x(t + t_{1}, \alpha)\}\, dt = \int_{0}^{1} \mathfrak{F}\{x(t_{1}, \beta)\}\, d\beta \tag{10.16}$$
This is the ergodic theorem of Birkhoff, and has been proved by the author6 and others.
It has been established in the Acta Mathematica paper already mentioned that if U is a real unitary transformation of the function K(t),
$$\int_{-\infty}^{\infty} \left[U K(t)\right] dx(t, \alpha) = \int_{-\infty}^{\infty} K(t)\, dx(t, \beta) \tag{10.17}$$

where β differs from α only by a measure-preserving transformation of the interval (0, 1) into itself.
Now let K(t) belong to L², and let

$$q(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} K(t)\, e^{-i\omega t}\, dt \tag{10.18}$$
in the Plancherel7 sense. Let us examine the real function
$$f(t, \alpha) = \int_{-\infty}^{\infty} K(t + \tau)\, dx(\tau, \alpha) \tag{10.19}$$
which represents the response of a linear transducer to a Brownian input. This will have the autocorrelation
$$\lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} f(t + \tau, \alpha)\, f(t, \alpha)\, dt \tag{10.20}$$
and this, by the ergodic theorem, will have for almost all values of α the value
$$\int_{-\infty}^{\infty} K(t + \tau)\, K(t)\, dt \tag{10.21}$$
The spectrum will then almost always be
$$g(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\omega \tau}\, d\tau \int_{-\infty}^{\infty} K(t + \tau)\, K(t)\, dt = |q(\omega)|^{2} \tag{10.22}$$
This is, however, the true spectrum. The sampled autocorrelation over the averaging time A (in our case 2700 seconds) will be
$$\frac{1}{A} \int_{0}^{A} f(t + \tau, \alpha)\, f(t, \alpha)\, dt \tag{10.23}$$
The resulting sampled spectrum will then have the mean value

$$\int_{0}^{1} d\alpha\; \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\omega \tau}\, d\tau\; \frac{1}{A} \int_{0}^{A} f(t + \tau, \alpha)\, f(t, \alpha)\, dt = g(\omega) \tag{10.24}$$

That is, the sampled spectrum and the true spectrum will have the same mean value.
For many purposes, we are interested in the approximate spectrum, in which the integration of τ is carried out only over (0, B), where B is 20 seconds in the particular case we have already exhibited. Let us remember that f(t) is real, and that the autocorrelation is a symmetrical function. Therefore, we can replace integration from 0 to B by integration from −B to B:
$$S_{A}(\omega) = \frac{1}{2\pi} \int_{-B}^{B} e^{-i\omega \tau}\, d\tau\; \frac{1}{A} \int_{0}^{A} f(t + \tau, \alpha)\, f(t, \alpha)\, dt \tag{10.25}$$
This will have as its mean
$$\int_{-\infty}^{\infty} g(u)\, \frac{\sin B(\omega - u)}{\pi(\omega - u)}\, du \tag{10.26}$$
The square of the approximate spectrum taken over (−B, B) will be the square of S_A(ω), which will have as its mean, approximately, when A is large compared with B,

$$\left[\int_{-\infty}^{\infty} g(u)\, \frac{\sin B(\omega - u)}{\pi(\omega - u)}\, du\right]^{2} + \frac{2}{A} \int_{-\infty}^{\infty} [g(u)]^{2}\, \frac{\sin^{2} B(\omega - u)}{\pi(\omega - u)^{2}}\, du \tag{10.27}$$
It is well known that, if m is used to express a mean,
$$m\{[x - m\{x\}]^{2}\} = m\{x^{2}\} - [m\{x\}]^{2} \tag{10.28}$$
Hence the root-mean-square error of the approximate sampled spectrum will be equal to
$$\left[\frac{2}{A} \int_{-\infty}^{\infty} [g(u)]^{2}\, \frac{\sin^{2} B(\omega - u)}{\pi(\omega - u)^{2}}\, du\right]^{1/2} \tag{10.29}$$
Now,

$$\int_{-\infty}^{\infty} \frac{\sin^{2} B(\omega - u)}{\pi(\omega - u)^{2}}\, du = B \tag{10.30}$$
Thus

$$\frac{2}{A} \int_{-\infty}^{\infty} [g(u)]^{2}\, \frac{\sin^{2} B(\omega - u)}{\pi(\omega - u)^{2}}\, du \tag{10.31}$$

is 2B/A multiplied by a running weighted average of the square of g(ω), the weight being concentrated in a range of the order 1/B about ω. In case the quantity averaged is nearly constant over this small range, which is here a reasonable assumption, we shall obtain as an approximate dominant of the root-mean-square error at any point of the spectrum

$$g(\omega) \sqrt{\frac{2B}{A}} \tag{10.32}$$
Let us notice that if the approximate sampled spectrum has its maximum at u = 10, its value there will be
$$\int_{-\infty}^{\infty} g(u)\, \frac{\sin B(10 - u)}{\pi(10 - u)}\, du \tag{10.33}$$
which for smooth q(ω) will not be far from |q(10)|2. The root-mean-square error of the spectrum referred to this as a unit of measurement will be
$$g(10) \sqrt{\frac{2B}{A}} \bigg/ \int_{-\infty}^{\infty} g(u)\, \frac{\sin B(10 - u)}{\pi(10 - u)}\, du \tag{10.34}$$
and hence no greater than
$$\sqrt{\frac{2B}{A}} \tag{10.35}$$
In the case we have considered, this will be
$$\sqrt{\frac{2 \times 20}{2700}} \approx 0.12 \tag{10.36}$$
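The order of magnitude in Eqs. 10.35 and 10.36 can be checked by a Monte Carlo experiment. The sketch below uses white Gaussian noise purely for convenience, and its sampling rate and number of trials are arbitrary choices of mine; it verifies only that the relative root-mean-square fluctuation of a spectrum computed from lags out to B seconds of an A-second record is of the order of the square root of 2B/A.

```python
import numpy as np

# Monte Carlo check: with an A-second record and lags out to B seconds,
# the relative rms fluctuation of the smoothed spectrum should be of the
# order sqrt(2B/A). White noise and the constants here are illustrative.
rng = np.random.default_rng(1)
A, B, dt = 2700.0, 20.0, 0.25          # seconds; dt coarse to keep it quick
n, lags = int(A / dt), int(B / dt)
nu = np.arange(0.1, 1.8, 0.1)          # test frequencies, cycles per second
tau = np.arange(lags) * dt

def one_spectrum():
    f = rng.normal(0.0, 1.0, n)
    C = np.array([np.mean(f[k:] * f[:n - k]) for k in range(lags)])
    # Eq. 10.25 style: cosine transform of the sampled autocorrelation.
    # Overall scale factors are dropped; they cancel in the ratio below.
    return np.array([dt * (C[0] + 2.0 * np.sum(C[1:] * np.cos(2 * np.pi * v * tau[1:])))
                     for v in nu])

runs = np.array([one_spectrum() for _ in range(50)])
rel_rms = (runs.std(axis=0) / runs.mean(axis=0)).mean()
print(rel_rms, np.sqrt(2 * B / A))     # both come out near 0.12
```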
If we assume then that the dip phenomenon is real, or even that the sudden fall-off which takes place in our curve at a frequency of about 9.05 cycles per second is real, it is worthwhile asking several physiological questions concerning it. The three chief questions concern the physiological function of these phenomena which we have observed, the physiological mechanism by which they are produced, and the possible application which can be made of these observations in medicine.
Note that a sharp frequency line is equivalent to an accurate clock. As the brain is in some sense a control and computation apparatus, it is natural to ask whether other forms of control and computation apparatus use clocks. In fact most of them do. Clocks are employed in such apparatus for the purpose of gating. All such apparatus must combine a large number of impulses into single impulses. If these impulses are carried by merely switching a circuit on, the timing of the impulses is of small importance and no gating is needed. However, the consequence of this method of carrying impulses is that an entire circuit is occupied until such time as the message is turned off; and this involves putting a large part of the apparatus out of action for an indefinite period. It is thus desirable in a computing or control apparatus that the messages be carried by a combined on-and-off signal. This immediately releases the apparatus for further use. In order for this to take place, the messages must be stored so that they can be released simultaneously, and combined while they are still on the machine. For this a gating is needed, and this gating can be conveniently carried out by the use of a clock.
It is well known that, at least in the case of the longer nerve fibers, nerve impulses are carried by peaks whose form is independent of the manner in which they are produced. The combination of these peaks is a function of the synaptic mechanism. In these synapses, a number of incoming fibers are linked to an outgoing fiber. When the proper combination of incoming fibers fires within a very short interval of time, the outgoing fiber fires. In this combination, the effect of the incoming fibers in certain cases is additive, so that if more than a certain number fire, a threshold is reached which permits the outgoing fiber to fire. In other cases some of the incoming fibers have an inhibitory action, absolutely preventing the firing, or at any rate increasing the threshold for the other fibers. In either case, a short combination period is essential, and if the incoming messages do not lie within this short period, they do not combine. It is therefore necessary to have some sort of gating mechanism to permit the incoming messages to arrive substantially simultaneously. Otherwise the synapse will fail to act as a combining mechanism.8
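The combining mechanism just described can be caricatured in a few lines. The model below is a toy, not physiology: the weights, threshold, and gate width are invented numbers, and the gating window is simply slid over the arrival times.

```python
import numpy as np

def synapse_fires(arrival_times, weights, threshold, gate_window):
    """Toy model of the combining mechanism: impulses contribute
    (negatively, for inhibitory fibers) only if they fall within one
    short gating window; the outgoing fiber fires when the summed
    effect inside the window reaches the threshold. The numbers used
    below are illustrative, not physiological constants."""
    arrival_times = np.asarray(arrival_times, dtype=float)
    weights = np.asarray(weights, dtype=float)
    for t0 in arrival_times:                       # slide the window along
        inside = np.abs(arrival_times - t0) <= gate_window / 2
        if weights[inside].sum() >= threshold:
            return True
    return False

# Three excitatory impulses within 2 ms combine; spread over 30 ms they do not.
print(synapse_fires([0.000, 0.001, 0.002], [1, 1, 1], 3, 0.004))   # True
print(synapse_fires([0.000, 0.015, 0.030], [1, 1, 1], 3, 0.004))   # False
```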
It is desirable, however, to have further evidence that this gating actually takes place. Here some work of Professor Donald B. Lindsley of the psychology department of the University of California at Los Angeles is relevant. He has made a study of reaction times for visual signals. As is well known, when a visual signal arrives, the muscular activity which it stimulates does not occur at once, but after a certain delay. Professor Lindsley has shown that this delay is not constant, but seems to consist of three parts. One of these parts is of constant length, whereas the other two appear to be uniformly distributed over about 1/10 second. It is as if the central nervous system could pick up incoming impulses only every 1/10 second, and as if the outgoing impulses to the muscles could arrive from the central nervous system only every 1/10 second. This is experimental evidence of a gating; and the association of this gating with 1/10 second, which is the approximate period of the central alpha rhythm of the brain, is very probably not fortuitous.
So much for the function of the central alpha rhythm. Now the question arises concerning the mechanism producing this rhythm. Here we must bring up the fact that the alpha rhythm can be driven by flicker. If a light is flickered into the eye at intervals with a period near 1/10 second, the alpha rhythm of the brain is modified until it has a strong component of the same period as the flicker. Unquestionably the flicker produces an electrical flicker in the retina, and almost certainly in the central nervous system.
There is, however, some direct evidence that a purely electrical flicker may produce an effect similar to that of the visual flicker. This experiment has been carried out in Germany. A room was made with a conducting floor and an insulated conducting metal plate suspended from the ceiling. Subjects were placed in this room, and the floor and the ceiling were connected to a generator producing an alternating electrical potential, which may have been at a frequency near 10 cycles per second. The effect experienced by the subjects was decidedly disturbing, in much the same manner as the effect of a similar visual flicker is disturbing.
It will, of course, be necessary for these experiments to be repeated under more controlled conditions, and for the simultaneous electroencephalogram of the subjects to be taken. However, as far as the experiments go, there is an indication that the same effect as that of the visual flicker may be generated by an electrical flicker produced by electrostatic induction.
It is important to observe that if the frequency of an oscillator can be changed by impulses of a different frequency, the mechanism must be non-linear. A linear mechanism acting on an oscillation of a given frequency can produce only an oscillation of the same frequency, generally with some change of phase and amplitude. This is not true for non-linear mechanisms, which may produce oscillations at frequencies which are sums and differences, of various orders, of the frequency of the oscillator and the frequency of the imposed disturbance. It is quite possible for such a mechanism to displace a frequency; and in the case which we have considered, this displacement will be of the nature of an attraction. It is not too improbable that this attraction will be a long-time or secular phenomenon, and that for short times this system will remain approximately linear.
Consider the possibility that the brain contains a number of oscillators of frequencies of nearly 10 per second, and that within limitations these frequencies can be attracted to one another. Under such circumstances, the frequencies are likely to be pulled together into one or more little clumps, at least in certain regions of the spectrum. The frequencies that are pulled into these clumps will have to be pulled away from somewhere, thus causing gaps in the spectrum, where the power is lower than that which we should otherwise expect. That such a phenomenon may actually take place in the generation of brain waves for the individual whose autocorrelation is shown in Fig. 9 is suggested by the sharp drop in the power for frequencies above 9.0 cycles per second. This could not easily have been discovered with the low resolving powers of harmonic analysis used by earlier writers.9
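The attraction of frequencies postulated here was later formalized in models of coupled oscillators, of which Kuramoto's is the best known. The sketch below is such a model, not Wiener's; with coupling strong enough, oscillators whose natural frequencies are scattered around 10 cycles per second pull into a single clump, depleting the edges of the original distribution, which is just the gap phenomenon suggested by the spectrum. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Kuramoto model: d(theta_i)/dt = w_i + K * r * sin(psi - theta_i),
# where r * exp(i * psi) is the mean field of all the phases.
N, K, dt = 50, 4.0, 0.001                      # illustrative values
w = 2 * np.pi * rng.normal(10.0, 0.2, N)       # natural frequencies near 10 c/s
theta = rng.uniform(0.0, 2 * np.pi, N)

def step(theta):
    mf = np.mean(np.exp(1j * theta))           # order parameter r * e^{i psi}
    return theta + dt * (w + K * np.abs(mf) * np.sin(np.angle(mf) - theta))

for _ in range(50_000):                        # let transients die out (~50 s)
    theta = step(theta)

theta0, T = theta.copy(), 10.0
for _ in range(int(T / dt)):                   # measure mean rates over 10 s
    theta = step(theta)
eff = (theta - theta0) / (2 * np.pi * T)       # effective frequencies, c/s

# Most entries now share one value; the tails of the original
# distribution are depleted, leaving a gap next to the clump.
print(np.round(np.sort(eff), 3))
```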
In order that this account of the origin of brain waves should be tenable, we must examine the brain for the existence and nature of the oscillators postulated. Professor Rosenblith of M.I.T. has informed me of the existence of a phenomenon known as the after-discharge.10 When a flash of light is delivered to the eyes, the potentials of the cerebral cortex which can be correlated with the flash do not return immediately to zero, but go through a sequence of positive and negative phases before they die out. The pattern of this potential can be subjected to a harmonic analysis and is found to have a large amount of power in the neighborhood of 10 cycles. As far as this goes, it is at least not contradictory to the theory of brain wave self-organization that we have given here. The pulling together of these short-time oscillations into a continuing oscillation has been observed in other bodily rhythms, as for example the approximately diurnal rhythm which is observed in many living beings.11 This rhythm is capable of being pulled into the 24-hour rhythm of day and night by the changes in the external environment. Biologically it is not important whether the natural rhythm of living beings is precisely a 24-hour rhythm, provided it is capable of being attracted into the 24-hour rhythm by the external environment.
An interesting experiment which may throw light on the validity of my hypothesis concerning brain waves could quite possibly be made by the study of fireflies or of other animals such as crickets or frogs which are capable of emitting detectable visual or auditory impulses and also capable of receiving these impulses. It has often been supposed that the fireflies in a tree flash in unison, and this apparent phenomenon has been put down to a human optical illusion. I have heard it stated that in the case of some of the fireflies of Southeastern Asia this phenomenon is so marked that it can scarcely be put down to illusion. Now the firefly has a double action. On the one hand it is an emitter of more or less periodical impulses, and on the other hand it possesses receptors for these impulses. Could not the same supposed phenomenon of the pulling together of frequencies take place? For this work, accurate records of the flashings are necessary which are good enough to subject to an accurate harmonic analysis. Moreover, the fireflies should be subjected to periodic light, as for example from a flashing neon tube, and we should determine whether this has a tendency to pull them into frequency with itself. If this should be the case, we should try to obtain an accurate record of these spontaneous flashes to subject to an autocorrelation analysis similar to that which we have made in the case of the brain waves. Without daring to pronounce on the outcome of experiments which have not been made, this line of research strikes me as promising and not too difficult.
The phenomenon of the attraction of frequencies also occurs in certain non-living situations. Consider a number of electrical alternators with their frequencies controlled by governors attached to the prime movers. These governors hold the frequencies in comparatively narrow regions. Suppose the outputs of the generators to be combined in parallel on busbars from which the current goes out to the external load, which will in general be subject to more or less random fluctuations due to the turning on and off of lights and the like. In order to avoid the human problems of switching which occur in the old-fashioned sort of central station, we shall suppose the switching on and off of the generators to be automatic. When a generator is brought to a speed and phase near enough to those of the other generators of the system, an automatic device will connect it to the busbars; and if by some chance it should depart too far from the proper frequency and phase, a similar device will automatically switch it off. In such a system, a generator which tends to run too fast, and thus to have too high a frequency, takes a part of the load which is greater than its normal share, whereas a generator which is running too slow takes a less than normal part of the load. The result is that there is an attraction between the frequencies of the generators. The total generating system acts as if it possessed a virtual governor, more accurate than the governors of the individual generators, and constituted by the set of these governors together with the mutual electrical interaction of the generators. It is to this that the accurate frequency regulation of electrical generating systems is at least in part due, and it is this which makes possible the use of electrical clocks of high accuracy.
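A toy swing-equation model of two governed alternators on common busbars shows the same attraction. All constants below are invented for illustration; the essential features are only that the governor torque droops with speed and that the machine running ahead in phase takes the greater share of the load.

```python
import numpy as np

# Two governed alternators on common busbars (toy swing-equation model).
M, k_droop, b = 1.0, 2.0, 5.0            # inertia, governor droop, coupling
w_nom = 2 * np.pi * 60.0                 # nominal speed, rad/s
w_set = w_nom + np.array([0.5, -0.5])    # the two governors slightly mistuned
delta = np.array([0.0, 0.3])             # rotor phase angles, rad
w = w_set.copy()
dt = 0.001

for _ in range(200_000):
    # Synchronizing power: the machine ahead in phase takes more load.
    p_sync = b * np.sin(delta - delta[::-1])
    # Governor torque: droops as the machine runs above its set speed.
    p_gov = -k_droop * (w - w_set)
    w = w + dt * (p_gov - p_sync) / M
    delta = delta + dt * (w - w_nom)     # phases relative to nominal rotation

# Both machines settle to a single common frequency between the set points.
print(w / (2 * np.pi))
```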
I therefore suggest that the output of such systems be studied both experimentally and theoretically in a manner parallel to that in which we have studied the brain waves.
Historically it is interesting that in the early days of alternating-current engineering, attempts were made to connect generators of the same constant-voltage type used in modern generating systems in series rather than in parallel. It was found that the interaction of the individual generators in frequency was a repulsion rather than an attraction. The result was that such systems were impossibly unstable unless the rotating parts of the individual generators were connected rigidly by a common shaft or by gearing. On the other hand, the parallel busbar connection of generators proved to have an intrinsic stability which made it possible to unite generators at different stations into a single self-contained system. To use a biological analogy, the parallel system had a better homeostasis than the series system and therefore survived, while the series system eliminated itself by natural selection.
We thus see that a non-linear interaction causing the attraction of frequency can generate a self-organizing system, as it does, for example, in the case of the brain waves we have discussed and in the case of the a-c network. This possibility of self-organization is by no means limited to the very low frequency of these two phenomena. Consider self-organizing systems at the frequency level, say, of infrared light or radar spectra.
As we have stated before, one of the prime problems of biology is the way in which the capital substances constituting genes or viruses, or possibly specific substances producing cancer, reproduce themselves out of materials devoid of this specificity, such as a mixture of amino and nucleic acids. The usual explanation given is that one molecule of these substances acts as a template according to which the constituent smaller molecules lay themselves down and unite into a similar macromolecule. This is largely a figure of speech and is merely another way of describing the fundamental phenomenon of life, which is that other macromolecules are formed in the image of the existing macromolecules. However this process occurs, it is a dynamic process and involves forces or their equivalent. An entirely possible way of describing such forces is that the active bearer of the specificity of a molecule may lie in the frequency pattern of its molecular radiation, an important part of which may lie in the infrared range of electromagnetic frequencies or even lower. It may be that specific virus substances under some circumstances emit infrared oscillations which have the power of favoring the formation of other molecules of the virus from an indifferent magma of amino acids and nucleic acids. It is quite possible that this phenomenon may be regarded as a sort of attractive interaction of frequency. As this whole matter is still sub judice, with the details not even formulated, I forbear to be more specific. The obvious way of investigating this is to study the absorption and emission spectra of a massive quantity of virus material, such as the crystals of the tobacco mosaic virus, and then to observe the effects of light of these frequencies on the production of more virus from existing virus in the proper nutrient material. When I speak of absorption spectra, I am talking of a phenomenon which is almost certain to exist; and as to emission spectra, we have something of the sort in the phenomenon of fluorescence.
Any such research will involve a highly accurate method for the detailed examination of spectra in the presence of what would ordinarily be considered excessive amounts of light of a continuous spectrum. We have already seen that we are faced with a similar problem in the microanalysis of brain waves, and that the mathematics of interferometer spectrography is essentially the same as that which we have undertaken here. I then make the definite suggestion that the full power of this method be explored in the study of molecular spectra, and in particular in the study of such spectra of viruses, genes, and cancer. It is premature to predict the entire value of these methods both in pure biological research and in medicine, but I have great hopes that they may be proved to be of the utmost value in both fields.