Chapter 6
Physical and Biological Applications
6.1. Introduction
In this chapter we give examples of models with emphasis on the physical and biological aspects of modeling.
Some of the earliest applications of mathematical models have occurred in the physical sciences. It is by reference to such applications that Eugene Wigner, the well-known physicist, has spoken of the “unreasonable effectiveness of mathematics in the natural sciences.” Much of that mathematics has a hard core of calculus. As the applications are extended to the biological, social, and behavioral sciences, there is an attempt to use the same tools, but they do not yet seem to be as effective in the analysis of these subjects as they have been in the natural sciences.
6.2. Natural Sciences
We shall consider examples from a number of areas in the natural sciences.
Example 1.
Models in Astronomy—The historical importance of information
One area of science in which the use of mathematical models has been essential is the field of astronomy and its cousins astrophysics and cosmology. The problem with astronomy is that observations are necessarily indirect. We must analyze the visual light, radio waves, X-rays, and γ-rays which were emitted from celestial objects millions of years ago because we are subject to the enormous time lags produced by the snail-like pace with which light crawls across the cosmos.
Let us take a case in point. The presence of various elements in the atmosphere of a star induces the absorption of corresponding wavelengths from the light which the star radiates, and this can be observed by the existence of dark bands at these specified wavelengths in the spectrum of the star. Similarly, such lines are observed in the spectra of entire galaxies.
The red shift (an astronomical, not political, phenomenon) is simply that all these lines occur slightly more toward the red end of the spectrum than one would expect. (The location of these lines depends on the amount of energy involved in the transition of an electron from one quantum state to another and so is precisely determined.) Furthermore, it appears that the farther away the object observed, the more pronounced its red shift.
There are several explanations which help account for this phenomenon. One holds that the entire universe is expanding and that the red shift is nothing more than an optical Doppler effect. The galaxies are visualized as lying on the three-dimensional “surface” of the universe like points on the surface of an expanding balloon. (The Doppler effect is the rising in pitch of an approaching train whistle or police siren and the analogous falling in pitch when the locomotive or police car is receding.) Other suggestions include relativistic aspects and also the existence and density of intergalactic dust and gas.
Obviously, the astronomer's task is very difficult. In fact, given essentially more variables than equations, how does he get anywhere? The technique followed by a successful astronomer is really just that of a good detective (or, for that matter, a good bridge player). He guesses a lot and then tries to confirm or refute his hypotheses.
Analysis of the exact causes of the red shift is very important to cosmology—the theory of the nature of the universe. Astronomers hope to be able to announce a winner in the race between the “big-bang,” “steady-state,” and “pulsating-universe” theories by studying the red shift of quasars, which are intensely bright and incredibly distant celestial objects scattered fairly evenly throughout the known universe. It might also be possible to determine what the shape and size of the universe is by such observations.
In 1914 the American astronomer Vesto Melvin Slipher presented before the American Astronomical Society at Evanston, Illinois, a paper with slides clearly showing the rapid movement of galaxies as indicated by the red shift. In 1916 the Dutch astronomer Willem de Sitter found a solution to Einstein's general relativity equations that predicted an expanding universe in which the galaxies moved rapidly away from each other. A similar solution was later discovered by the Russian mathematician Alexander Friedmann by finding an error in Einstein's algebra! (Einstein had divided by zero.) In 1929 Edwin Hubble formulated, as a result of experimental work at the Mount Wilson Observatory, the law named after him: “The farther away a galaxy is, the faster it moves.”
In 1965 Arno Penzias and Robert Wilson of the Bell Laboratories discovered that the earth is bathed in a faint glow of radiation coming from every direction in the heavens.
These discoveries have tended to confirm the big-bang theory that the universe started from one dense mass and spread outward by a cosmic explosion (or creative act), 20 billion years ago, first forming a white-hot fireball. Its radiation would never have disappeared, and that is the measurable radiation discovered at the Bell Labs. Most of the laws of nature may have meaning only in the new expanded universe.
In physics, gravitation, motion, light, space, and time have so far served as basic entities to explain observed mechanical and relativistic occurrences within natural law. However, the interaction of elementary particles has needed a new unifying master theory to combine the four forces observed to operate in the background. They are gravitation and electromagnetism, with an unlimited range of influence, and two other forces which cannot be perceived directly because of a range of influence limited to atomic nuclei. One is the strong force which binds together the protons and neutrons (and their constituent quarks) in the nucleus. The other is the weak force responsible for the decay of certain particles.
Among the most interesting events in recent geological history are the glaciations which have periodically inundated large portions of the earth in a torrent of ice. Many explanations have been advanced. Some scientists ascribe glacial epochs to variations in the composition of the earth's atmosphere, i.e., more or less carbon dioxide or dust. (This explanation, whether or not true in the past, may well apply to the future of an increasingly polluted planet.)
Other speculations focus on more astronomical causes. Periodic sunspot activity is known to affect average terrestrial temperature, decreasing it by about 2°F. Other variations in solar activity could account for more significant temperature fluctuation.
Finally, we can investigate the influences of the earth's orbit and rotation, the movement of the earth's axis, which precesses to form a cone over a period of about 26,000 years, and variations in the eccentricity of the orbit. Using the Stefan–Boltzmann formula
L1/L2 = (T1/T2)^4,
where L1 and L2 denote the amounts of heat received and T1 and T2 represent the corresponding surface temperatures in degrees centigrade above absolute zero, we can compute the temperature of the earth subject to astronomical fluctuations. In fact, calculations of periods of minimum terrestrial temperature, using only these astronomical factors, have corresponded very well with periods of glaciation.
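The fourth-power scaling makes such temperature calculations easy to sketch. For instance, a 1% drop in the heat received lowers the equilibrium surface temperature by only about a quarter of one percent; the baseline of 288 K used below is an assumed round value for the earth, not a figure from the text.

```python
# Stefan-Boltzmann scaling: L1/L2 = (T1/T2)**4, so T2 = T1 * (L2/L1)**0.25.
# The baseline 288 K (about 15 degrees C) is an assumed round value for the earth.
T1 = 288.0          # present surface temperature, kelvin
L2_over_L1 = 0.99   # suppose 1% less heat is received

T2 = T1 * L2_over_L1 ** 0.25
print(round(T1 - T2, 2))  # temperature drop in kelvin
```

A change of well under one kelvin for a 1% change in received heat shows why the astronomical fluctuations must act over long periods to produce glaciation.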
Note, however, that some geological input is still necessary, for there is evidence that the present sequence of glaciations does not extend back indefinitely. Geologists explain this by pointing out that only during mountainous stages of the earth's history (and we are still in one now) are the conditions right for formation of glaciers during increasingly cold periods.
Our next example is from the field of chemistry.
Example 2.
Markov Chains and Chemical Processes
The following example shows how matrices and Markov chains can be used to study first-order chemical processes. Consider a hypothetical reaction A → B and assume that at any time during the reaction the probability that a molecule A changes to a molecule B is 0.1. The probability that a molecule A will remain unchanged during the same period of time is 0.9. Assuming that the reverse reaction B → A is not allowed, the probabilities describing the transitions of the system may be represented by the matrix
P = | 0.9  0.1 |
    |  0    1  |
Suppose that the initial state is A = 1 and B = 0; then the state after a single transition will be (0.9, 0.1), and the state after a second transition will be (0.9, 0.1)P, or (0.81, 0.19), and so on until we reach the t-th transition. This can be generalized by replacing the probability 0.1 by k and allowing t, the number of transitions, to represent the number of units of time. The probable state of the system after t transitions is given by
((1 − k)^t, 1 − (1 − k)^t).
For a system of N molecules A, the fraction of molecules that remain like A after t transitions is
NA/N = (1 − k)^t.
If k is small and t is large, we can approximate this by
(1 − k)^t ≈ e^(−kt),
from which we have NA/N = e^(−kt). This is a well-known equation for a first-order process in which during a time of length t0 the probability of transition from A to B is kt0.
The same approach can be used in reactions involving more than two types of molecules or in reversible reactions.
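The arithmetic of the example is easy to check numerically. The sketch below uses the k = 0.1 of the example and an illustrative t = 20 transitions, computes the state by matrix powers, and compares the exact fraction (1 − k)^t with the first-order approximation e^(−kt):

```python
import numpy as np

# Transition matrix for the hypothetical reaction A -> B with k = 0.1
# (rows and columns ordered A, B); the reverse reaction B -> A is not allowed.
k = 0.1
P = np.array([[1 - k, k],
              [0.0,   1.0]])

state = np.array([1.0, 0.0])  # initial state: every molecule is A
t = 20                        # number of transitions (illustrative)
state_t = state @ np.linalg.matrix_power(P, t)

frac_A_exact = (1 - k) ** t      # fraction remaining as A
frac_A_approx = np.exp(-k * t)   # first-order rate-law approximation

print(state_t, frac_A_exact, frac_A_approx)
```

The first component of the state vector agrees exactly with (1 − k)^t, and the exponential approximation is already close for these values.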
6.3. Oxygen and Blood Circulation
Not too long ago, engineers began to look at the human body as a machine, and the science of biomedical engineering was born. Because of their novelty and the imagination used in their formulation, we shall give a number of models developed in this area. Several of these examples have been developed for the field of robotics (the design of machines capable of carrying out human functions).
As an example of a machine which functions in vivo, we might cite the pacemaker; the artificial kidney, by contrast, is much too large for implantation. Both machines perform vital bodily functions for people whose bodies need assistance at these tasks. Indications are certainly that the present trend will continue. For example, one goal of current research is to design a machine which will monitor the blood–sugar level in diabetics and automatically administer an appropriate dose of insulin. (In fact, the machine has already been developed and successfully tested on animals, but it is not yet small enough to be implanted or worn by a diabetic.) The ultimate in such machines would be the “medikit” of Murray Leinster or the “autodoc” of Larry Niven, devices which will be familiar to science-fiction devotees.
We now give some simple examples.
Example 1.
Diffusion in the Body
The process by which a gas or liquid spreads throughout another medium is called diffusion. We give two direct applications of the principles of diffusion of gases and liquids in the body; the first application, by Danziger and Elmergreen, develops a simple model of the endocrine system; the second application, by Defares and Sneddon, analyzes the oxygen debt in the body after exercise.
Danziger and Elmergreen consider a system of n components whose concentrations, x1, …, xn, in the body are functions of time. This leads to the system of equations
dxi/dt = Qi − λi xi,   i = 1, …, n,
where λi is the loss rate per unit concentration of component i and Qi is the production rate of xi. This production rate may be approximated by
Qi = Ai0 + Σj Aij xj,
where Ai0 is zero or a positive constant denoting independent production of xi, the Aij are sensitivity constants which may be zero for no effect, positive for production stimulation, or negative for production inhibition, and Aii = 0.
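A minimal numerical sketch of such a system, with two components and made-up coefficients (none of them drawn from real endocrine data), shows the concentrations settling to the steady state where Qi = λi xi:

```python
import numpy as np

# Euler integration of dx_i/dt = Q_i - lam_i * x_i with
# Q_i = A0[i] + sum_j A[i, j] * x_j  (and A[i, i] = 0).
# All coefficients below are illustrative, not from any real endocrine data.
lam = np.array([0.5, 0.3])          # loss rates
A0 = np.array([1.0, 0.0])           # independent production
A = np.array([[0.0, -0.2],          # component 2 inhibits component 1
              [0.4,  0.0]])         # component 1 stimulates component 2

x = np.zeros(2)
dt, steps = 0.01, 10000
for _ in range(steps):
    Q = A0 + A @ x
    x = x + dt * (Q - lam * x)

print(x)  # near the steady state where production balances loss
```

At the fixed point the production rates exactly balance the loss terms, which is the check performed below.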
Defares and Sneddon explain how the measurement of oxygen debt is used as a heart function test. During strenuous exercise the body can withstand an oxygen debt x (the muscles are able to work without an oxygen supply θ), but in the recovery stage oxygen is needed to replenish the energy stores used up during exercise. They assume that the oxygen debt is proportional to the work W which is done, i.e., x = αW, and also that
dx/dt = α dW/dt − dθ/dt,
i.e., the oxygen debt also depends upon the oxygen supply. Finally, we assume that the extra oxygen uptake per second (at the lungs) is proportional to the oxygen debt existing at any instant, dθ/dt = kx, to obtain
dx/dt + kx = αP(t),
where P(t) = dW/dt is the rate of working. Various functional forms may be chosen for P(t) in order to solve for x; this allows dθ/dt = f(t) to be calculated. f(t) can be measured to provide a check on the model.
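Taking the combined equation in the form dx/dt = αP(t) − kx, with made-up values of α and k and a work rate P(t) that is constant during exercise and zero afterward, a simple integration shows the debt building up and then decaying exponentially in recovery:

```python
import numpy as np

# Illustrative integration of the oxygen-debt equation dx/dt = alpha*P(t) - k*x.
# alpha, k, and the work-rate schedule P(t) are all made-up values.
alpha, k = 1.0, 0.1

def P(t):
    """Rate of working: constant effort for 60 s of exercise, then rest."""
    return 5.0 if t < 60 else 0.0

dt, T = 0.01, 300
x = 0.0
xs = []
for i in range(int(T / dt)):
    t = i * dt
    x += dt * (alpha * P(t) - k * x)
    xs.append(x)

# In recovery the debt decays exponentially, so the measurable extra
# uptake d(theta)/dt = k*x decays like exp(-k*t) as well.
print(max(xs), xs[-1])
```

The peak debt approaches the steady value αP/k reached during exercise, and the recovery tail is the exponential decline that is actually measured at the lungs.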
We now describe some physical models of the lung.
*Example 2.
Models of the Lung
The lung functions in a variety of ways. It acts as a bellows pumping air in and out of the body, but it also includes a membrane which permits the osmosis of oxygen into the blood. Different models of the lung may reflect or emphasize these distinct but interrelated functions.
The model developed by Collins, Kilpper, and Jenkins is based on a physical analog of a piston and springs and is used to analyze the mechanical factor in respiration.
They created a physical analog of the lung-thorax system as shown in Fig. 6.1. They explain that the gravitational and active muscular forces on the thorax give rise to a pressure Px. These forces are equivalent to an external pressure acting inward on the thorax in addition to the atmospheric pressure P0. Acting outward is the pressure Pp in the fluid-lubricated pleural space. Since the algebraic sums of these forces are balanced by the stress due to the elastic property of the thorax, it makes sense to write, after some reflection,
V = EVT + CT(Pp − P0 − Px),
where V is the lung alveolar volume, EVT is the equilibrium volume the lungs would assume if the elastic forces of the thorax were the only forces acting, and CT is the compliance of the thorax.
A similar equation describes the equilibrium of forces on the lungs:
FIG. 6.1. Single-compartment model of a lung-thorax system: (a) lung volume V; (b) piston represents pleural wall of lungs; (c) springs represent lung compliance CL; (d) lower airway resistance R1 varies with lung volume; (e) intermediate and upper airway resistance R2 may vary with pressure; (f) fluid-filled pleural space; (g) springs represent compliance of thorax CT; (h) piston represents pleural wall of thorax; (i) force of shaft corresponds to muscular and gravitational force on thorax Px.
V = EVL + CL(P − Pp),
where P is the alveolar air pressure in the lungs, EVL is the equilibrium volume the lungs would assume if the elastic force of the lungs were the only force acting, and CL is the compliance of the lungs.
The air flow from the lungs is accompanied by a pressure gradient along the bronchial airways. Consideration of the total pressure drop across a network of tubular airways gives rise to
P − Pm = (R1 + R2) dQ/dt,
where R1, R2 are resistance parameters, Pm is the pressure at the mouth, and dQ/dt is the volume flow rate of air from the lungs measured at mouth pressure.
Finally, consideration of the compressible nature of air using the ideal gas law and assuming isothermal conditions leads to the equation
Pm dQ/dt = −d(PV)/dt.
The above equations form a system of differential equations which constitute the basic description of the lung model.
We now look at the flow of blood as a physical system.
Example 3.
Blood Flow
The flow of blood in the body can be likened to the flow of liquid in a system of pipes. We can develop some surprisingly accurate a priori bounds for the radius of the aorta.
The turbulence may be measured by the Reynolds number R, which should not exceed the value 1100. Let
R = Vrδ/n,     (6.1)
a measure of turbulence in the aorta, where V is the mean velocity of blood flow, r is the radius of the aorta, δ is the blood density, and n is the viscosity of blood.
If C is the average blood flow in cm³ sec⁻¹, then
C = πr²V.
From (6.1) we obtain for the critical (or minimal) radius
1100 = Cδ/(πr*n),
or
r* = Cδ/(1100πn).
As an example, for a dog with Cδ = 40 g sec⁻¹ and n = 0.0269 g cm⁻¹ sec⁻¹, r* = 40/(1100π(0.0269)) = 0.43 cm, while the actual value is r = 0.5 cm.
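The critical-radius calculation takes one line to verify; the values below are those quoted for the dog:

```python
import math

# Critical aortic radius from the Reynolds-number bound R = C*delta/(pi*r*n) <= 1100.
# C*delta and n are the values quoted in the text for the dog.
R_crit = 1100
C_delta = 40.0      # C * delta, g/sec
n = 0.0269          # blood viscosity, g cm^-1 sec^-1

r_star = C_delta / (R_crit * math.pi * n)
print(round(r_star, 2))  # about 0.43 cm, against an actual radius of 0.5 cm
```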
*6.4. Medical Application
We give a simple medical application concerning X-rays.
Example.
Constructing Three-dimensional X-ray Pictures
X-ray photographs are two-dimensional, and from one photograph one cannot infer the relative depths of the objects shown. Suppose we take photographs from many different angles; is it possible to reconstruct reliably the X-ray density of any point in the body we are photographing? Although the solution to this involves mathematics which is not quite trivial, it will be seen that the mathematics is itself of practical importance. (This example was communicated to Saaty by Peter Bunemann.)
First of all consider a parallel X-ray beam running parallel to the x-axis penetrating a body whose X-ray density is given by the function ρ(x, y, z). The energy absorbed in a small thickness dx of the body gives us
dI = −ρI dx,
where I is the intensity of the beam. Therefore
∂I/∂x = −ρI.
If we let R(x, y, z) = log I(x, y, z) we have
∂R/∂x = −ρ
and
R(x2, y, z) − R(x1, y, z) = −∫ ρ dx,
the integral being taken from x1 to x2 along the beam.
For convenience, we drop the z coordinate.
FIG. 6.2.
Figure 6.2 shows a parallel beam now inclined at θ to the x-axis, penetrating a body of finite extent, and falling on a photographic plate which, since we have dropped the z coordinate, is to be thought of as one-dimensional. If the initial log intensity is R0 and if the log intensity at position m on the plate is given by R(m, θ), then from the preceding analysis,
f(m, θ) = R0 − R(m, θ) = ∫ ρ(m cos θ − s sin θ, m sin θ + s cos θ) ds,
the integral running over all s. We can take an infinite integral because we have assumed that the body is of finite extent, i.e., ρ vanishes outside some region. The function f, as defined, can easily be computed from our knowledge of the initial intensity and from the optical density of the developed plate.
Now we compute the Fourier transform of f:
F(μ, θ) = ∫ f(m, θ) e^(−iμm) dm = ∫∫ ρ(x, y) e^(−iμ(x cos θ + y sin θ)) dx dy.
The importance of this last expression may not be immediately apparent. It is the two-dimensional Fourier transform of the density ρ, measured at position (μ cos θ, μ sin θ) of Fourier transform space. Since it is possible to invert Fourier transforms, we can compute the original X-ray density if we can measure f for all values of m and θ. It might be thought that there are less sophisticated methods of computing the density; and indeed there are other methods involving convolution integrals. However, the Fourier transform method is of practical importance, as Fourier transforms can be very rapidly computed by digital methods. In practice one would make discrete approximations to the various functions. It would also be more efficient to use an array of X-ray sensors rather than a large number of photographic plates.
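The discrete analog of this projection-slice relation can be checked directly: for θ = 0, the one-dimensional Fourier transform of the projection equals a central slice of the two-dimensional transform. A small numerical sketch (the density array is random test data, standing in for a real ρ):

```python
import numpy as np

# Discrete check of the projection-slice relation at theta = 0: the 1-D
# Fourier transform of the projection of rho along one axis equals the
# corresponding central slice of the 2-D Fourier transform of rho.
rng = np.random.default_rng(0)
rho = rng.random((64, 64))          # made-up test density

projection = rho.sum(axis=0)            # line integrals along the beam (axis 0)
slice_1d = np.fft.fft(projection)       # 1-D transform of the projection
central_slice = np.fft.fft2(rho)[0, :]  # zero-frequency row of the 2-D transform

print(np.allclose(slice_1d, central_slice))  # True
```

In a practical reconstruction one measures such slices for many angles θ and inverts the assembled two-dimensional transform, which is exactly why the speed of digital Fourier methods matters.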
*6.5. Muscular Control
We now give some simple examples of muscular control.
Example 1.
Muscular Movement
Although the man-machine analogy has not yet enabled us to predict or explain human behavior, it has helped in understanding the working of the body.
Nubar and Contini consider the dynamics of the human body in motion and at rest to derive a model of the body based upon theoretical mechanics. Considering the movement of the body in two dimensions, they apply the classic equations of dynamics (Newton's laws) to the segments of the body to obtain a system containing the following time-dependent unknowns:
(a) the coordinates of some definite point in the body, relative to some fixed axes, to which all other points may be referred;
(b) the orientations (angles) of all body segments, relative to the fixed axes;
(c) the moments at the ends of the segments at the joints; and
(d) reactions at all support points, but one.
It is assumed that the following quantities are given:
(e) the applied forces (weights);
(f) the physical characteristics of the segments (length, mass, center of gravity);
(g) the initial values of the unknowns and their first derivatives.
Since the moment at a joint is proportional to the muscle tension, we can define muscular effort as a function of the product of a joint moment and its duration. The simplest expression is CM∆t, where M is the moment at the joint, ∆t the time duration, and C a numerical factor. Because the joint moment may be negative (in which case CM∆t would be negative), the expression CM²∆t is adopted as a measure of muscular effort at a joint.
Assume that an individual will move (or adjust his position) in such a way as to reduce his total muscular effort to a minimum, consistent with the constraints. Thus, he seeks to minimize the effort
E = Σ CM²∆t,
the sum extending over all joints and over the duration of the motion, subject to the equations of motion.
To illustrate the operation of the principle of minimum effort in removing the indeterminacy of the equations, let us consider a simple example of a human being standing at rest with one foot on the ground (Fig. 6.3). The model consists of five rigid parts: arms, legs, and trunk. The physical characteristics of the segments (length l1, mass m1, moment of inertia about the center of gravity I1, moment of inertia about one end I1a) are known. The location of the center of gravity of every segment is also known to be at a distance ki·li from a joint, where ki is a numerical factor. The system, fixed at point A, leaves the primary unknowns as the five orientations, α1, …, α5, with respect to the vertical and the four independent joint moments, M1B, M2B, M4D, M5D. These nine unknowns are underlined in the diagrams.
The simplest forms of the moment equations are found by considering that every segment is in equilibrium under the effect of inertial forces and inertial moments, plus the forces and moments applied at its ends as reactions (d'Alembert principle). Minimization of E subject to the equations of motion produces the following solution:
FIG. 6.3.
where s0 is a constant. For the case of an individual 5 ft 9 in. in height, weighing 160 lb, the solution is shown in Fig. 6.4.
*Example 2.
Optimal Gaits
There are also models to determine optimal step-size or gait. We present two models: the first, by Smith, determines the optimal gait of an animal (human) traveling fast on level ground; the second, by Rashevsky, studies the optimal step-sizes for a human walking on level ground and uphill.
Smith states that when traveling fast (running, galloping, trotting), an animal (human) spends part of the time with all legs off the ground and part with one or more legs on the ground. The path of an animal's center of gravity is shown in Fig. 6.5.
FIG. 6.4.
It has been verified experimentally that a simple relation holds among d, h, and b (shown in the figure), which would be exact if the upward acceleration were uniform during the stepping phase.
FIG. 6.5.
The criterion chosen to determine the optimal gait is the minimization of the power output. The total work done in a single stride is
W = Wf + Ws,
where Wf is the work done in raising the center of gravity and Ws is the work done in accelerating the legs.
The time which elapses between leaving the point B and the highest point of the floating phase is b/2V, where V is the velocity, and hence
h = gb²/8V².
Thus
Wf = mgh = mg²b²/8V².
The work done in accelerating the legs can be determined by replacing the mass of the whole limb by an equivalent mass m′ at the foot; Ws is then proportional to m′V².
The work done per unit time (since the time to complete a stride is (a + b)/V) is then given by
P = (Wf + Ws)V/(a + b).
Now, if b is replaced by ja, where j can be regarded as a measure of the jumpiness of the gait and L represents a linear dimension of the animal in question (e.g., height, length of a limb), then m and m′ are proportional to L³ and a is proportional to L. Substituting these relations and minimizing the power output with respect to j shows that 1 + j is proportional to V²/L, and hence j increases with V and decreases with L.
Some representative results are j = 0 for an elephant, j = 0.3 for a horse, and j = 1 for a greyhound. However, j does not always change as rapidly with L as the above examples suggest: it seems that there are additional criteria determining the optimum besides the minimization of power output.
Rashevsky considers a slightly different approach. He first defines the following variables:
s = the length of the step,
n = the number of steps per unit time,
v = velocity of walking, defined by v = ns,
∆ = the distance that the center of gravity is lifted,
M = mass of the body,
g = acceleration constant.
The power loss in walking n steps per unit time is then given by
P1 = Mg∆n.
He argues that the quantity ∆ is determined by a combination of the length of the step and the length of the legs. Considering the legs as rigid, the difference ∆ between the highest and the lowest positions of the center of gravity is
∆ = l(1 − cos θ),
where l is the length of the legs and θ is the angle between the position of the leg and the vertical. However, if the position of the legs is approximated as shown in Fig. 6.6, then sin θ = s/2l,
FIG. 6.6.
giving
cos θ = (1 − s²/4l²)^(1/2)
and
∆ = l[1 − (1 − s²/4l²)^(1/2)].
For steps which are not too large, s/l < 1/2 and s²/4l² < 1/16, so we may approximate
∆ ≈ s²/8l.
The power loss due to the imparting of kinetic energy to the swinging extremity is found to be an expression involving I, where I is the moment of inertia of the extremity with respect to the hip joint.
Thus, the total power loss is the sum of these two contributions, where m is the mass of the limb.
The criterion chosen to determine the optimal step-size s0 is to minimize the total power loss. The result is a value of s0 which depends upon v.
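Rashevsky's minimization can be sketched numerically. The gravitational term Mgvs/8l follows from n = v/s and ∆ ≈ s²/8l; the kinetic term is taken below, purely for illustration, to have the form cv³/s, and every coefficient is made up. With a total of the form P(s) = as + b/s, the minimum falls at s0 = √(b/a), which grows with v:

```python
import numpy as np

# Sketch of the optimal step-size calculation. The gravitational term
# M*g*v*s/(8*l) follows from n = v/s and Delta = s**2/(8*l); the kinetic
# term c*v**3/s is an assumed illustrative form, and all numbers are made up.
M, g, l, c = 70.0, 9.8, 0.9, 2.0
v = 1.5  # walking speed, m/s

s = np.linspace(0.1, 2.0, 10000)          # candidate step lengths, m
P_total = M * g * v * s / (8 * l) + c * v**3 / s
s0_numeric = s[np.argmin(P_total)]

# For P = a*s + b/s the minimum is at s0 = sqrt(b/a), proportional to v here.
a = M * g * v / (8 * l)
b = c * v**3
s0_exact = np.sqrt(b / a)
print(s0_numeric, s0_exact)
```

Whatever the exact form of the kinetic term, the competition between a term growing with s and one shrinking with s is what produces an interior optimum that shifts with walking speed.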
*Example 3.
Best Velocity in a Running Race (control theory)
How does an athlete run a race? That is, how does he husband his efforts in order to cover the distance D in the shortest possible time? Intuitively, it is clear that the athlete's optimal strategy is dependent upon D. When D is small, the best course is to run “flat out” since the runner will not exhaust all of his energies. If D is large, on the other hand, he must “pace” himself so that by the end of the race, but not before, he will have expended all of his strength.
Keller shows one way to model this highly practical physical situation. We want to advise the runner how to vary his speed v(t) during a race of distance D in order to minimize the time T required. D, T, and v are related by the equation
D = ∫_0^T v dt.    (6.2)
The velocity v satisfies
dv/dt + v/τ = f(t),    (6.3)
where v/τ is the resistance per unit mass (τ is a given constant) and f(t) is the thrust per unit mass.
Initially
v(0) = 0.    (6.4)
The force f(t) is controlled by the runner but cannot exceed a constant F, so we have
0 ≤ f(t) ≤ F.    (6.5)
If E(t) denotes the mass of oxygen (per unit mass) available to the athlete's muscles, then since oxygen is metabolized by the body to produce energy and energy is used at the rate fv by the athlete, we have
dE/dt = σ − fv,    (6.6)
where σ is the rate at which oxygen is supplied by breathing and circulation. Initially,
E(0) = E0,    (6.7)
and, since E(t) is never negative,
E(t) ≥ 0.    (6.8)
The athlete's problem can now be described as follows: find v(t), f(t), and E(t) satisfying (6.3) through (6.8) so that T, defined by (6.2), is minimized. The physiological parameters τ, F, σ, and E0 and the distance D are specified in advance.
We combine (6.3) and (6.5) to give
dv/dt + v/τ ≤ F.    (6.9)
We can also use (6.3) to eliminate f from (6.6). Integrating the resultant equation and using the initial condition (6.7) yields
E(t) = E0 + σt − v²(t)/2 − (1/τ)∫_0^t v² dt.    (6.10)
Combining this with (6.8) gives
E0 + σt − v²(t)/2 − (1/τ)∫_0^t v² dt ≥ 0.    (6.11)
Since we have expressed f and E in terms of v, our original problem can be expressed as follows: Find v(t) satisfying (6.4), (6.9), and (6.11) so that T is minimized.
Now minimizing T subject to fixed D is equivalent to maximizing D with T given, which is the formulation we shall consider.
To begin with, we can assume f(0) = F. (The rate fv of doing work is 0 initially since v(0) = 0, and so f(0) can be taken as large as possible.) Hence, f(t) = F for 0 < t < t1, where 0 < t1 < T and t1 is as large as possible.
It is clear intuitively (and confirmed by analysis) that for T not too large, t1 = T. The critical value for T is shown to be T = Tc, where Tc is the unique positive root of the equation
E0 + σTc = v²(Tc)/2 + (1/τ)∫_0^Tc v² dt,  with v(t) = Fτ(1 − e^(−t/τ)).
If T > Tc, then we must consider the “other” end of the interval 0 < t < T. Just as we assumed f(0) = F, we can also assume E(T) = 0 (for otherwise he could have run harder). Thus, E(t) = 0 for t2 < t < T, where t1 < t2 < T and t2 is as small as possible.
It is very interesting to note that when D is very much larger than Dc, where Dc is the distance corresponding to Tc, then we may assume t1 = 0 and t2 = T. One can verify the validity of this assumption analytically. On a heuristic basis, this just means that one should try to run steadily over a long distance.
Using the preceding formulae and the results of actual races, Keller has calculated the values of τ and F and, using them, the values of σ and E0. Dc is shown to be 291 m, a not unreasonable value. In Fig. 6.7 the curve represents the calculated value for average velocity D/T for D < 2000 m; the points are derived from world records.
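For a race short enough that f(t) = F throughout, (6.3) integrates to v(t) = Fτ(1 − e^(−t/τ)), and integrating once more gives the distance covered. The sketch below uses τ ≈ 0.892 sec and F ≈ 12.2 m/sec², roughly the values Keller reports; treat them as assumptions here:

```python
import math

# Sprint phase of Keller's model: with f(t) = F throughout, (6.3) gives
# v(t) = F*tau*(1 - exp(-t/tau)); integrating yields the distance covered.
# tau and F are approximately the values Keller fitted to world records.
tau, F = 0.892, 12.2   # sec, m/sec^2

def distance(T):
    """Distance covered in a flat-out run lasting T seconds."""
    return F * tau * (T - tau * (1.0 - math.exp(-T / tau)))

print(distance(10.0))  # close to 100 m, as for a world-class sprinter
```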
A final caveat is in order here. From the fact that a model appears to fit the observed data, one should never allow oneself to conclude that the model must be correct. Only a failure to meet nature gives one the right to draw a conclusion, and then it must be to reject the model. While Keller's model fits the observed data quite well, his assumptions do not include a direct dependence of F on E(t), i.e., on t. Yet one can certainly argue that the maximum F available to an athlete does depend on the time.
FIG. 6.7.
In another paper dealing with the mathematics of athletics, Brearley has shown by a careful mathematical analysis that the sudden increment of over 2.5 ft in the new world's record for the long jump established at the 1968 Olympic Games in Mexico City could not be attributed to the decreased air resistance one encounters at higher altitudes, since an upper bound to the possible increment would have been 2.5 in.
6.6. Weight Control
One area which draws on both physics and biology is weight control.
Example.
Weight Control and Energy (Lloyd P. Smith)
Body weight is determined by the energy taken in and the energy ejected from the body as heat. The energy Ei taken in over a 24-hour period is transformed to other forms, stored as fat, or ejected as heat. The law of conservation of energy gives for ∆E, the energy stored per day:
∆E = Ei − I − E0,
where I is the energy required to sustain life and E0 is the average energy which leaves the body during a 24-hour period. A pound of weight is added whenever ∆E = 3000 Kcals (kilogram calories, popularly called calories; one kilocalorie is the amount of heat required to raise one kilogram of water one degree centigrade, or 1.8 degrees Fahrenheit, and it takes 3086 ft-lb of exercise to equal one kilocalorie of body energy). If the ideal weight in pounds is W0 and the weight is W, we have W − W0 = ∆E/3000. Now for each pound of weight the body needs an additional 14 Kcals per day to maintain it at body temperature and carry it around. Thus the daily rate of change of body weight is
dW/dt = [∆E − 14(W − W0)]/3000,
whose solution is given (in pounds) by
W − W0 = (∆E/14)(1 − e^(−14t/3000)).
If we assume that ∆E is constant from day to day, the equilibrium body weight is obtained by putting dW/dt = 0, as the weight W will no longer change with time. We have W − W0 = ∆E/14. Thus if ∆E = ±200, ±400, ±600, ±800 Kcals per day, W − W0 = 14.3, 28.6, 42.8, 57.1 lb gained or lost. The time required to gain or lose one pound is 15.5, 7.6, 5.1, and 3.8 days, respectively.
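These figures can be checked directly from the model (dE below stands for ∆E, in Kcals per day):

```python
import math

# Equilibrium weight change and time to gain (or lose) one pound, from the
# model dW/dt = (dE - 14*(W - W0))/3000, dE in Kcals per day.
def equilibrium_change(dE):
    """Equilibrium value of W - W0, in pounds."""
    return dE / 14.0

def days_to_one_pound(dE):
    """Solve (dE/14)*(1 - exp(-14*t/3000)) = 1 for t, in days."""
    return -3000.0 / 14.0 * math.log(1.0 - 14.0 / dE)

for dE in (200, 400, 600, 800):
    print(dE, round(equilibrium_change(dE), 1), round(days_to_one_pound(dE), 1))
```

The computed equilibrium changes and first-pound times agree with the values quoted above to within rounding.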
Now we study the factors which determine Ei, I, and E0. For a 25-year-old man weighing 150 lb, sleeping or lying down, surrounded by air at 68°F, I is on the average 70 Kcals per hour. For a woman weighing 128 lb, I is 62 Kcals per hour. This would be the absolute minimum for Ei. If we assume that I is a constant for a given individual, we are left with Ei − E0, which affects ∆E, to control: gain if positive, loss if negative. Now Ei could arise from the following kinds of gain: (1) energy in chemical form (food or drink) through the stomach (the most important); (2) heat energy absorbed from surroundings; (3) heat from hot food or drink; (4) sunbathing; (5) energy from mechanical work on the body, such as massage.
1. Chemical energy is converted to heat by the liver and involuntary muscles, the latter through tensing, relaxing, standing, shivering, or responding to emotion; the heat is distributed to the body through the blood stream, not by conduction, since the core temperature of the body, 98.6°F, is the same everywhere. The caloric content of most foods is known in the literature. Conversion processes are controlled by large protein molecules called enzymes, which maintain the internal environment of the body cells as constant as possible. Enzyme deficiency may require a higher value for Ei than I to maintain basic body energy needs. Different people's food consumption depends on the efficiency of their enzymes. An enzyme called Steapsin decreases the size of normal fat particles so they can pass through the intestinal wall.
2. Unless the temperature of the surroundings is very high, the body does not absorb heat from them; high ambient temperature instead tends to prevent the body from getting rid of its own heat.
3. Heat is delivered to the body if food or drink is warmer than 98.6°F, and is taken from the body in the opposite case, to raise the temperature of the liquid.
4. Although the contributions of sources 2 and 3 are negligible, that of solar energy is not. About 14.3 Kcals per minute are incident on 1 square meter of surface. The human body has 1.55 square meters of surface; if we expose half of it we obtain 11.17 Kcals per minute, or 670 Kcals per hour. Some of this would be reflected, but about 70% will be absorbed, i.e., 469 Kcals per hour, which is high compared with the 70 Kcals per hour needed for basic energy requirements. This heat must first be ejected before the body calls upon energy stored as fat. The figures here may seem high, but even at 40% of these values they indicate that obese people might well be advised to stay out of the sun.
5. The direct result of massage, which requires considerable mechanical energy, is negligible except for breaking down fat particles to make it easier for the body to oxidize them.
Decrease in weight is due to energy loss. There is loss from:
1. Increasing the temperature of ingested food. For example, a 12-ounce glass of ice water requires 13 Kcals to bring it up to body temperature, a reduction of 0.0043 lb; it would take 233 glasses of such water to lose one pound.
2. Loss of moisture exhaled at body temperature—a less-efficient mechanism than in 1.
3. Elimination of liquid and solid waste. Difficult to estimate because of the varying enzymatic activities of different people.
4. Energy required to stand or sit. Sewing or clerical work require 24 Kcals per hour, standing around, 92 Kcals per hour. The loss is not significant.
5. Doing significant mechanical work, such as exercising. Consider a 150-lb man walking at 2 miles per hour. Assume that the body is raised 1 in. on each stride of 2-ft length. At 2 miles per hour he takes (2 × 5280)/2 = 5280 strides per hour, so the work of raising the body is

150 lb × (1/12) ft × 5280 strides = 66,000 ft-lb per hour ≈ 21.4 Kcals per hour.

The measured value of energy expended for this purpose is 115 Kcals. Thus the body generates 93.6 Kcals of heat to perform 21.4 Kcals of mechanical work. If the same man runs at 8 miles per hour, with a 2-ft stride and raising the body 2 in., the result is 171 Kcals per hour, but the measured energy expenditure is 730 Kcals, i.e., 559 Kcals of heat to perform 171 Kcals of mechanical work. The weight loss is 0.0072 lb in the first case and 0.057 lb in the second for the mechanical work, and 0.038 lb and 0.243 lb, respectively, from the heat loss: 5.3 and 4.3 times the mechanical work done. The lungs of a person at rest draw in and expel 500 cm³ per breath. Strenuous exercise raises this figure tenfold and increases the breathing rate, with considerable heat loss.
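The stride arithmetic is easy to reproduce. In the sketch below, the conversion factor of roughly 3087 ft-lb per Kcal is a standard physical value supplied here, not taken from the text:

```python
FT_LB_PER_KCAL = 3087.0  # 1 Kcal = 4184 J and 1 ft-lb is about 1.356 J

def lift_work_kcal_per_hour(weight_lb, speed_mph, stride_ft, rise_in):
    """Mechanical work per hour spent raising the body on each stride."""
    strides_per_hour = speed_mph * 5280 / stride_ft
    foot_pounds = weight_lb * (rise_in / 12.0) * strides_per_hour
    return foot_pounds / FT_LB_PER_KCAL

walk = lift_work_kcal_per_hour(150, 2, 2, 1)  # about 21.4 Kcal/hr
run = lift_work_kcal_per_hour(150, 8, 2, 2)   # about 171 Kcal/hr
```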
6. Loss of heat through the body surface, the most important means of energy loss. The layer just under the skin consists of adipose tissue (thicker in women) storing fat globules, and it serves as an insulator. Fat deposits are constantly being used and re-formed. Fat is stored in the tissues least influenced by muscular activity: the abdominal region, the waist, the neck, and the buttocks. (The hands act as an excellent medium of heat exchange; they get hot or cold as necessary.) If the skin temperature drops below 91.4°F or the body temperature below 98.6°F, the blood vessels in the surface fat contract, decreasing the heat flow through the fat layers to the surface, and heat is generated in the body by conversion of food or stored fat. Thus the surface temperature must be kept as near as possible to the body core temperature, and evaporative cooling is stimulated by the presence of water on the body surface. We estimate heat and weight losses both by heat conduction and by evaporative cooling.
First let us estimate the heat loss by conduction when the blood circulation keeps the adipose tissue 0.05 cm below the body surface at the body temperature of 37°C, with the skin surface at temperature T_s °C. The rate of heat loss per cm² of surface would be

Q = K(37 − T_s)/0.05 cal/cm² per sec,

where K is the heat conductivity of the thin layer of adipose tissue, taken as 0.00044 cal/cm per sec per °C. Taking the skin temperature as 20°C (68°F),

Q = 0.00044 × 17/0.05 ≈ 0.15 cal/cm² per sec ≈ 0.54 Kcals/cm² per hour.

If the blood vessels in the adipose tissue were completely contracted, so that heat had to be conducted through an adipose layer 2 cm thick,

Q = 0.00044 × 17/2 ≈ 0.0037 cal/cm² per sec ≈ 0.013 Kcals/cm² per hour,

which indicates the kind of control that the autonomic nervous system can exert on the magnitude of heat loss.
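A short sketch of the conduction estimate (function and variable names are ours; the conductivity value is the text's):

```python
def conductive_loss_kcal_per_cm2_hr(k, t_body_c, t_skin_c, thickness_cm):
    """Steady conduction through a fat layer: K * dT / d, converted to Kcal/hr."""
    cal_per_sec = k * (t_body_c - t_skin_c) / thickness_cm
    return cal_per_sec * 3600 / 1000.0

K = 0.00044  # cal/(cm * sec * degC), the text's value for adipose tissue

dilated = conductive_loss_kcal_per_cm2_hr(K, 37, 20, 0.05)    # vessels open, thin layer
contracted = conductive_loss_kcal_per_cm2_hr(K, 37, 20, 2.0)  # vessels contracted
```

The two results differ by the factor of 40 between the layer thicknesses, which is the autonomic control described above.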
As the following calculation indicates, the largest heat loss can be effected by evaporative cooling. To estimate it we use the formula for the number of molecules of water evaporating per cm² per sec,

N_e = nvR e^(−ε/kT),

where N_e is the number of water molecules that evaporate per sec per cm² when there is a layer of water (perspiration) on the skin at absolute temperature T in °K, which we take as the body temperature of 37°C, or 310°K; v is the average velocity of a water molecule in the liquid at 310°K, given by (1/2)Mv² = (3/2)kT; n is the number of water molecules per cm³ of liquid; and R is the ratio of the number of molecules hitting the surface of the water from the vapor phase that recondense to those that are reflected, taken for our purpose as R = 1. Here k is Boltzmann's constant, 1.372 × 10⁻¹⁶ ergs per degree, and ε is the energy in ergs that must be expended in removing a water molecule from the liquid state to the vapor state at 37°C, ε = 7.25 × 10⁻¹³ ergs. Using these values, N_e = 3.13 × 10²³ molecules/cm² per hour, or 9.35 g of water per hour per cm². This translates into the number of Kcals removed per hour from 1 cm² of body surface by evaporative cooling: H_e = 5.45 Kcals/cm² per hour. Comparing this figure with the 0.54 obtained for conductive heat loss, we see that heat loss by evaporative cooling can be about ten times more effective than loss by pure heat conduction.
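Substituting the stated constants (Boltzmann's constant 1.372 × 10⁻¹⁶ erg/K and 7.25 × 10⁻¹³ erg per molecule) reproduces the evaporation figure. Avogadro's number and the molecular weight of water are standard values supplied here:

```python
import math

N_AVOGADRO = 6.022e23
M_WATER = 18.0     # g/mol
K_B = 1.372e-16    # erg/K (text's value)
T = 310.0          # K, body temperature
EPS = 7.25e-13     # erg needed to move one molecule from liquid to vapor

n = N_AVOGADRO / M_WATER          # molecules per cm^3 of liquid water (density 1 g/cm^3)
m = M_WATER / N_AVOGADRO          # mass of one water molecule, g
v = math.sqrt(3 * K_B * T / m)    # from (1/2) m v^2 = (3/2) k T, in cm/sec
R = 1.0

ne_per_sec = n * v * R * math.exp(-EPS / (K_B * T))
ne_per_hour = ne_per_sec * 3600                       # about 3.1e23 molecules/cm^2/hr
grams_per_hour = ne_per_hour / N_AVOGADRO * M_WATER   # about 9.3 g/cm^2/hr
```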
From these principles we learn that to lose weight E_i must be less than I + E_0. Since E_i is mainly determined by the caloric value of ingested food and by absorption of the sun's radiation (sunbathing), these should be controlled. To make E_0 as high as possible, an effort should be made to keep the skin temperature as close as possible to the body core temperature and the outside temperature as low as can reasonably be managed.
From tables of E_i and E_0 it can be concluded that the value of ΔE can be influenced much more by one's exercise program than by the value of E_i, unless one insists on consuming large quantities of high-calorie foods such as fat meats, pies, rich cakes, roast chicken, cheeseburgers, pizza, and recreational beverages.

With this method of weight control, one knows precisely which factors to adjust and how effective each will be, with no need for pills, some of which have harmful side effects and are neither necessary nor desirable. One's body will also increase in physical fitness and become capable of greater accomplishment.
6.7. Cellular and Genetic Applications
We give some examples from the fields of cell biology and genetics. The next two examples show how a simple mathematical analysis of genotypes may be used to study how traits are distributed in a population.
Example 1.
The Hardy–Weinberg Law of Equilibrium
Let A and a be dominant and recessive genes, respectively, controlling some physiological trait. Suppose that A occurs with probability p in some population while a occurs with probability q = 1 − p. Since every individual inherits two genes for each (nonsex-linked) trait, the possible genotypes are AA, Aa, and aa. The Hardy–Weinberg formula, discovered independently in 1908 by G. H. Hardy, a British mathematician, and W. Weinberg, a German physician, states that in a large population where mating is random, at least with regard to the trait controlled by A and a, the distribution of genotypes after one generation is as follows: AA occurs with probability p², Aa with probability 2pq, and aa with probability q².
It is easy to derive this law from the elementary theory of probability. Let P(E) denote the probability of some event E. If F_A denotes the event that some individual's father contributes A, and M_A is the corresponding event for his mother, then F_A and M_A are independent events, since mating is assumed random with regard to A. Hence, P(F_A and M_A) = P(F_A)P(M_A) = p·p = p². Thus, the probability that some individual has genotype AA is p². Similarly, the probability of genotype aa is q². These genotypes, in which both genes are the same, are called homozygous.
A genotype in which both genes are different is called heterozygous.
If m denotes the probability that a genotype is homozygous and t the probability that it is heterozygous, then m + t = 1; that is, t = 1 − m. Now m = p² + q², so t = 1 − p² − q². Finally, note that 1 = 1² = (p + q)² = p² + 2pq + q², and therefore t = 1 − p² − q² = 2pq. But t is just the probability that Aa occurs. This may also be seen by noting that Aa can arise in two ways: with A inherited from the father and a from the mother, or vice versa.
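These genotype probabilities are easy to check by simulating one generation of random mating. A sketch (the allele frequency 0.7 and the sample size are illustrative):

```python
import random

def hw_generation(p, n, seed=1):
    """Simulate n offspring under random mating where allele A has frequency p."""
    rng = random.Random(seed)
    counts = {"AA": 0, "Aa": 0, "aa": 0}
    for _ in range(n):
        # each parent independently contributes A with probability p
        n_big_a = sum(rng.random() < p for _ in range(2))
        counts[("aa", "Aa", "AA")[n_big_a]] += 1
    return {genotype: c / n for genotype, c in counts.items()}

freqs = hw_generation(0.7, 200_000)
# expected frequencies: AA near 0.49, Aa near 0.42, aa near 0.09
```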
Note that, as usual, the Hardy–Weinberg model gives a necessary condition—in this case, for random mating. Observe also that mating may be random with regard to some factors (e.g., blood group) but highly selective with respect to others (for example, skin pigmentation). Hardy–Weinberg equilibrium can be used, in conjunction with gene frequency analyses, as a test for nonrandom mating; that is, if the distribution of genotypes differs significantly from their predicted frequencies, then mating is not random with respect to the factor under consideration.
Example 2.
Blood Groups
Following the last example, we can assume that the blood group of an individual is determined by the genes inherited from the two parents. The genes that determine the blood group are of three types, G_A, G_B, and g; G_A and G_B are dominant genes and g is the recessive gene. The blood groups O, A, B, and AB are determined as follows: group O by genotype gg, group A by G_AG_A or G_Ag, group B by G_BG_B or G_Bg, and group AB by G_AG_B.

What is the proportion of genes G_A, G_B, and g in the parent population? Let

p be the probability that an individual has gene G_A,
q be the probability that an individual has gene G_B,
r = 1 − p − q be the probability that an individual has gene g.

Assuming that individuals mate randomly, we can find the probability that an offspring has a particular blood group.
We need to find the “best” estimates of p, q, r. One way of defining “best” is that p*, q*, r* are the best estimates from a sample S if P(S | p*, q*, r*) ≥ P(S | p, q, r) for all p, q, r such that p + q + r = 1. Such an estimate is termed a maximum likelihood estimate. Now, if n is the sample size and n_1 individuals have blood group O, n_2 have group A, n_3 have group B, and n_4 have group AB, where n = n_1 + n_2 + n_3 + n_4, then L, the probability that this sample arises for given p, q, r, is

L = (r²)^n_1 (p² + 2pr)^n_2 (q² + 2qr)^n_3 (2pq)^n_4,

up to a multinomial factor that does not depend on p, q, r. We wish to maximize this subject to p + q + r = 1, e.g., using Lagrange multipliers. The mathematics is cumbersome and we omit the solution.
We now look at an example in chromosome mapping.
*Example 3.
Renewal Theory and Chromosome Mapping [Bailey]
Renewal processes can be used to explain the phenomenon of genetic linkage and chromosome mapping. The points of exchange which occur on a single chromosome strand during the appropriate stage of meiosis can be regarded as following a renewal process. The number of breakdowns up to time t is replaced by the number of points of exchange that occur along the strand, which is represented by a semi-infinite straight line whose origin corresponds to the chromosome's centromere. Assuming that the intervals between successive points of exchange are independently distributed with identical frequency function f(u), the Laplace transform X*(s) of X(t), the average number of points of exchange in the interval (0, t), is given by

X*(s) = f*(s)/{s[1 − f*(s)]},

where f*(s) is the Laplace transform of f. By putting f(u) = e^(−u) we obtain

f*(s) = 1/(1 + s)

and

X*(s) = 1/s², i.e., X(t) = t.

The recombination fraction y(t), which is the probability of an odd number of points of exchange in the interval (0, t), has Laplace transform

y*(s) = f*(s)/{s[1 + f*(s)]}.

Substituting f*(s) = 1/(1 + s) we get y*(s) = 1/[s(s + 2)], i.e., y(t) = (1/2)(1 − e^(−2t)), which is known as Haldane's formula.
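Haldane's formula can be checked by simulation: with f(u) = e^(−u) the points of exchange form a Poisson process of unit rate, so the number of exchanges in (0, t) is Poisson-distributed with mean t. A sketch:

```python
import math
import random

def recombination_fraction(t):
    """Haldane's formula: probability of an odd number of exchanges in (0, t)."""
    return 0.5 * (1 - math.exp(-2 * t))

def poisson_sample(lam, rng):
    """Knuth's method: multiply uniforms until the product drops below e^(-lam)."""
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

rng = random.Random(0)
t, trials = 0.5, 200_000
odd = sum(poisson_sample(t, rng) % 2 for _ in range(trials))
estimate = odd / trials   # should be close to (1 - e^(-1))/2, about 0.316
```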
Our next example considers the control of protein synthesis.
Example 4.
Protein Synthesis [J. M. Smith]
We discuss methods taken from the kinetics of chemical reactions used to analyze the control of protein synthesis. The simplest model for the control of protein synthesis is shown in Fig. 6.8.
FIG. 6.8.
The mRNA is made in the nucleus, its concentration at any moment being Y. At the ribosomes the “message” is translated and enzyme molecules are synthesized, their concentration at any moment being Z. This enzyme catalyzes the reaction from an inactive precursor, of concentration P, to a repressor molecule, of concentration M. The repressor molecule then reacts with the gene, so that when a repressor is attached to a gene no mRNA is made.
If the rate at which mRNA molecules are lost or destroyed is assumed proportional to their concentration, then
where a is the (constant) probability that any particular repressor molecule will become detached in a given time interval, and b, c, and k are constants.
Similarly,
where e and f are constants, and

where g and h are constants.
These equations can be simplified if one is interested only in Y and Z, since the precursor–repressor reaction reaches equilibrium much more rapidly than the others, i.e., dM/dt = 0. Thus, writing bg/h = 1,
The important question is whether these equations indicate a sustained oscillation or whether any disturbance is rapidly damped out. The equations can be solved to show that they describe either a damped oscillation or no oscillation, but if a slight modification is made to account for the time lapse while mRNA molecules travel from gene to the ribosome, then the control system is oscillatory.
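The chapter's own equations are not reproduced above, but the qualitative claim (damped oscillation for this simple loop) can be illustrated with a Goodwin-type system consistent with the verbal description, with M proportional to Z at quasi-equilibrium. The 1/(1 + Z) repression term and all parameter values below are assumptions made for this sketch, not the text's equations:

```python
def simulate(b=0.5, c=1.0, e=0.5, f=0.2, dt=0.01, t_end=300.0):
    """Euler integration of an assumed Goodwin-type loop:
    dY/dt = c/(1 + Z) - b*Y   (mRNA made under repression, destroyed at rate b)
    dZ/dt = e*Y - f*Z         (enzyme made from mRNA, destroyed at rate f)
    """
    y = z = 0.0
    zs = []
    for _ in range(int(round(t_end / dt))):
        dy = c / (1 + z) - b * y
        dz = e * y - f * z
        y += dt * dy
        z += dt * dz
        zs.append(z)
    return zs

zs = simulate()
# with this first-order repression term the oscillation is damped: Z settles
# at the fixed point z* solving 0.2 * z * (1 + z) = 1, about 1.79
```

Adding a transport delay between gene and ribosome, as the text notes, is what turns this damped behavior into a sustained oscillation.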
*6.8. Models Related to the Nervous System
Because of its highly complex nature and the difficulty of studying it directly, the nervous system makes an excellent subject for mathematical modeling. A leading authority in this field is Nicholas Rashevsky.
Example.
The Central Nervous System [Rashevsky]
Neurons are connected by synapses in nets. Nerve conduction is unidirectional for any neuron; it is chemical in nature and follows the laws of Boolean algebra. Functionally, a pathway is a group of axons between the brain and an organ in an “anatomically discernible tract.” Pathways from a sense organ to the brain are afferent; those from the brain to a motor functionary are efferent.
For a weak stimulus, the excitation energy E along a pathway is assumed to be approximately linear in the stimulus:

E = α(S − h),   (6.12)

where S is the stimulus intensity, h is the threshold of the “weakest link” in the pathway, and α is a constant. This is only an approximation, since E cannot increase indefinitely with S. A better approximation, (6.13), levels off for large S.
Some neurons form closed cycles connecting pathways, as shown in Fig. 6.9. Once stimulated, a cycle continues to fire indefinitely (reverberates) until chemical failure occurs at random. If ε is the number of excited cycles in a given pathway, then the failure rate may be expressed by

dε/dt = AE − aε,   (6.14)

since ε has a rate of increase proportional to E and a rate of decrease proportional to ε.

FIG. 6.9.
Cycles may also inhibit pathways on which they synapse. If j inhibitory cycles are excited against a signal of intensity E at the higher-order pathway, the net effect is governed by the difference E − j: if E > j, the higher-order pathway exhibits a net excitation, and vice versa.
The solution to (6.14) is

ε = (AE/a)(1 − e^(−at)),   (6.16)

plotted in Fig. 6.10, together with the extinction curve when the signal E is removed at time t_1. The extinction equation (for E removed) is seen to be

ε = ε(t_1)e^(−a(t − t_1)).

Similar results apply to j.

FIG. 6.10.
Now consider a simple reflex arc of purely excitatory pathways, one afferent (a.p.) and one efferent (e.p.), joined together. Let a stimulus of intensity S be an input to the a.p., resulting in an intensity E given, say, by E = α(S − h_1), with h_1 the threshold of the a.p. Let τ_A be the conduction time of the a.p. Then, if h_2 is the threshold of the e.p., efferent excitation begins when the afferent cycles reach ε = h_2. We combine this with (6.16) to find t_0, the time at which the e.p. becomes excited:

h_2 = (AE/a)(1 − e^(−at_0)),

or

t_0 = (1/a) ln[AE/(AE − ah_2)].

The total reaction time is thus

τ_R = τ_A + t_0 + τ_E,

where τ_E is the conduction time along the e.p. Substituting for E from (6.12), we obtain

τ_R = τ_A + τ_E + (1/a) ln{αA(S − h_1)/[αA(S − h_1) − ah_2]}.   (6.21)
We see that τ_R is imaginary for αA(S − h_1) < ah_2 (i.e., for AE/a < h_2), since the asymptotic value of ε is then still less than the threshold of the e.p. For αA(S − h_1) = ah_2 the reaction time is infinite; otherwise the reaction time decreases with increasing S. [For large S, (6.21) no longer applies, as E is more closely approximated by (6.13) than by (6.12).] Experimental data have been obtained which closely fit the theoretical results obtained from (6.21).
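The reaction-time formula can be verified by integrating dε/dt = AE − aε, the growth law described above (rate of increase proportional to E, rate of decrease proportional to ε), and observing when ε crosses the efferent threshold. A sketch with illustrative parameter values:

```python
import math

def t0_formula(A, E, a, h2):
    """Time for epsilon = (AE/a)(1 - e^(-at)) to reach h2; requires AE/a > h2."""
    return (1.0 / a) * math.log(A * E / (A * E - a * h2))

def t0_numeric(A, E, a, h2, dt=1e-5):
    """Euler integration of d(eps)/dt = AE - a*eps until eps crosses h2."""
    eps, t = 0.0, 0.0
    while eps < h2:
        eps += dt * (A * E - a * eps)
        t += dt
    return t

A, E, a, h2 = 1.0, 2.0, 0.5, 2.0   # illustrative values; AE/a = 4 exceeds h2
analytic = t0_formula(A, E, a, h2)  # equals 2 ln 2, about 1.386
numeric = t0_numeric(A, E, a, h2)
```

When AE/a falls below h2, the asymptote of ε never reaches the threshold and no crossing time exists, matching the “imaginary τ_R” case discussed above.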
6.9. Evolution
The method used to analyze the process of evolution is taken from the field of kinetics. In physical chemistry we view the progressive changes in a system comprising several chemical elements by enumerating the components, stating their character, and deriving an expression for the instantaneous state of the system in terms of significant parameters; this expression for the instantaneous state of the system is then used to determine its history. For example, if hydrogen, oxygen, and steam are put in a container of volume v
at a given pressure and temperature, then the change in the mass of the steam m
1
is given by
where m
2
, m
3
are the masses of the hydrogen and oxygen, respectively, and k
1
, k
2
are coefficients reflecting the temperature, pressure, and the reaction. Our interest in this equation lies in its general form
and it is this relation that is transplanted into the field of organic evolution.
The masses of the chemical components are replaced by the populations of the species, and the parameters in the equation represent the environmental conditions, e.g., climate and topography. In particular, one form of the general relationship is

dN_i/dt = N_i(e_i + Σ_s a_is N_s),

where N_i is the population of the ith species and a_is represents the interaction between species i and s. For a single species with a limited food supply, this equation becomes

dN/dt = N(e − aN).

If we let K = e/a, we obtain

dN/dt = aN(K − N),

which shows that the population N increases geometrically while the species is very rare but gradually levels off toward the equilibrium value K, the maximum population that can be maintained.
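This behavior (near-geometric growth when the species is rare, leveling off at K) is visible by comparing the closed-form logistic solution of dN/dt = aN(K − N) with a direct numerical integration; the parameter values are illustrative:

```python
import math

def logistic_exact(t, n0, a, K):
    """Closed-form solution of dN/dt = a*N*(K - N) with N(0) = n0."""
    return K / (1 + (K - n0) / n0 * math.exp(-a * K * t))

def logistic_euler(t_end, n0, a, K, dt=1e-4):
    """Simple Euler integration of the same equation."""
    n = n0
    for _ in range(int(t_end / dt)):
        n += dt * a * n * (K - n)
    return n

a, K, n0 = 0.01, 100.0, 1.0
exact = logistic_exact(10.0, n0, a, K)
approx = logistic_euler(10.0, n0, a, K)
```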
6.10. Social Biology
Social factors in biology consist of interspecies and intraspecies interactions, and these factors may often play an important role, e.g., infanticide in certain human societies. Predator–prey relations, in which one group depends upon another for its existence, are quite common and can easily be described mathematically.
We give a simple example.
Example.
Predator–Prey Systems [Bailey]
The predator–prey relationship, similar to that discussed in Chapter 3 but without a memory term, is closely illustrated by wolves, which for the most part prey on the “surplus crop” of deer, moose, or caribou. The wolves, like other creatures, eat to sustain themselves; since they must use less energy to pursue a meal than they gain from it, wolves tend to attack the weak, old, or sickly. The general fitness of the deer or caribou herd is thereby heightened, and its population is kept at a level for which there is an adequate food supply. In this way, predator and prey help each other. Should the deer population thin out, the wolves must decrease in number, as in fact happens. Campaigns that drove the American red wolf to near extinction created an opportunity for coyotes, which feed on the same prey, to multiply into a greater threat to ranchers than the wolves ever were.
Now for the analysis.
Let N_1(t) be the population of the prey and N_2(t) that of the predator at time t. If N_2 is small, N_1 increases, entailing an increase in N_2. This increase in N_2 is followed by a decrease in N_1, which causes starvation and hence a decrease in N_2. The number of encounters between the two species is proportional to N_1N_2, and in an encounter one species decreases while the other increases. Thus

dN_1/dt = aN_1 − bN_1N_2

and

dN_2/dt = −cN_2 + dN_1N_2,

where a, b, c, d are all positive. If we divide the first equation by the second, integrate, and then substitute, we obtain

N_1^c N_2^a e^(−dN_1 − bN_2) = C,

where C is the constant of integration. Expanding in series about the equilibrium point (c/d, a/b) and neglecting terms of higher order, we obtain ellipses that describe periodic variations in the prey population (e.g., a host population) and in the predators (e.g., parasites). These ellipses are given by

c(dN_1/c − 1)² + a(bN_2/a − 1)² = D,

where D is a constant. The period of oscillation near the equilibrium is given by 2π/(ac)^(1/2).
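The stated period can be checked by integrating dN_1/dt = aN_1 − bN_1N_2 and dN_2/dt = −cN_2 + dN_1N_2 near the equilibrium (c/d, a/b) and timing successive crossings. A sketch with illustrative coefficients, using fourth-order Runge–Kutta:

```python
import math

def rk4_step(f, state, dt):
    """One classical Runge-Kutta step for a first-order system."""
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (p + 2 * q + 2 * r + w)
            for s, p, q, r, w in zip(state, k1, k2, k3, k4)]

a, b, c, d = 1.0, 0.02, 1.0, 0.01   # illustrative coefficients

def field(state):
    n1, n2 = state
    return [a * n1 - b * n1 * n2, -c * n2 + d * n1 * n2]

eq1, eq2 = c / d, a / b             # equilibrium point (100, 50)
state = [eq1 * 1.05, eq2]           # small displacement from equilibrium
dt, t = 0.001, 0.0
crossings, prev = [], state[0]
while len(crossings) < 2 and t < 50:
    state = rk4_step(field, state, dt)
    t += dt
    if prev < eq1 <= state[0]:      # upward crossing of N1 through eq1
        crossings.append(t)
    prev = state[0]

period = crossings[1] - crossings[0]   # should be near 2*pi/sqrt(a*c)
```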
In the next chapter we consider models in the social and behavioral fields.
Chapter 6—Problems
1.
Can you formulate a model of the lung different from the one described in this chapter?
2.
Evaluate the models of muscle control. Can you formulate others?
3.
How would the equation for blood flow be modified if the radius of the “pipe” is very small (e.g., capillaries)? Consider the effects on the walls.
4.
A cyclist needs to know how to position himself in order to take into account the wind, the curvature of the path, etc. Formulate a simple model and derive some rules for him.
5.
At what point is it physically possible for a baby to (a) stand, or (b) walk? Formulate some criteria for this.
6.
Formulate a model for mutations in cells, given that such mutation is an accidental fluctuation whose probability is a function of time. Suggest some appropriate forms for this function for given mutations, e.g., cancer cells.
7.
Develop a model of a bionic limb. What are desirable characteristics? What information do you need? Use simple dynamic principles for your first model and then produce a more sophisticated model. (Use the information on muscular control.)
8.
You have been asked to advise on stocking a river with fish. What information would you need about the surroundings and about the other animals in the area to help you decide on quantities of fish and the times for stocking? Derive a rough model for this. (Think about the predator–prey models.)
9.
Consider a population in a given country. What determines the size of the population and the number in different age groups? What information would you need for estimating the number of people over 70?
10.
How long would it take to die from starvation? What factors enter into this? [Hint: consider the weight-control problem.]
References
Bailey, Norman T. J., The Elements of Stochastic Processes with Applications to the Natural Sciences, Wiley, New York, 1964.
Brearley, M. N., The long-jump miracle of Mexico City, Math. Mag., Vol. 45, November 1972.
Collins, R. E., Kilpper, R. W., and Jenkins, D. E., A mathematical analysis of mechanical factors in the forced expiration, Bull. Math. Biophys., Vol. 29, 1967, pp. 737–745.
Crowther, R. A., DeRosier, D. J., and Klug, A., Proc. Roy. Soc. Lond. A, Vol. 317, 1970, p. 319.
Danziger, L. and Elmergreen, G. L., Mathematical models of endocrine systems, Bull. Math. Biophys., Vol. 19, 1957, pp. 9–18.
Defares, J. G. and Sneddon, I. N., An Introduction to the Mathematics of Medicine and Biology, Year Book Medical Publishers, Chicago, 1961.
Evans, J. W., Cantor, D. G., and Norman, J. R., The dead space in a compartmental lung model, Bull. Math. Biophys., Vol. 29, 1967, pp. 711–718.
Keller, Joseph B., Optimal velocity in a race, Am. Math. Monthly, May 1974.
Nubar, Y. and Contini, R., A minimal principle in biomechanics, Bull. Math. Biophys., Vol. 23, 1961, pp. 377–391.
Rashevsky, N., Mathematical Biology of Social Behavior, University of Chicago Press, Chicago, 1959.
Rashevsky, N., A note on energy expenditure in walking on level ground and uphill, Bull. Math. Biophys., Vol. 24, 1962, pp. 217–227.
Smith, J. Maynard, Mathematical Ideas in Biology, Cambridge University Press, New York, 1968.
Smith, Lloyd P., How to Regulate Your Weight Scientifically, published privately, 1980.
Spencer, R. P., A blood volume, heart weight relationship, J. Theoret. Biology, Vol. 17, 1969, pp. 441–446.
Stacy, R. W., Barth, D. S., and Chilton, A. B., A mathematical analysis of oxygen respiration in man, Bull. Math. Biophys., Vol. 16, 1954, pp. 1–14.
Yilmaz, H., Psychophysics and pattern interaction, in Wathen-Dunn, W. (ed.), Models for the Perception of Speech and Visual Form, MIT Press, Cambridge, Massachusetts, 1967.
Bibliography
Barnoon, Shlomo, and Harvey Wolfe, Measuring the Effectiveness of Medical Decisions: An Operations Research Approach, Charles C. Thomas, Springfield, Illinois, 1972.
Cogan, F. J., R. Z. Norman, J. G. Kemeny, J. L. Snell, and G. L. Thompson, Modern Mathematical Methods and Models, Vol. II, Mathematical Association of America, 1958.
Defares, J. G. and I. N. Sneddon, An Introduction to the Mathematics of Medicine and Biology, Year Book Medical Publishers, Chicago, 1961.
Ladany, S. P. and Machol, R. E. (eds.), Optimal Strategies in Sports (Studies in Management Science and Systems, Vol. 5), Elsevier, Amsterdam, 1977.
Lotka, Alfred J., Elements of Mathematical Biology, Dover, New York, 1956.
Rashevsky, N., Mathematical Biophysics, Vol. I, Dover, New York, 1960.
Rashevsky, N., Mathematical Principles in Biology and Their Applications, Charles C. Thomas, Springfield, Illinois, 1961.