Chapter 4
Optimization
4.1. Introduction
Some of the most widely used models are those of maximization and minimization, simply called optimization. The pursuit of an optimum is a special case of a broader pursuit of an extremum. This may be a property which satisfies the vanishing of the first derivative or Euler's equation in the calculus of variations. More recently, the solution of conflict problems—another area of optimization—has become a central preoccupation of game theorists. As illustrated in the previous chapter, stochastic properties may be introduced into the formulation of an optimization problem.
In practicing optimization, one needs to keep in mind the fact that calculus methods apply only to a small class of problems; the majority have to be solved using ingenuity and good logical thinking. In recent years there has been a great surge of effort to expand and unify optimization methods. The unification falls into two areas: functional analysis and Diophantine or integer optimization.
An optimization problem consists of two parts. The first part is that a function (frequently called an objective function) has to be maximized or minimized. For example, the function may be a cost function whose coefficients are unit costs and whose variables are various quantities, such as foods to be purchased. The second part involves a set of constraints given in the form of equations or inequalities; the constraints express limitations on the variables which must be satisfied. An optimization problem need not have constraints.
The objective function may be linear in all its variables. Similarly, the constraints may be all linear. In that case, we have a linear-programming problem. In general the variables are constrained to be nonnegative, but this is not necessary. If any variable anywhere in the problem appears in other than a linear form, the problem becomes a nonlinear problem.
The constraints of an optimization problem define a region, known as the feasible region, from which the optimum of the objective function is to be found. In a linear-programming problem the feasible region is a convex polyhedron and the optimum lies on the boundary, usually at a vertex of the polyhedron. If the problem is nonlinear, the optimum could be in the interior of the region.
The simplex process is an algorithm for solving a linear-programming problem. There are many algorithms for solving nonlinear programs. Dynamic programming is a successive iteration method of solving an optimization problem given in general functional form. In the calculus of variations, the objective function is usually an integral and the constraints may be algebraic or involve integrals and derivatives. In classical control theory, the constraints are differential equations representing a dynamic system. The objective function to be minimized here may be the time to achieve rendezvous by a rocket whose trajectory is described by the constraints.
We have discussed optimization problems in which there is a single objective function to be maximized or minimized. In many real-life problems there is a conflict of interest, and game theory provides the rationale for formulating and solving such problems. Even though game theory is illustrated in this chapter, most of the discussion on conflict resolution is postponed to the last chapter.
In addition to the specific references pertaining to the chapter, we have included a relatively comprehensive bibliography on optimization. We are grateful to our good friend, M. Z. Nashed, for his excellent contribution to the list.
4.2. Unconstrained Minima
Sometimes there are no constraints at all on the function to be optimized except perhaps for those implicit in its definition.
Example 1.
Optimal Branching of Arteries [Rosen]
What is the optimal angle at which a branching from an artery takes place in order to reach a given position of the body? The size of the angle is a function of the amount of work (measured by the resistance of the system—the greater the resistance, the greater the work) to be done to push a given volume of fluid through the body (Fig. 4.1). Thus, lower resistance means less work by the heart. Resistance varies directly with the total length of the system (the greater the distance, the greater the resistance) and inversely with the radii of the vessels. Thus, too early a branching as the artery leaves the heart implies a shorter system, but also implies a decrease in the overall radius of the system, since the cross-section is decreased by branching. Too late a branching (e.g., an angle perpendicular to the main vessel) implies a larger radius, but a larger system. The resistance R from the initial point of the main artery AB to the destination C is given by
R = R₁l₁ + R₂l₂, where Rᵢ is the resistance per unit length along the vessel whose length is lᵢ, i = 1, 2. If l₀, θ, and θ₀ are as shown in Fig. 4.1, then

R = R₁l₁ + R₂l₂ = R₁l₀(cot θ₀ − cot θ) + R₂l₀ cosec θ.

If r₁ is the radius of AD and r₂ is the radius of DC, then Poiseuille's law for fluid flow in rigid pipes gives R₁ = kr₁⁻⁴, R₂ = kr₂⁻⁴, and the expression for the resistance R becomes:

R = kl₀[r₁⁻⁴(cot θ₀ − cot θ) + r₂⁻⁴ cosec θ].
FIG. 4.1.
The minimum value of this expression with respect to θ occurs when

cos θ = R₁/R₂ = (r₂/r₁)⁴.

To bring the result closer to experimental observation, we include a factor concerning the minimum amount of work done to maintain the system. This work may be assumed to be proportional to the volume of the system which, in this case, is l₁πr₁² + l₂πr₂². If we use K as the constant of proportionality, the minimum angle is now given by

cos θ = (R₁ + Kπr₁²)/(R₂ + Kπr₂²).

As r₂ → r₁ the two results become the same.
Example 2.
A Location Problem
It is desired to find the location (u, v) of an electrical utility plant to supply the needs of users with locations at (xᵢ, yᵢ), i = 1, 2,…, N, and minimize the transmission losses. The square of the distance rᵢ, i = 1, 2,…, N, from the location is given by

rᵢ² = (u − xᵢ)² + (v − yᵢ)².

Let the concentration of users at location i be nᵢ and assume that the total loss function to be minimized takes the form

L = Σᵢ nᵢg(rᵢ),

where g(rᵢ) is the loss associated with the distance rᵢ.
The problem now is to find u and v to minimize L. We assume that the function g(r) has the form g(r) = kr², where k is a constant. (This is a reasonable approximation in the case of electric cables.) We substitute for g(rᵢ) and rᵢ in L and differentiate with respect to u and v. Equating these partial derivatives to zero and solving yields the physical center of gravity of the N locations:

u = Σᵢ nᵢxᵢ / Σᵢ nᵢ,  v = Σᵢ nᵢyᵢ / Σᵢ nᵢ.
Consider now what happens if we let

g(rᵢ) = sᵢrᵢ,

where sᵢ, i = 1, 2,…, N, is the amount of electricity used by the “average” user at location i. If we put nᵢsᵢ = wᵢ and interpret wᵢ as a weight, then

L = Σᵢ wᵢrᵢ

is the sum of products of weights and distances and may be regarded as the potential energy of a physical system. Recall that any physical system adjusts itself so as to minimize its potential energy. The equations resulting from differentiation can be solved as follows.
Form a plan of the area concerned on a wooden table, bore holes at the N locations, and place a series of weights proportional to the wᵢ on long strings of equal length passing through the corresponding holes. The strings are then tied together on the table. The system of strings and weights is disturbed to allow it to move to a position of equilibrium.
When it comes to rest, the position of the knot will indicate the best location of the utility plant.
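The string-and-weights experiment can be mirrored numerically. The sketch below (illustrative; the points and weights are invented) computes the closed-form weighted center of gravity for the quadratic loss g(r) = kr², and uses a Weiszfeld-type iteration as a stand-in for the physical analogue when the loss is the potential-energy form Σ wᵢrᵢ.

```python
import math

def plant_location_quadratic(points, weights):
    """Optimal (u, v) when g(r) = k*r^2: the weighted center of gravity."""
    W = sum(weights)
    u = sum(w * x for w, (x, y) in zip(weights, points)) / W
    v = sum(w * y for w, (x, y) in zip(weights, points)) / W
    return u, v

def plant_location_linear(points, weights, iters=200):
    """Weiszfeld iteration minimizing the potential energy sum w_i * r_i."""
    u, v = plant_location_quadratic(points, weights)  # start at the centroid
    for _ in range(iters):
        num_u = num_v = den = 0.0
        for w, (x, y) in zip(weights, points):
            r = math.hypot(u - x, v - y) or 1e-12  # guard division by zero
            num_u += w * x / r
            num_v += w * y / r
            den += w / r
        u, v = num_u / den, num_v / den
    return u, v
```

For the quadratic loss the two functions agree only by accident; in general the potential-energy optimum (the knot's resting point) differs from the center of gravity.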
*4.3. Constrained Problems
4.3.1. LAGRANGE MULTIPLIERS
One is often faced with the problem of finding the optimum while satisfying a number of conditions or constraints. Consider the following general problem: one wishes to find an extremum of a differentiable function f(x₁,…, xₙ) whose variables are subject to the m constraints

gᵢ(x₁,…, xₙ) = 0,  i = 1,…, m,

where the gᵢ are also differentiable.
Form the Lagrangian function

F(x₁,…, xₙ; λ₁,…, λₘ) = f(x₁,…, xₙ) + Σᵢ λᵢgᵢ(x₁,…, xₙ),

involving the Lagrange multipliers λ₁,…, λₘ. Then the necessary conditions for an unconstrained extremum of F (namely, the vanishing of the first partial derivatives) are also necessary conditions for a constrained extremum of f(x₁,…, xₙ), provided that the matrix of partial derivatives ∂gᵢ/∂xⱼ has rank m at the point in question. These necessary conditions are a system of m + n equations

∂F/∂xⱼ = 0,  j = 1,…, n;  gᵢ = 0,  i = 1,…, m,

which one can then (at least in theory) solve for the m + n unknowns x₁,…, xₙ; λ₁,…, λₘ. (One is really interested only in obtaining x₁,…, xₙ.)
Example 1.
Eat Some of Every Food
Let the rate of change of the utility of a quantity x with respect to x be inversely proportional to the amount of the quantity already available. Thus if u(x) is the utility of x, we have

du/dx = a/x,  a > 0;

i.e., the more we have, the smaller is the marginal utility obtained from it.
Consider now the utility function u(x₁,…, xₙ), where xᵢ, i = 1, 2,…, n, is the amount of food i. Suppose u(x₁,…, xₙ) = Σᵢ aᵢ log xᵢ, where aᵢ is the amount of pleasure obtained from the ith food. It is desired to maximize u(x₁,…, xₙ) subject to

Σᵢ Pᵢxᵢ = M,

where Pᵢ is the number of calories in a unit of food i and M is a bound on the total caloric intake.

The solution to this problem (using Lagrange multipliers) is given by

xᵢ = aᵢM / (Pᵢ Σⱼ aⱼ).

Thus, to maximize pleasure, one must eat some from each kind of food (assuming that each aᵢ > 0), contrary to certain dietary suggestions that some foods must be cut out completely.
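A minimal numerical sketch of this solution, assuming the logarithmic utility u = Σ aᵢ log xᵢ and the equality constraint Σ Pᵢxᵢ = M; the pleasure and calorie figures below are invented for illustration.

```python
import math

def optimal_diet(a, P, M):
    """Maximize sum a_i*log(x_i) subject to sum P_i*x_i = M.
    The Lagrange conditions a_i/x_i = lam*P_i give x_i = a_i*M/(P_i*sum(a)):
    every food with a_i > 0 gets a positive amount."""
    A = sum(a)
    return [ai * M / (Pi * A) for ai, Pi in zip(a, P)]

# Invented data: three foods, pleasures a, calories-per-unit P, budget M.
a, P, M = [2.0, 1.0, 1.0], [100.0, 200.0, 50.0], 2000.0
x = optimal_diet(a, P, M)
```

Perturbing the answer while keeping the calorie budget fixed can only lower the utility, which is a quick check that the stationary point is the maximum.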
Example 2.
Optimum Dimensions of a Multilevel City
A problem which arises in a three-dimensional (square prism) city planning design is concerned with finding the dimensions which would minimize the maximum possible travel time between two diametrically opposite farthest points, subject to a given floor area of the building. Each floor is assumed to have a square grid of streets. If we assume that the vertical elevator speed is α and the horizontal speed is β, then the problem is to minimize the time to travel the height h of the city and twice the side k of its square base. This leads to the expression for the time T,

T = h/α + 2k/β.

Since the total internal floor area A is given, we have

A = nk²,

where n is the number of levels above the ground. Assuming that c is the given height of each floor, then h = nc. The optimization problem then is to minimize T subject to the constraint

(h/c)k² = A.

Using the Lagrange multiplier method, we obtain the following:

h/k = α/β,  so that  k³ = βcA/α  and  h = (α/β)k.
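The Lagrange solution can be checked by eliminating h and minimizing over k directly, assuming the travel time T = h/α + 2k/β and the floor-area constraint A = nk² with h = nc described in the text; the speeds and area below are invented.

```python
def optimal_city(alpha, beta, c, A):
    """Minimize T(k) = c*A/(alpha*k**2) + 2*k/beta, obtained by eliminating
    h = c*A/k**2 from T = h/alpha + 2k/beta under A = n*k**2, h = n*c.
    Setting dT/dk = 0 gives k**3 = beta*c*A/alpha and h = (alpha/beta)*k."""
    k = (beta * c * A / alpha) ** (1.0 / 3.0)
    h = (alpha / beta) * k
    return h, k

# Invented figures: elevator speed 5, street speed 20, floor height 10,
# total floor area 10**6, all in consistent units.
alpha, beta, c, A = 5.0, 20.0, 10.0, 1e6
h_opt, k_opt = optimal_city(alpha, beta, c, A)
```

A coarse grid search over k lands on the same minimizer, which is a useful sanity check on the algebra.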
*Example 3.
A Walk in the Rain [Deakin]
A man wishes to walk (without an umbrella) from A to B (vector direction i) while it is raining (unit downward direction −k and terminal rain speed Vₜ). Let j = i × k (cross-product orthogonal to the i, k plane). There is a horizontal wind of velocity Vₜ(wi + Wj). Thus, the rain velocity is V = Vₜ(wi + Wj − k). The man's speed is u = xVₜ. Hence, the velocity of the rain with respect to the man is Vₜ[(w − x)i + Wj − k]. Consider the man as a rectangular prism with three sides (top with area αA, right or left with area βA, and front or back with area A) getting wet. What should his optimal speed be? The amount of wetness per unit time is proportional to the cosine of the angle between the normal to the side the rain hits and the relative rain direction, multiplied by the area of the side, and is inversely proportional to his speed. Since the normals are the unit vectors i, j, k, the total amount of rain falling on the man is proportional to (using the cosines of the angles between the direction line of the rain and the top, right front, or back, respectively, and using the speed xVₜ):

F(x) = [α + β|W| + |x − w|]/x.
It is desired to minimize the “wetness function” F(x). The absolute values arise from the fact that just two of the four vertical sides can be wet, depending on the signs of W and (x − w).

The optimal policy for walking in the rain depends on whether α + β|W| > w or not. In the former case F(x) is monotone decreasing for all x and the optimal strategy is to run as fast as possible. Otherwise, F(x) has a minimum at x = w and the man should travel at the speed x = w
, he should run as fast as he can. We assume that α
= 0.06, β
= 0.33, V
t
= 20 mph. Let X
be his top speed, and let α
+ βW
< w
. Then he should run as fast as w
or x
. This question arises only if X
> w
. If we measure relative wetness by R
= [F
(x
)]/[F
(w
)], R
is maximized when α
+ β
|W
| is minimized at α
; i.e., when W
= 0. This leads to
where x
= 1. This means he gets more than four times wetter than if he jogged with the same speed as the wind. Note that this refers only to the rate
of wetness. The total wetness will also depend on time.
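With the wetness rate in the form F(x) = (α + β|W| + |x − w|)/x (a reconstruction consistent with the two regimes described above: monotone decreasing when α + β|W| > w, minimized at x = w otherwise), both regimes are easy to verify numerically:

```python
def wetness_rate(x, w, W, alpha=0.06, beta=0.33):
    """Assumed wetness-rate function F(x) = (alpha + beta*|W| + |x - w|)/x,
    with all speeds measured in units of the terminal rain speed V_t."""
    return (alpha + beta * abs(W) + abs(x - w)) / x
```

In the first regime running faster always helps; in the second, matching the wind speed w keeps the front and back dry and minimizes the rate.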
*4.3.2. UNKNOWN NUMBER OF VARIABLES
Most optimization problems deal with a fixed number of variables. The following is an example with the number of variables to be determined together with their optimal values.
Example.
The Jeep Problem (optimization with constraints)
We shall illustrate a problem in which it is desired to minimize a function subject to constraints given as recursive relations.
Problem.
It is desired to advance a vehicle 1000 miles from an original position with a minimum expenditure of fuel. The vehicle has maximum fuel capacity of 500 units which it uses at the uniform rate of one unit per mile. The vehicle must stock its own storage points from its one tank by making round trips between storage points. Determine the storage points along the route which minimize the fuel consumption for the entire trip. Determine the number of round trips required between each pair of storage points. Determine the minimum amount of fuel required at the start.
If sᵢ is the amount of fuel stored at the ith storage point (i = 0, 1,…, n), dᵢ is the distance between the (i − 1)st and the ith storage points, and kᵢ is the number of round trips the vehicle makes between these two points, then

sᵢ₋₁ = sᵢ + (2kᵢ + 1)dᵢ.

Thus, the amount of fuel stored at the (i − 1)st point is equal to the amount stored at the ith point plus the amount consumed along the route in making kᵢ round trips and a single forward trip.
It is not difficult to see that a minimum use of fuel is made if for each trip the vehicle proceeds loaded at full capacity. For, in that case, a smaller number of trips would be made and, hence, less fuel is consumed in travel. Also, in order to have no fuel left at the end, it is necessary that the vehicle be loaded with 500 units of fuel at the 500-mile point. From these two facts, it follows that the ith storage point should be located so that the vehicle makes kᵢ₊₁ round trips between this point and the (i + 1)st storage point and one last forward trip, always fully loaded and ultimately leaving no fuel behind at the ith storage point. Working backward, the last statement is valid back to the first storage point but is not possible for the starting point, since the position of that point is predetermined. Hence, the vehicle will make its last forward trip between the starting point and the first storage point with a load c < 500. Thus, we have:
It is desired to minimize s₀ given by

s₀ = s₁ + (2k₁ + 1)d₁.

Now, using for s₁ the value previously given, one has

s₀ = sₙ + Σᵢ (2kᵢ + 1)dᵢ.

Since the vehicle can travel the last 500 miles without need for stored fuel, in order to minimize s₀, it suffices to put sₙ = 500 and to place the storage points along the first 500 miles of the route, so that Σᵢ dᵢ = 500. It turns out that kᵢ = 8 − i, with dᵢ = 500/(17 − 2i) for i = 2,…, 7, d₁ = 500 − Σᵢ₌₂ dᵢ, and s₀ = 3836.45.
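The staging scheme can be reproduced numerically. The sketch below assumes, per the discussion above, kᵢ = 8 − i round trips per stage, full-load legs of length 500/(2kᵢ + 1), a first leg shortened so the storage points span exactly the first 500 miles, and a nonstop final 500 miles.

```python
def jeep_fuel(capacity=500.0, n=7):
    """Fuel needed at the start of the 1000-mile trip.  Stage i uses
    k_i = n + 1 - i round trips; a leg covered fully loaded with k round
    trips and one forward trip has length capacity/(2k + 1)."""
    ks = [n + 1 - i for i in range(1, n + 1)]      # k_1..k_7 = 7, 6, ..., 1
    ds = [capacity / (2 * k + 1) for k in ks]      # full-load leg lengths
    ds[0] = capacity - sum(ds[1:])                 # shorten the first leg
    s0 = capacity                                  # fuel cached at mile 500
    for k, d in zip(ks, ds):
        s0 += (2 * k + 1) * d                      # k round trips + 1 forward
    return ds, s0
```

With these values s₀ comes out at about 3836.5, agreeing with the figure quoted above to within rounding.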
4.4. Linear Programming
One of the most important special cases of optimization deals with linear objective functions subject to linear inequality constraints. This problem may be stated mathematically as follows: find values (x₁,…, xₙ) which maximize the linear form

c₁x₁ + ⋯ + cₙxₙ

subject to the conditions

aᵢ₁x₁ + ⋯ + aᵢₙxₙ ≤ bᵢ,  i = 1,…, m,

and

xⱼ ≥ 0,  j = 1,…, n,

where the aᵢⱼ, bᵢ, cⱼ are constants.

In matrix notation the linear-programming problem may be stated as follows: find x to maximize c′x subject to x ≥ 0 and Ax ≤ b, where x is the column vector (x₁,…, xₙ), x′ is its transpose, and A = (aᵢⱼ), b = (b₁,…, bₘ)′, c = (c₁,…, cₙ)′.

Note that maximizing c′x is equivalent to minimizing −c′x, and vice versa.
The following two examples illustrate problems which can be solved using linear-programming methods.
Example 1.
The Diet Problem
Suppose that a thrifty patient of limited means was advised by his doctor to increase the consumption of liver and frankfurters in his diet. In each meal he must get no less than 200 calories from this combination and no more than 14 units of fat. When he consulted his diet book, he found the following information: there are 150 calories in a pound of liver and 200 calories in a pound of frankfurters. However, there are 14 units of fat in a pound of liver and four units in a pound of frankfurters.
So, he reasoned as follows: I must eat an amount x₁ of liver and an amount x₂ of frankfurters, paying the least amount of money (the price is 30 cents per pound of liver and 50 cents per pound of frankfurters) and meeting the requirements set by the doctor. There are 150x₁ calories in x₁ pounds of liver and 200x₂ calories in x₂ pounds of frankfurters; the total amount of calories obtained, that is, 150x₁ + 200x₂, must be no less than 200, and so one writes

150x₁ + 200x₂ ≥ 200.

Similarly, the total fat should be no greater than 14 units:

14x₁ + 4x₂ ≤ 14,

and the total cost 30x₁ + 50x₂ must be a minimum. He is faced with minimizing the total cost subject to the medical constraints (hence the term constrained optimization). He must eat something but just enough to meet the requirements and at minimum cost.
This problem can be solved in several ways. For a more complicated problem with many more dietary items which the patient must eat and with further restrictions such as the amount of vitamins obtained, one uses the well-known simplex process for its solution. For the simple problem given above, one can draw a picture (Fig. 4.2) and find the desired solution; i.e., the food quantities x₁ and x₂ that satisfy the constraining inequalities and yield the minimum cost.

To plot the inequalities, one simply plots a straight line, using only the equality sign (as in Fig. 4.2).
FIG. 4.2.
Then the first inequality is satisfied by all the points to the right of the line and the second inequality by the points to the left of the corresponding line. Thus the points common to both inequalities lie in the shaded region. The cost expression is given an arbitrary value at first. Note, however, that it may be moved back and forth in a parallel direction. The minimum is obviously given when the line is moved as close to the origin as possible, while still intersecting the shaded region. This occurs at x₁ = 10/11, x₂ = 7/22, as indicated by the arrow. Consequently, this value, which is roughly one pound of liver and a third of a pound of frankfurters, satisfies the requirements at minimum cost.
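For a problem this small the graphical argument can be mirrored by enumerating vertices directly. The helper below is an illustrative brute-force solver (not the simplex process): it intersects every pair of constraint boundaries, keeps the feasible intersection points, and picks the cheapest.

```python
from itertools import combinations

def solve_lp_2d(constraints, cost):
    """Tiny 2-variable LP: minimize cost[0]*x1 + cost[1]*x2 over the
    region given by constraints (a1, a2, b) meaning a1*x1 + a2*x2 <= b,
    plus x1, x2 >= 0, by enumerating boundary intersections."""
    lines = constraints + [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]  # x >= 0
    best = None
    for (a1, a2, b), (c1, c2, d) in combinations(lines, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:
            continue  # parallel boundaries: no vertex
        x1 = (b * c2 - a2 * d) / det
        x2 = (a1 * d - b * c1) / det
        if all(p * x1 + q * x2 <= r + 1e-9 for p, q, r in lines):
            val = cost[0] * x1 + cost[1] * x2
            if best is None or val < best[0]:
                best = (val, x1, x2)
    return best

# Diet problem: ">= 200 calories" rewritten as -150*x1 - 200*x2 <= -200.
diet = [(-150.0, -200.0, -200.0), (14.0, 4.0, 14.0)]
```

Applied to the diet constraints with cost (30, 50), this returns the vertex x₁ = 10/11 ≈ 0.91 pounds of liver and x₂ = 7/22 ≈ 0.32 pounds of frankfurters.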
Example 2.
The Caterer Problem [Jacobs]
A caterer knows that, in connection with the meals he has arranged to serve during the next n days, he will need rⱼ ≥ 0 fresh napkins on the jth day, j = 1, 2,…, n. Laundering normally takes p days; i.e., a soiled napkin sent for laundering immediately after use on the jth day is returned in time to be used again on the (j + p)th day. However, the laundry also has a higher-cost service which returns the napkins in q < p days (p and q are integers).

If the caterer has no clean napkins on hand, he will meet his immediate needs by purchasing napkins at a cents each. Laundering costs b and c cents per napkin for the normal and high-cost service, respectively. How does he arrange matters to meet his needs and minimize his outlay for the n days?
Let xⱼ represent the napkins purchased for use on the jth day; the remaining requirements, if any, are supplied by laundered napkins. Of the rⱼ napkins which have been used on that day plus any other soiled napkins on hand, let yⱼ be the number sent for laundering under normal service and zⱼ the number under the rapid service. Finally, since soiled napkins need not be sent to the laundry immediately after use, let sⱼ be the stock of soiled napkins on hand after yⱼ + zⱼ have been shipped to the laundry; they will be available for laundering on the next day, together with those from the rⱼ₊₁ used on that day.

Consequently,

rⱼ = yⱼ + zⱼ + sⱼ − sⱼ₋₁.

That is, the napkins used on the jth day are equal to the number sent to the laundry plus the change in the stock of soiled napkins.

The returns from the laundry are equal to the amounts shipped for normal service p days earlier plus the amounts shipped for rapid service q days earlier. Together with purchases, these provide for the needs on the same day. This gives

x_j + y_{j−p} + z_{j−q} = r_j.

The total cost to be minimized, subject to these constraints, is

Σⱼ (axⱼ + byⱼ + czⱼ),

where all the variables are nonnegative and yⱼ = zⱼ = 0 for j ≤ 0.
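The balance equations can be used to audit any candidate plan before costing it. The checker below is an illustrative sketch (days 0-indexed; the plan, requirements, and prices are invented): it verifies that purchases plus laundry returns cover each day's needs and that the soiled stock never goes negative, then totals the outlay.

```python
def caterer_cost(r, x, y, z, p, q, a, b, c):
    """Verify a napkin plan against the balance equations and return its
    cost in cents.  r[j]: napkins needed on day j; x, y, z: purchases,
    normal-service and rapid-service shipments, as in the text."""
    s = 0  # stock of soiled napkins carried over
    for j in range(len(r)):
        back_normal = y[j - p] if j - p >= 0 else 0
        back_rapid = z[j - q] if j - q >= 0 else 0
        # Purchases plus laundry returns must cover the day's needs.
        assert x[j] + back_normal + back_rapid == r[j]
        # r_j = y_j + z_j + (s_j - s_{j-1}): each soiled napkin is either
        # shipped to the laundry or kept in stock.
        s = s + r[j] - y[j] - z[j]
        assert s >= 0
    return a * sum(x) + b * sum(y) + c * sum(z)
```

In a full solution x, y, z would be decision variables of a linear program; here they are simply checked for feasibility.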
4.5. Nonlinear Programming
If we remove the hypotheses of linearity from a linear-programming problem, we get a nonlinear-programming problem. The general programming problem has the form: minimize f(x₁,…, xₙ) subject to the constraints

gᵢ(x₁,…, xₙ) ≤ 0,  i = 1,…, m,  xⱼ ≥ 0,  j = 1,…, n.

Minimizing f is equivalent to maximizing −f.
Examples of nonlinear programming problems abound in the literature. The following are among the better known problems.
Example 1.
Maximizing Revenue
A monopolist wishes to maximize his revenue. The sale price pⱼ of a commodity depends on the quantity of each of several commodities which he produces (unlike the case of free competition). Thus, if the revenue is given by f(x) = Σⱼ pⱼxⱼ, where pⱼ is the price of the jth commodity and xⱼ is the amount of the jth commodity to be produced, then each pⱼ is a linear function of x₁,…, xₙ. If we substitute these quantities in f, we obtain a quadratic function to be maximized subject to nonnegativity constraints xⱼ ≥ 0, j = 1,…, n, and subject to capacity constraints which may be prescribed in a linear form. Thus, since f(x) is nonlinear, we have a nonlinear program.
A similar problem may be formulated in which the cost of producing a commodity depends on the quantity produced.
Example 2.
Commodity Production
In this case the objective is to find the values of xⱼ which minimize the quadratic function f(x) subject to xⱼ ≥ 0 and subject to the constraints that the entire capacity should be used. These constraints can be expressed in terms of linear inequalities with nonnegative coefficients.
Example 3.
Portfolio Selection
Yet another example is that of portfolio selection, in which an investor wishes to allocate a given sum of money C to different securities whose total return is considered a random variable. He may either maximize the expected return for a fixed variance or minimize the variance for a fixed return. If xⱼ, j = 1,…, n, is the amount to be invested in the jth security, then Σⱼ xⱼ = C. The total return is Σⱼ rⱼxⱼ, its expected value is E = Σⱼ μⱼxⱼ, and its variance is V = Σᵢ Σⱼ aᵢⱼxᵢxⱼ, where (aᵢⱼ) is the covariance matrix of the random variables rⱼ, which represent the return on the jth security, and μⱼ is the expected value of rⱼ. Both (aᵢⱼ) and μⱼ are given. Here the problem is either to maximize E subject to V = σ², where σ is a given constant, and subject to Σⱼ xⱼ = C, xⱼ ≥ 0, or to minimize V subject to E = μ, where μ is given, and subject to Σⱼ xⱼ = C, xⱼ ≥ 0. These problems are obviously nonlinear, in the first case because of the presence of a quadratic constraint, and in the second because this constraint is now the function to be minimized.
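A brute-force sketch of the second (minimum-variance) formulation for three securities, with invented data: the budget and expected-return equalities leave a single degree of freedom, which is scanned directly rather than solved by a quadratic-programming method.

```python
def min_variance_portfolio(mu, cov, C, target, steps=20000):
    """Minimize x'Ax subject to x1 + x2 + x3 = C, mu.x = target, x >= 0,
    for three securities: scan x1 and solve the two equality constraints
    for x2, x3 (assumes mu[1] != mu[2])."""
    best = None
    for k in range(steps + 1):
        x1 = C * k / steps
        x2 = (target - mu[2] * C - (mu[0] - mu[2]) * x1) / (mu[1] - mu[2])
        x3 = C - x1 - x2
        if x2 < -1e-12 or x3 < -1e-12:
            continue  # violates nonnegativity
        x = (x1, x2, x3)
        var = sum(cov[i][j] * x[i] * x[j]
                  for i in range(3) for j in range(3))
        if best is None or var < best[0]:
            best = (var, x)
    return best
```

With independent unit-variance returns and a target return midway between the extremes, the minimum-variance portfolio splits the budget evenly, as symmetry suggests.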
Example 4.
Chemical Equilibrium
Next, we consider an example in chemical equilibrium. We are told that m gaseous elements are used to form n compounds under constant pressure P, where there are aᵢⱼ atoms of the ith element in one molecule of the jth compound. If there were bᵢ moles of the ith element used and there are now xⱼ moles of the jth compound in the mixture, then, by conservation of mass,

Σⱼ aᵢⱼxⱼ = bᵢ,  i = 1,…, m,

with xⱼ ≥ 0, j = 1,…, n, and n ≥ m.

Subject to these constraints it is desired to minimize the total Gibbs free energy of the system:

F(x) = Σⱼ xⱼ[cⱼ + log(xⱼ / Σₖ xₖ)],

where cⱼ = Fⱼ/RT + log P, Fⱼ is the Gibbs energy per mole of the jth gas at temperature T and unit atmospheric pressure, and R is the universal gas constant.
A linear-programming problem in which the solution must be integral is called an integer-programming problem. Sometimes the solution must be integral even for a nonlinear problem.
Example 5.
Nonlinear Programming in Integers
Given a system of m stages which are connected in series; the ith stage consists of nᵢ components connected in parallel, as indicated in Fig. 4.3.
FIG
. 4.3.
The probability that an ith-stage component will function is pᵢ; the cost of the component is cᵢ. The system can only function if at least one component in each stage is operable. We are concerned with the maximum reliability of the system and the minimum cost of the system.

Since the probability that a component of the ith stage is operable is pᵢ, 1 − pᵢ is the probability that it is inoperable, (1 − pᵢ)^nᵢ is the probability that none of the nᵢ components of the ith stage is operable, and 1 − (1 − pᵢ)^nᵢ is the probability that at least one component of the ith stage is operable. Since the stages are in series we take the product of these probabilities to obtain the probability P(n₁,…, nₘ) that the system is operable. Thus

P(n₁,…, nₘ) = Πᵢ [1 − (1 − pᵢ)^nᵢ].
The cost of the system is given by C = Σᵢ cᵢnᵢ. The problem may be either to maximize P with respect to integral values of nᵢ for a given value of C or to minimize C for a given value of P.
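For small systems the integer program can simply be enumerated. The sketch below (invented stage probabilities, costs, and budget) maximizes the series-parallel reliability subject to a cost budget.

```python
from itertools import product
from math import prod

def system_reliability(p, n):
    """P(n_1,...,n_m) = prod_i [1 - (1 - p_i)**n_i] for stages in series."""
    return prod(1 - (1 - pi) ** ni for pi, ni in zip(p, n))

def best_design(p, c, budget, max_per_stage=6):
    """Enumerate integer component counts n_i >= 1 with total cost
    sum c_i*n_i <= budget; return (reliability, counts, cost) of the
    most reliable feasible design."""
    best = None
    for n in product(range(1, max_per_stage + 1), repeat=len(p)):
        cost = sum(ci * ni for ci, ni in zip(c, n))
        if cost <= budget:
            rel = system_reliability(p, n)
            if best is None or rel > best[0]:
                best = (rel, n, cost)
    return best
```

With two stages (p = 0.9, 0.8; costs 2 and 1) and a budget of 6, the best design duplicates both stages: reliability 0.99 × 0.96 = 0.9504.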
4.6. Multistage Optimization (Dynamic Programming)
The dynamic programming technique enables one to handle a multistage decision process as a series of single problems.
The process may be defined as a transformation, made up of a series of independent stages, each of which is characterized by a set of parameters called “state variables.” Dynamic-programming problems often use stages to represent intervals of time or space. Each time a new stage is reached, a choice among possible decisions must be made based on the values of the state variables at that stage. This choice will optimize the function over all stages already considered.
At each stage, the principle of optimality is assumed to hold; an optimal set of decisions has the property that whatever the first decision is, the remaining decisions must be optimal with respect to the outcome which results from the first decision. As a result, the transformation preserves the optimality of the original objective function.
We give a general example which shows how the method may be used.
Example.
Buying and Selling
A merchant wants to plan his purchases and sales for a year. Let the prices for purchasing and selling in month i be pᵢ and cᵢ, respectively, after numbering the months backwards (i.e., Jan. = 12, Feb. = 11,…). The goods are received at the beginning of each month and sold by the end of that month (Fig. 4.4). The storage capacity available is S. Find the optimal plan for the merchant.
FIG. 4.4.
Let xᵢ be the amount of goods already at the store at the beginning of month i, dᵢ the amount bought at the beginning of month i, eᵢ the amount sold at the end of month i, and fᵢ(xᵢ) the optimal profit attainable from months 1 through i, as a function of xᵢ.
We have

fᵢ(xᵢ) = max [cᵢeᵢ − pᵢdᵢ + fᵢ₋₁(xᵢ + dᵢ − eᵢ)],

the maximum being taken over dᵢ ≥ 0 and eᵢ ≥ 0 with eᵢ ≤ xᵢ + dᵢ ≤ S, and with f₀ ≡ 0. Solving these equations recursively and computing dᵢ, eᵢ for i = 1, 2,…, 12 makes it possible to evaluate the maximal profit f₁₂(x₁₂).
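The recursion can be tabulated directly when stock levels are integers. This sketch is illustrative (invented prices, months indexed forward in time rather than backwards as in the text, and only two months for brevity); it returns the best profit starting from an empty store.

```python
def plan_trading(p, c, S):
    """Dynamic program: f[x] = best profit from the current month onward
    with stock x.  Each month: buy d at price p[i], sell e at price c[i],
    subject to e <= x + d <= S (capacity)."""
    months = len(p)
    f = [0.0] * (S + 1)  # after the last month no further profit accrues
    for i in reversed(range(months)):
        g = [float("-inf")] * (S + 1)
        for x in range(S + 1):
            for d in range(S - x + 1):        # buy up to capacity
                for e in range(x + d + 1):    # sell from what is on hand
                    val = c[i] * e - p[i] * d + f[x + d - e]
                    if val > g[x]:
                        g[x] = val
        f = g
    return f[0]  # start with an empty store
```

With buy prices (1, 3), sell prices (2, 5), and capacity 10, the best plan is to buy 10 units in the first month and hold them for the second month's higher sale price.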
*4.7. Calculus of Variations
The calculus of variations is concerned with looking for a function which optimizes an integral subject to constraints generally given in algebraic form or involving integrals.
A relatively general problem in the calculus of variations is the problem of Bolza formulated by Bliss.
In a class of arcs find one which minimizes

J = f(x₁, y(x₁), x₂, y(x₂)) + ∫ F(x, y, y′) dx  (the integral taken from x₁ to x₂),

subject to differential-equation constraints and to end conditions (these conditions may separate into those exclusively on x₁ and those on x₂). Note that the end points x₁ and x₂ may vary as functions of a parameter vector, and J then becomes a function of that parameter.
Two problems due to Mayer and Lagrange with variable end points were defined, the first requiring F ≡ 0 and the second requiring f ≡ 0, respectively.
A variational problem may involve retarded arguments, such as in

J = ∫ F(t, x(t), x(t − τ), ẋ(t)) dt  (the integral taken from t₀ to t₁),

subject to an initial-function condition on [t₀ − τ, t₀] and x(t₁) = x₁; τ is a positive constant of retardation. An auxiliary condition may be included as a constraint, such as

∫ G(t, x(t), ẋ(t)) dt = C,

where C is a given constant.
A very general variational problem can be formulated in terms of functional expressions subject to differential, integral, difference, and a mixture of equality and inequality constraints. Two very practical examples follow.
Example 1.
A Search Problem [Koopman]
The following variational example involves inequality constraints but is solved by elementary methods. Assume that p(x, y) dx dy is the probability of the existence of a target in an area element dx dy of a region R and that e^{−ɸ(x,y)} is the probability of not detecting it at a point in that region, where ɸ ≥ 0 and

∫∫_R ɸ(x, y) dx dy = Φ

is the total effort available for search in the region. Find

min F(ɸ) = ∫∫_R p(x, y) e^{−ɸ(x,y)} dx dy

subject to the above constraints. F(ɸ) is, of course, the probability of not detecting the target in R, if it is there. Note that this problem has a solution if p(x, y) and ɸ(x, y) are integrable.
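A discretized sketch of the solution structure, with cells in place of the continuum and invented cell probabilities: the Lagrange condition gives ɸₖ = max(0, log(pₖ/λ)), with the multiplier λ fixed by the effort budget via bisection (a "water-filling" allocation).

```python
import math

def allocate_search_effort(p, total_effort, iters=100):
    """Minimize sum p_k*exp(-phi_k) subject to sum phi_k = total_effort,
    phi_k >= 0: phi_k = max(0, log(p_k/lam)), with lam found by bisection
    (effort spent is decreasing in lam)."""
    lo, hi = 1e-12, max(p)
    for _ in range(iters):
        lam = (lo + hi) / 2
        used = sum(max(0.0, math.log(pk / lam)) for pk in p)
        if used > total_effort:
            lo = lam  # too much effort spent: raise the threshold
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [max(0.0, math.log(pk / lam)) for pk in p]
```

Cells whose probability falls below the threshold λ receive no effort at all; in every searched cell the residual quantity pₖ e^{−ɸₖ} is equalized at λ.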
Example 2.
Production Scheduling [Modigliani and Holm]
Find the most profitable schedule of production of a commodity over a given period of time (0, T
) so as to meet the following requirements:
- The initial inventory h₀ is given.
- The sales schedule is given, S(t) being the cumulative sales from 0 to t (dS/dt need not be continuous).
- Inventory can never be negative.
- The terminal inventory is 0.
- The cost of production is given: let f(x) be the cost, per unit time, of producing x units of product per unit time; f(x)/x is then the average cost, and f′(x) is the marginal cost, which is increasing.
- The cost of storage is α per unit product and per unit time.
We define X(t) = h₀ + cumulative output up to time t. This ensures continuity for X(t) but not for X′(t). No restrictions on X(t) other than the above are imposed. The assumptions are that the sales function S(t) is any piecewise continuous function and the marginal cost function is monotone increasing and differentiable. The problem then becomes:
1. X(0) = h₀, the initial inventory.
2. X(T) = S(T), in order to meet demand.
3. X(t) ≥ S(t) for 0 ≤ t ≤ T, since the inventory cannot be negative.
4. X(t) is continuous and nondecreasing and has a piecewise continuous derivative.
The cost is

C = ∫₀ᵀ {α[X(t) − S(t)] + f(X′(t))} dt,

where the first term of the integrand is the storage cost per unit time at time t and the second term is the production cost per unit time at time t. Determine X(t) which minimizes the total cost C subject to conditions 1 to 4.
4.8. Theory of Optimal Control
A major problem of control theory is to find the “control vector” y(t) which minimizes a scalar function subject to a vector differential equation (the system dynamics), to local constraints, to global constraints, and to terminal conditions.
The minimization may be alternatively applied to an expression in which T depends on the history of the process, i.e., T = T(x, y). A number of applications of control theory have been extended to other areas. For example, in the field of arms control one may take Richardson's equations (see page 142 for details), with controls uᵢ(t), i = 1, 2, as constraints, and find u₁ and u₂ which minimize a cost of armament whose coefficients aᵢ, bᵢ are given weights.
An example in economics follows.
Example 1.
Saving and Investment
Let S(t) be the saving accumulated to time t, I(t) the income, C(t) the consumption, and R(t) the investment return, all at time t (R = aS, where a is the compound interest rate per annum); then

dS/dt = I(t) + aS(t) − C(t),

and if S(0) = S₀, then for any t in the interval (0, T), we have

S(t) = e^{at}[S₀ + ∫₀ᵗ e^{−as}(I(s) − C(s)) ds].

The problem is to find C(t) which maximizes some function f of the consumption stream subject to the above equation with t = T as constraint, with the assumption that I(t), S(T), and S(0) are known. A suitable choice of function f would be a discounted utility of consumption, for example f = ∫₀ᵀ e^{−rt} log C(t) dt.
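A quick numerical check of the accumulation equation dS/dt = I + aS − C (constant income and consumption, invented figures), comparing an Euler integration against the closed-form solution for constant I − C.

```python
import math

def savings(S0, a, I, C, T, steps=100000):
    """Euler-integrate dS/dt = I(t) + a*S - C(t) from S(0) = S0 to time T.
    I and C are callables so nonconstant streams can be used."""
    S, dt = S0, T / steps
    for k in range(steps):
        t = k * dt
        S += dt * (I(t) + a * S - C(t))
    return S
```

For constant I − C the exact solution is S(t) = (S₀ + (I − C)/a)e^{at} − (I − C)/a, which the integration reproduces closely.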
Example 2.
An Advertising Model [Vidale and Wolfe]
A firm produces a single product and sells it in a market which can absorb no more than μ
dollars of the product per time unit.
It is assumed that if the firm does not advertise at a certain time, its rate of sales will decrease in proportion to the sales at that time. If, on the other hand, the firm advertises, the rate of sales will increase proportional to the level of advertising, but this increase affects only that share of the market which is not already purchasing the product.
The problem is to find the optimal advertising policy which maximizes total sales in a given time interval. Let S(t) be the amount of sales at time t in dollars, A(t) the amount of advertising at time t in dollars, and λ, γ positive constants.
The problem thus may be formulated as follows. Find
subject to
for some given constant Ā.
Defining a new variable
we obtain the following:
subject to:
The optimal policy turns out to be what is known as a “bang-bang” policy (i.e., advertising takes place in spurts, not continuously). The problem may be solved in its first formulation by means of the calculus of variations (Lagrange's problem), but this becomes a lengthy method. Euler's equation gives a nonlinear integro-differential equation, which can be solved only by numerical techniques.
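The model can also be explored numerically. The sketch below assumes the standard Vidale–Wolfe response equation dS/dt = γA(t)(μ - S)/μ - λS; all parameter values and the spurt policy shown are hypothetical illustrations, not the paper's numbers.

```python
# Vidale-Wolfe sales response, assuming dS/dt = gam*A(t)*(mu - S)/mu - lam*S.
# All numbers below are hypothetical.
def total_sales(A, S0=10.0, mu=100.0, gam=0.5, lam=0.1, T=10.0, steps=20000):
    """Accumulate total sales over [0, T] under the advertising policy A(t)."""
    dt = T / steps
    S, total = S0, 0.0
    for i in range(steps):
        total += S * dt
        S += dt * (gam * A(i * dt) * (mu - S) / mu - lam * S)
    return total

# With no advertising, sales decay exponentially and total sales tend to
# S0*(1 - e**(-lam*T))/lam, about 63.21 here.
print(round(total_sales(lambda t: 0.0), 2))
# A "bang-bang" spurt: advertise at the full rate on [0, 2], then stop.
print(round(total_sales(lambda t: 10.0 if t < 2.0 else 0.0), 2))
```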
4.9. Stochastic Optimization
A number of optimization problems involve stochastic elements and require a special treatment. The stock-piling example treated above is an illustration. A typical problem may involve the maximization of a decision function based on the variables of another optimization problem. Thus, one seeks to maximize a function
where the coefficients c_k, k = 1, …, r, are independently distributed random variables according to u_k(c_k), k = 1, …, r, subject to constraints
where again the a's are independently distributed according to ν_s(a_s), s = 1, …, q. Using Lagrange multipliers, a solution may be obtained of the form
The next stage is then to consider a decision function p(x_1, …, x_n). The procedure involves taking the expectation E(p) of p with respect to the x_j, which are now stochastic variables, and maximizing E(p) with respect to the parameters c_1, …, c_r and a_1, …, a_q through differentiation. The resulting values of the parameters are the desired estimates. Second derivatives are also used for sufficiency tests.
One of the famous problems of operations research is that of the newspaper boy. We give it here in a slightly different form.
Example 1.
The Publisher
A publisher sells a book at a profit of $2.00 a copy and loses $5.00 for each unsold copy. Assuming that the demand is unknown but can be estimated from previous data as being given by the probability density f(y), with a maximum sale of y_0 copies, it is desired to calculate the number of books to be printed in order to maximize the expected profit.
If the publisher prints x copies when the demand is for y copies, then the profit is given by
The expected profit is given by
Note that this gives
Then E(p) is maximized by equating to zero its derivative with respect to x. Thus
Thus x must be chosen so that the integral has the value 2/7.
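The critical-fractile condition, that the cumulative probability of demand up to x equal 2/7, can be checked by brute force. The sketch below assumes, purely for illustration, demand uniform on [0, y_0] with y_0 = 700, so the condition gives x* = (2/7)·700 = 200:

```python
# Publisher's problem: profit is 2y - 5(x - y) if demand y < print run x,
# and 2x otherwise. Demand is assumed uniform on [0, y0] (hypothetical);
# the condition F(x*) = 2/7 then gives x* = (2/7) * y0.
y0 = 700

def expected_profit(x, n=2001):
    """Trapezoidal estimate of the expected profit E(p) for print run x."""
    dy = y0 / (n - 1)
    total = 0.0
    for i in range(n):
        y = i * dy
        p = 2 * y - 5 * (x - y) if y < x else 2 * x
        w = 0.5 if i in (0, n - 1) else 1.0   # trapezoid end weights
        total += w * p * dy / y0              # density f(y) = 1/y0
    return total

best_x = max(range(0, y0 + 1, 5), key=expected_profit)
print(best_x)   # 200 = (2/7) * y0
```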
*Example 2.
Stochastic Linear Programming: Statement of the Problem
If in a linear-programming problem the constraints must be satisfied with some specified probability α_i (i = 1, 2, …, m), the problem becomes: minimize cx subject to
Assume that c and the a_ij are constants and that the b_i are random variables. Then the ith constraint may be rewritten as
where σ_{b_i} is the standard deviation of b_i.
If we further assume that b_i is normally distributed and let
we have
where K_{α_i} is determined from
This is equivalent to
Thus
is true if and only if the following holds:
or
The original problem may now be restated as: minimize cx subject to Ax ≥ E(b) + K_{α_i} σ_{b_i}, x ≥ 0, which is a deterministic problem if E(b) and σ_b are known.
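Passing to the deterministic equivalent is a single quantile computation per constraint. A minimal sketch using Python's standard library; the mean, standard deviation, and α below are hypothetical:

```python
from statistics import NormalDist

# Deterministic equivalent of the chance constraint P(a_i x >= b_i) >= alpha_i
# for normally distributed b_i: replace b_i by E(b_i) + K_alpha * sigma_{b_i},
# where Phi(K_alpha) = alpha. Numbers below are hypothetical.
def deterministic_rhs(mean_b, sigma_b, alpha):
    k_alpha = NormalDist().inv_cdf(alpha)   # standard-normal quantile K_alpha
    return mean_b + k_alpha * sigma_b

print(round(deterministic_rhs(100.0, 10.0, 0.95), 2))   # 116.45, K ≈ 1.645
```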
4.10. Game Theory [Luce and Raiffa]
Game theory gives us a method for optimizing the solution of multiple-interest problems in which each party pursues its own objective or interest while being aware that the others are doing the same. The problem is to find a solution which satisfies everyone. (For more on game theory see Chapter 7.)
Example.
Examples of Utility Computation
(a) In a coin-matching game each player shows one side of his coin; Player I wins one unit if both show the same side and loses a unit if they do not. The essential point here is the symmetry of the game. We may describe the game by a matrix A, where a_ij gives the gain to Player I if he follows strategy i while Player II follows strategy j.
(b) We may consider a simple example. Two airplanes belonging to Player I wish to destroy a target of Player II to which there are three routes; Player II has three guns that he can place on any of the routes. If an airplane travels over a route on which there is a gun, it is put out of action. If two airplanes go over only one gun, only one aircraft is destroyed. Player I wishes to destroy the target and Player II to defend it. Player I has two strategies: (1) he may send each airplane by a different route, or (2) he may send both on the same route. Player II may (1) assign each gun to a different route, (2) assign two guns to one route and one gun to another route, or (3) assign all three guns to the same route. We set the payoff for destroying the target at unity, and we let a_ij, i = 1, 2; j = 1, 2, 3, denote the payoff to I for playing strategy i while II plays strategy j.
We can see that a_11 = 0 and a_21 = 1. We may also show that a_12 = 2/3: here the guns are allocated to the routes in a (0, 1, 2) pattern, and the planes may be assigned to the routes in the patterns (0, 1, 1), (1, 0, 1), and (1, 1, 0), two of which use the undefended route. Thus the probability of getting through is 2/3. We also have a_22 = 2/3, since the probability that at least one plane gets through is equal to that of choosing a route not defended by two guns; similarly a_23 = 2/3. We also have a_13 = 1. This gives the payoff matrix
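With the entries computed above, the payoff matrix is [[0, 2/3, 1], [1, 2/3, 2/3]], rows being Player I's two strategies. For a 2×N zero-sum game the value can be located by a maximin search over Player I's mixing probability; a minimal sketch:

```python
from fractions import Fraction

# Payoff matrix of the airplanes-vs-guns game:
# row 1 = planes on different routes, row 2 = both planes on the same route.
A = [[Fraction(0), Fraction(2, 3), Fraction(1)],
     [Fraction(1), Fraction(2, 3), Fraction(2, 3)]]

def worst_case(p):
    """Player II's best reply when Player I plays row 1 with probability p."""
    return min(p * A[0][j] + (1 - p) * A[1][j] for j in range(3))

# Maximizing the worst-case payoff over p gives the game value;
# a fine grid of rational p values suffices to locate it here.
value = max(worst_case(Fraction(k, 300)) for k in range(301))
print(value)   # 2/3
```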
In general, it is not easy to write down the payoffs, whether given in cardinal or ordinal form. For comparison of weapons systems designed for different missions, the payoff may be computed by means of simulation in which a weapon system is theoretically tested against the different missions for various measures of effectiveness of destruction, deterrence, reliability, cost, etc. The reader can imagine the added difficulty when the utility scales of the players are different.
The previous examples illustrated zero-sum games in which the gain of Player I has the same magnitude as the loss of Player II. A game is nonzero-sum if the payoffs to the players for a particular choice of strategies are not equal and opposite.
In this chapter we have dealt with a variety of methods for optimization, one of the most common problems we face in our personal affairs as well as in our business and professional life.
Chapter 4—Problems
1. A gardener estimates that if he pulls all his carrots now he will have 300 boxes worth $5 per box. If he waits, the size of the carrots will increase so as to give him an additional 50 boxes per week, but the price will drop by 50 cents per box per week. When should he dig them? [After two weeks.]
2. The manager of a real estate firm faces the problem of deciding the monthly rental he should charge for each of the 60 newly built apartment units. His past experience tells him that at $80 rental per month, all apartment units will be occupied but, for each $2.00 increase in rent, one apartment unit is likely to remain vacant. Since service and maintenance costs are higher for occupied than for vacant apartments, he realizes that he can increase his profit margin by raising the monthly rental from $80 even though this means that some of the apartments will remain unoccupied. How much rental should he charge to collect the maximum total rent? [$100]
Now expand your model to take care of maintenance costs. Let A be the maintenance cost for an unoccupied apartment and B the maintenance cost for an occupied apartment.
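The basic version of the rental problem (ignoring maintenance costs) can be checked by direct enumeration over the number of vacant units:

```python
# Rent per unit is 80 + 2n when n of the 60 units are left vacant.
def total_rent(n):
    return (80 + 2 * n) * (60 - n)   # rent per unit times occupied units

best_n = max(range(61), key=total_rent)
print(80 + 2 * best_n)   # 100, i.e., charge $100 per month
```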
3. Find the dimensions of the largest box (sides parallel to the coordinate planes) which may be inscribed in the ellipsoid
4. We wish to satisfy the scheduling requirements of an airline using the smallest number of airplanes. Describe a simple schedule involving nine flights among three cities and formulate a model, (i) using network flow methods, (ii) using linear-programming methods.
5. A rug company is under contract to manufacture a rectangular rug to cover the maximum floor area of an ellipse-shaped hall. The dimensions of the hall will not be known before the completion of the rug, but the probability densities f(a) and g(b) of the axes a, b of the hall are known. The rug company will be paid for the area of the rug that fits the hall with sides parallel to the axes. If the rug is smaller than required, the company will lose in profits. On the other hand, if the rug is larger, it will need cutting to the required dimensions and so will incur a loss due to the unused portion. What size should the rug be?
6. Given n targets distributed over a wide terrain. Each target has any one of three possible values, 1, 2, 3; the greater number corresponds to the higher value. An airplane with fixed gasoline capacity and a maximum payload of 8000 lb wishes to bomb targets on a single mission and return to its base. There are three types of bombs: a 500-lb bomb, which can only destroy targets of value 1; a 1000-lb bomb, which can destroy targets of value 1 or 2; and a 2000-lb bomb, which can destroy targets of value 1, 2, or 3. The plane can carry no more than 12 bombs. The fuel consumption is proportional to the weight of the airplane, which depends on the weight of the unexpended bomb load and on the remaining fuel. It is assumed that a bombed target is completely destroyed. The optimal selection of targets determines the order in which bombs are loaded and later dropped. What type of bombs and what order of loading would produce the maximum total value of targets destroyed?
7. Formulate some situations which you have faced in the last month as two-person games. Were your actions rational in the light of the insights derived from the game? If not, what went wrong?
References
Deakin, Michael A. B., Walking in the rain,
Math Mag.
, Vol. 45, November, 1972.
Jacobs, W. W., The caterer problem,
Naval Res. Logistics Quart.
, Vol. 1, 1954, pp. 154–165.
Koopman, B. O., The theory of search,
Operations Res.
, Vol. 4, No. 3, and Vol. 4, No. 5, 1956.
Luce, R. D. and H. Raiffa,
Games and Decisions
, Wiley, New York, 1957.
Modigliani, F. and F. Hohn, Production planning, over time, and the nature of the expectation and planning horizon,
Econometrica
, Vol. 23, 1955, p. 46.
Nash, J., Non-cooperative games,
Ann. Math.
, Vol. 54, No. 2, 1951.
Rosen, Robert,
Optimization Principles in Biology
, Plenum Press, New York, 1967.
Saaty, T. L.,
Mathematical Models of Arms Control and Disarmament
, Wiley, New York, 1968.
Saaty, T. L.,
Mathematical Methods of Operations Research
, McGraw-Hill, New York, 1959.
Saaty, T. L.,
Optimization in Integers and Related Extremal Problems
, McGraw-Hill, New York, 1970.
Saaty, T. L. and J. Bram,
Nonlinear Mathematics
, McGraw-Hill, New York, 1964.
Vidale, M. L. and H. B. Wolfe, An O. R. study of sales response to advertising,
Operations Res.
, Vol. 5, No. 3, June 1957, pp. 370–381.
Bibliography
General
Beveridge, Gordon S. G. and Robert S. Schechter;
Optimization
:
Theory and Practice
, McGraw-Hill, New York, 1970.
Cooper, Leon,
et al., Introduction to Operations Research Models
, W. B. Saunders Co., Philadelphia, 1977.
Cooper, Leon and David Steinberg,
Introduction to Methods of Optimization
, W. B. Saunders Co., Philadelphia, 1970.
Dantzig, G. B. and B. C. Eaves (eds.),
Studies in Optimization
, (MAA Studies, No. 10), Mathematical Association of America, 1977.
Gillett, Billy E.,
Methods of Operations Research
, McGraw-Hill, New York, 1976.
Gupta, Shiv K. and John M. Cozzolino,
Fundamentals of Operations Research for Management
, Holden-Day, San Francisco, California, 1975.
Hancock, Harris,
Theory of Maxima and Minima
, Dover Publications, New York, 1960.
Pierre, Donald A.,
Optimization Theory with Applications
, Wiley, New York, 1969.
Saaty, Thomas L.,
Mathematical Methods of Operations Research
, McGraw-Hill, New York, 1959.
Shelly, Maynard W. II and Glenn L. Bryan,
Human Judgements and Optimality
, Wiley, New York, 1964.
Wilde, Douglass J. and Charles S. Beightler,
Foundations of Optimization
, Prentice-Hall, Englewood Cliffs, New Jersey, 1967.
Geometric Approach (Convexity)
Bonnesen and Fenchel,
Theorie der konvexen Körper
, Chelsea Publishing Co., New York, 1948.
Hadwiger, H.,
Altes und Neues über konvexe Körper
, Birkhäuser, Basel, 1955.
Valentine, Frederick A.,
Convex Sets
, McGraw-Hill, New York, 1964.
Yaglom and Boltyanskii,
Convex Figures
(translated by P. J. Kelly and L. F. Walton), Holt, Rinehart & Winston, New York, 1961.
Geometric Number Theory
Minkowski, Hermann,
Geometrie der Zahlen
, Chelsea Publishing Co., New York, 1953.
Rogers, C. A.,
Packing and Covering
, Cambridge University Press, London, 1964.
Fejes Tóth, L.,
Lagerungen in der Ebene, auf der Kugel und im Raum
, Springer-Verlag, Berlin–Göttingen–Heidelberg, 1953.
Geodesics, Differential Geometry and Manifolds
Flanders, Harley,
Differential Forms, with Applications to the Physical Sciences
, Academic Press, New York, 1963.
Graustein, William C.,
Differential Geometry
, Macmillan, New York, 1949.
Network Flow
Berge, Claude,
The Theory of Graphs
, Methuen, London; Wiley, New York, 1962.
Berge, C. and A. Ghouila-Houri,
Programmes, Jeux Et Reseaux De Transport
, Dunod, Paris, 1962.
Busacker, Robert G. and Thomas L. Saaty,
Finite Graphs and Networks: An Introduction with Applications
, McGraw-Hill, New York, 1965.
Ford, L. R., Jr., and D. R. Fulkerson,
Flow in Networks
, Princeton University Press, Princeton, New Jersey, 1962.
Equality and Inequality Constraints
Beckenbach, Edwin and Richard Bellman,
Inequalities
, Springer-Verlag, Berlin, 1961.
Hardy, G. H., J. E. Littlewood, and G. Polya,
Inequalities
, Cambridge University Press, 1934.
Kazarinoff, N. D.,
Analytic Inequalities
, Holt, Rinehart & Winston, New York, 1961.
Kuhn, H. W. and A. W. Tucker,
Linear Inequalities and Related Systems
, No. 38, Princeton University Press, Princeton, New Jersey, 1956.
Saaty, Thomas L.,
Modern Nonlinear Equations
, McGraw-Hill, New York, 1964.
References Containing a Short Account of the Calculus of Variations (Usually with Applications)
Bliss, G. A.,
A Monograph on the Calculus of Variations
, Open Court Publishing Co., Chicago, 1935. MAA Carus monograph.
Byerly, W. E.,
Introduction to the Calculus of Variations
, Ginn & Co., Boston, Massachusetts, 1917.
Dettman, J. W.,
Mathematical Methods in Physics and Engineering
(ch. 2), McGraw-Hill, New York, 1962.
Hildebrand, F. B.,
Methods of Applied Mathematics
(ch. 2), Prentice-Hall, Englewood Cliffs, New Jersey, 1952.
Hestenes, M. R., Elements of the calculus of variations, pp. 59–91 in
Modern Mathematics for the Engineer
, Vol. I, E. F. Beckenbach (ed.), McGraw-Hill, New York, 1956 (reprinted in McGraw-Hill paperbacks).
Sagan, H.,
Boundary and Eigenvalue Problems in Mathematical Physics
, Wiley, New York, 1951.
References with Emphasis on the Applications of the Calculus of Variations to Problems in Geometry, Mechanics, Partial Differential Equations and Mathematical Physics
Courant, R.,
Dirichlet Principle. Conformal Mapping and Minimal Surfaces
, Interscience, New York, 1950.
Goldstein, Herbert,
Classical Mechanics
, Addison-Wesley, Reading, Massachusetts, 1950.
Lanczos, C.,
The Variational Principles of Mechanics
, University of Toronto Press, Toronto, 1949.
Applications to Dynamic Programming, the Maximum Principle of Pontryagin, Control and Optimization Theory and Related Numerical Algorithms
Athans, Michael and Peter Falb,
Optimal Control
, McGraw-Hill, New York, 1966.
Balakrishnan, A. V. and L. W. Neustadt (eds.),
Computing Methods in Optimization Problems
, Academic Press, New York, 1964.
Bellman, R.,
Dynamic Programming
, Princeton University Press, Princeton, New Jersey, 1961.
Bellman, R.,
Adaptive Control Processes: A Guided Tour
, Princeton University Press, Princeton, New Jersey, 1961.
Bellman, R. and Dreyfus, S. E.,
Applied Dynamic Programming
, Princeton University Press, Princeton, New Jersey, 1962.
Bellman, R. E. (ed.),
Symposium on Mathematical Optimization Techniques
, the RAND Corp., 1963.
Bellman, R. E. and Kalaba, R. (eds.),
Selected Papers on Mathematical Trends in Control Theory
, Dover, New York, 1964.
Bellman, R. E., L. Glicksberg, and O. A. Gross,
Some Aspects of the Mathematical Theory of Control Processes
, the RAND Corp.
Feldbaum, A. A.,
Optimal Control Systems
, Academic Press, New York, 1966.
Advances in Control Systems
, a series edited by C. T. Leondes, Academic Press, New York, 1964.
Leitmann, G.,
Optimization of Techniques with Applications to Aerospace Systems
, Academic Press, New York, 1962.
Merriam, C. W., III.,
Optimization Theory and the Design of Feedback Control Systems
, McGraw-Hill, New York, 1964.
Pontryagin, L. S.,
The Mathematical Theory of Optimal Processes
, (translated from the Russian), Interscience, New York, 1962.
Saaty, T. L. and J. Bram,
Nonlinear Mathematics
, McGraw-Hill, New York, 1964.
Tou, J. L.,
Modern Control Theory
, McGraw-Hill, New York, 1964.
Wilde, D. J.,
Optimum Seeking Methods
, Prentice-Hall, Englewood Cliffs, New Jersey, 1964.
Morse Theory and the Calculus of Variations in the Large
Morse, M.,
Calculus of Variations in the Large
, American Math. Society Colloquium Publications, Vol. 18, Providence, Rhode Island, 1934.
Liusternik, L. A.,
Topologies of Functional Spaces and Calculus of Variations in the Large
, Moscow, 1947.
Seifert, H. and W. Threlfall,
Variationsrechnung im Grossen
(Theorie von Marston Morse), Chelsea Publication Co., New York, 1951.
Functional Analysis and Modern Variational Theory
Liusternik, L. and Sobolev, V.,
Elements of Functional Analysis
(ch. 6) (translated from the Russian), Ungar Publishing Co., New York, 1961.
Mikhlin, S. G.,
The Problem of the Minimum of a Quadratic Functional
(translated from the Russian by A. Feinstein), Holden-Day, San Francisco, California, 1965.
Mikhlin, S. G.,
Variational Methods in Mathematical Physics
(translated by L. I. Chambers), Pergamon Press, Oxford, 1964.
Vainberg, M. M.,
Variational Methods for the Study of Nonlinear Operators
(translated by Amiel Feinstein), Holden-Day, San Francisco, California, 1964.
Kantorovich, L. V., and Akilov, G. P.,
Functional Analysis in Normed Spaces
, Moscow, 1959. English translation edited by A. P. Robertson, Pergamon Press, Oxford, 1964.
Expository papers and Symposia
Bliss, G. A., Some recent developments in the calculus of variations,
Bull. Am. Math. Soc.
, Vol. 26, 1920, pp. 343–361.
Dresden, A., Some recent work in the calculus of variations,
Bull. Am. Math. Soc.
Vol. 30, 1926, pp. 475–521.
University of Chicago Department of Mathematics,
Contributions to the Calculus of Variations
, University of Chicago Press, Chicago, 1930–1932.
Some References to More Recent Research on the Calculus of Variations and Its Application
Proceedings of the Eighth Symposium in Applied Mathematics of The American Math Society
, Vol. VIII,
Calculus of Variations and Its Applications
, McGraw-Hill, New York, 1958.
Morrey, C. B., Jr.,
Multiple Integral Problems in the Calculus of Variations and Related Topics
, University of California Press, Berkeley and Los Angeles, 1943.
Morrey, C. B., Jr.,
Multiple Integrals in the Calculus of Variations
, Am. Math. Soc. Colloquium Lectures, August 25–28, 1964. (Contains an extensive bibliography on books and recent research papers.)
Rothe, E. H., Gradient mappings,
Bull. Am. Math. Soc.
, Vol. 59, 1953, pp. 5–19.
Rothe, E. H., Remarks on the applications of gradient mappings to the calculus of variations and connected boundary value problems in partial differential equations,
Comm. Pure and Applied Math.
, Vol. 9, 1956, pp. 551–568.
Linear and Nonlinear Programming
Arrow, Kenneth J., L. Hurwicz, and H. Uzawa,
Studies in Linear and Nonlinear Programming
, Stanford University Press, Stanford, California, 1958.
Charnes, A. and W. W. Cooper,
Management Models and Industrial Applications of Linear Programming
, Vols. I and II, Wiley, New York, 1961.
Dantzig, G. B.,
Linear Programming and Extensions
, Princeton University Press, Princeton, New Jersey, 1964
Ficken, F. A.,
The Simplex Method of Linear Programming
, Holt, Rinehart & Winston, New York, 1961.
Gale, D.,
The Theory of Linear Economic Models
, McGraw-Hill, New York, 1960.
Gass, S. I.,
Linear Programming
, 2nd edn., McGraw-Hill, New York, 1964.
Hadley, G.,
Nonlinear and Dynamic Programming
, Addison-Wesley, Reading, Massachusetts, 1964.
Hadley, G.,
Linear Programming
, Addison-Wesley, Reading, Massachusetts, 1962.
Künzi, Hans P. and Wilhelm Krelle,
Nichtlineare Programmierung
, Springer-Verlag, Berlin–Göttingen–Heidelberg, 1962.
Riley, Vera and S. I. Gass,
Linear Programming and Associated Techniques
, Operations Research Office, Johns Hopkins University, Chevy Chase, Maryland, 1958.
Simonnard, M.,
Programmation Linéaire
, Dunod, Paris, 1962.
Wilde, Douglass J.,
Optimum Seeking Methods
, Prentice-Hall, Englewood Cliffs, New Jersey, 1964.
Wolfe P. and R. Graves,
Recent Advances in Mathematical Programming
, McGraw-Hill, New York, 1963.
Zoutendijk, G.,
Methods of Feasible Directions
, Elsevier, Amsterdam, 1960.
Dynamic Programming
Aris, Rutherford,
Discrete Dynamic Programming
, Blaisdell Publishing Co., New York, 1964.
Dreyfus, S. E.,
Dynamic Programming and the Calculus of Variations
, Academic Press, New York, 1965.
Kaufmann, A. and R. Cruon,
La Programmation Dynamique
, Dunod, Paris, 1965.
Discrete Approach to Problems
Cassels, J. W. S.,
Diophantine Approximation
, Cambridge University Press, 1965.
Ryser, Herbert J.,
Combinatorial Mathematics
, The Mathematics Association of America, 1963.
Saaty, Thomas L.,
Optimization in Integers
, McGraw-Hill, New York, 1970.
Game, Decision, and Value Theories
Debreu, Gerard,
Theory of Value
, Wiley, New York, 1959.
Dresher, Melvin,
Games of Strategy: Theory and Applications
, Prentice-Hall, Englewood Cliffs, New Jersey, 1961.
Dresher, M., A. W. Tucker, and P. Wolfe,
Contributions to the Theory of Games
, Vol. III, Princeton University Press, Princeton, New Jersey, 1957.
Harsanyi, J. C.,
Rational Behavior and Bargaining Equilibrium in Games and Social Situations
, Cambridge University Press, 1977.
Isaacs, Rufus,
Differential Games
, Wiley, New York, 1965.
Karlin, Samuel,
Mathematical Methods and Theory in Games, Programming, and Economics
, Vols. I and II, Addison-Wesley, Reading, Mass., 1959.
Kuhn, H. W. and A. W. Tucker,
Contributions to the Theory of Games
, Vol. I, Princeton University Press, Princeton, New Jersey, 1950.
Luce, R. Duncan and Howard Raiffa,
Games and Decisions
, Wiley, New York, 1957.
Owen, Guillermo,
Game Theory
, W. B. Saunders Co., Philadelphia, 1968.
Rapoport, Anatol,
Two-person Game Theory, The Essential Ideas
, University of Michigan Press, 1966.
Rapoport,
N-Person Game Theory Concepts and Applications
, University of Michigan Press, 1970.
Shubik, Martin,
Strategy and Market Structure
, Wiley, New York, 1959.
Tucker, A. W. and R. D. Luce,
Contributions to the Theory of Games
, Vol. IV, No. 40, Princeton University Press, Princeton, New Jersey, 1959.
Vajda, S.,
Mathematical Programming
, Addison-Wesley, Reading, Massachusetts, 1961.
Vajda, S.,
The Theory of Games and Linear Programming
, Methuen, London and Wiley, New York, 1956.
Ventzel, E. S.,
Lectures on Game Theory
, Vol. VI, Gordon & Breach, New York, 1961.
von Neumann, John and Oskar Morgenstern,
Theory of Games and Economic Behavior
, Princeton University Press, Princeton, New Jersey, 1944.