Figure 11.3

Method 2 Conditional Distribution of the Arrival Times

Recall the result for a Poisson process having rate $\lambda$ that, given the number of events by time $T$, the set of event times are independent and identically distributed uniform $(0,T)$ random variables. Now suppose that each of these events is independently counted with a probability that is equal to $\lambda(t)/\lambda$ when the event occurred at time $t$. Hence, given the number of counted events, it follows that the set of times of these counted events are independent with a common distribution given by $F(s)$, where

$$F(s)=P\{\text{time}\le s\mid\text{counted}\}=\frac{P\{\text{time}\le s,\ \text{counted}\}}{P\{\text{counted}\}}=\frac{\int_0^T P\{\text{time}\le s,\ \text{counted}\mid\text{time}=x\}\,dx/T}{P\{\text{counted}\}}=\frac{\int_0^s\lambda(x)\,dx}{\int_0^T\lambda(x)\,dx}$$

The preceding (somewhat heuristic) argument thus shows that, given $n$ events of a nonhomogeneous Poisson process by time $T$, the $n$ event times are independent with a common density function

$$f(s)=\frac{\lambda(s)}{m(T)},\qquad 0<s<T,\qquad m(T)=\int_0^T\lambda(s)\,ds \tag{11.10}$$

Since $N(T)$, the number of events by time $T$, is Poisson distributed with mean $m(T)$, we can simulate the nonhomogeneous Poisson process by first simulating $N(T)$ and then simulating $N(T)$ random variables from the density function (11.10).
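To make the two-step procedure concrete, here is a minimal Python sketch (using numpy) that tabulates $m(s)$ on a grid and inverts $F(s)=m(s)/m(T)$ by interpolation; the function name, the grid size, and the trapezoidal integration are illustrative choices, not part of the text.

```python
import numpy as np

def nhpp_times_conditional(lam, T, grid=10_000, rng=None):
    """Method 2 sketch: draw N(T) ~ Poisson(m(T)), then draw N(T) event
    times from the density f(s) = lam(s)/m(T) of Equation (11.10)."""
    rng = rng or np.random.default_rng()
    s = np.linspace(0.0, T, grid + 1)
    lam_s = np.array([lam(v) for v in s])
    # m(s) = integral_0^s lam(x) dx, tabulated by the trapezoidal rule
    m = np.concatenate(([0.0], np.cumsum((lam_s[1:] + lam_s[:-1]) / 2 * np.diff(s))))
    n = rng.poisson(m[-1])                                  # N(T) ~ Poisson(m(T))
    times = np.interp(rng.uniform(size=n) * m[-1], m, s)    # invert F(s) = m(s)/m(T)
    return np.sort(times)
```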

Example 11.12

If $\lambda(s)=cs$, then we can simulate the first $T$ time units of the nonhomogeneous Poisson process by first simulating $N(T)$, a Poisson random variable having mean $m(T)=\int_0^T cs\,ds=cT^2/2$, and then simulating $N(T)$ random variables having distribution

$$F(s)=\frac{s^2}{T^2},\qquad 0<s<T$$

Random variables having the preceding distribution can be simulated either by use of the inverse transform method (since $F^{-1}(U)=T\sqrt{U}$) or by noting that $F$ is the distribution function of $\max(TU_1,TU_2)$ when $U_1$ and $U_2$ are independent random numbers. ■
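A short Python sketch of this example (the function name and the use of numpy are illustrative):

```python
import numpy as np

def nhpp_times_linear(c, T, rng=None):
    """Example 11.12 sketch: lambda(s) = c*s.  N(T) ~ Poisson(c*T^2/2) and,
    given N(T) = n, the n event times are i.i.d. with F(s) = s^2/T^2,
    so each can be generated as T*sqrt(U) (or as max(T*U1, T*U2))."""
    rng = rng or np.random.default_rng()
    n = rng.poisson(c * T ** 2 / 2)
    times = T * np.sqrt(rng.uniform(size=n))    # inverse transform F^{-1}(U) = T*sqrt(U)
    return np.sort(times)
```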

If the distribution function specified by Equation (11.10) is not easily invertible, we can always simulate from (11.10) by using the rejection method where we either accept or reject simulated values of uniform $(0,T)$ random variables. That is, let $h(s)=1/T$, $0<s<T$. Then

$$\frac{f(s)}{h(s)}=\frac{T\lambda(s)}{m(T)}\le\frac{\lambda T}{m(T)}\equiv C$$

where $\lambda$ is a bound on $\lambda(s)$, $0\le s\le T$. Hence, the rejection method is to generate random numbers $U_1$ and $U_2$ and then accept $TU_1$ if

$$U_2\le\frac{f(TU_1)}{Ch(TU_1)}$$

or, equivalently, if

$$U_2\le\frac{\lambda(TU_1)}{\lambda}$$
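In code, the rejection step might look as follows; lam_bar denotes the bound on $\lambda(s)$, n_events is a simulated value of $N(T)$, and all names are illustrative.

```python
import numpy as np

def nhpp_times_rejection(lam, lam_bar, n_events, T, rng=None):
    """Rejection sketch for sampling from the density (11.10): propose
    T*U1 from h(s) = 1/T and accept when U2 <= lam(T*U1)/lam_bar."""
    rng = rng or np.random.default_rng()
    times = []
    while len(times) < n_events:
        u1, u2 = rng.uniform(), rng.uniform()
        if u2 <= lam(T * u1) / lam_bar:         # equivalent to U2 <= f(TU1)/(C*h(TU1))
            times.append(T * u1)
    return np.sort(np.array(times))
```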

Method 3 Simulating the Event Times

The third method we shall present for simulating a nonhomogeneous Poisson process having intensity function $\lambda(t)$, $t\ge 0$, is probably the most basic approach—namely, to simulate the successive event times. So let $X_1,X_2,\ldots$ denote the event times of such a process. As these random variables are dependent we will use the conditional distribution approach to simulation. Hence, we need the conditional distribution of $X_i$ given $X_1,\ldots,X_{i-1}$.

To start, note that if an event occurs at time $x$ then, independent of what has occurred prior to $x$, the time until the next event has the distribution $F_x$ given by

$$\begin{aligned}\bar F_x(t)&=P\{0\text{ events in }(x,x+t)\mid\text{event at }x\}\\&=P\{0\text{ events in }(x,x+t)\}\qquad\text{by independent increments}\\&=\exp\left\{-\int_0^t\lambda(x+y)\,dy\right\}\end{aligned}$$

Differentiation yields that the density corresponding to $F_x$ is

$$f_x(t)=\lambda(x+t)\exp\left\{-\int_0^t\lambda(x+y)\,dy\right\}$$

implying that the hazard rate function of $F_x$ is

$$r_x(t)=\frac{f_x(t)}{\bar F_x(t)}=\lambda(x+t)$$

We can now simulate the event times $X_1,X_2,\ldots$ by simulating $X_1$ from $F_0$; then if the simulated value of $X_1$ is $x_1$, simulate $X_2$ by adding $x_1$ to a value generated from $F_{x_1}$, and if this sum is $x_2$ simulate $X_3$ by adding $x_2$ to a value generated from $F_{x_2}$, and so on. The method used to simulate from these distributions should depend, of course, on the form of these distributions. However, it is interesting to note that if we let $\lambda$ be such that $\lambda(t)\le\lambda$ and use the hazard rate method to simulate, then we end up with the approach of Method 1 (we leave the verification of this fact as an exercise). Sometimes, however, the distributions $F_x$ can be easily inverted, and so the inverse transform method can be applied.
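As a sketch of this sequential scheme in Python, suppose a routine next_increment(x, u) returning $F_x^{-1}(u)$ is available (an assumption; the form of $F_x$ determines how it would be computed):

```python
import numpy as np

def nhpp_times_sequential(next_increment, T, rng=None):
    """Method 3 sketch: build X1 < X2 < ... by adding, at each step, a value
    drawn from F_x (here via its inverse next_increment(x, u)) to the
    current event time x, stopping once time T is exceeded."""
    rng = rng or np.random.default_rng()
    times, x = [], 0.0
    while True:
        x = x + next_increment(x, rng.uniform())   # time of the next event
        if x > T:
            break
        times.append(x)
    return times
```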

Example 11.13

Suppose that $\lambda(x)=1/(x+a)$, $x\ge 0$. Then

$$\int_0^t\lambda(x+y)\,dy=\log\left(\frac{x+a+t}{x+a}\right)$$

Hence,

$$F_x(t)=1-\frac{x+a}{x+a+t}=\frac{t}{x+a+t}$$

and so

$$F_x^{-1}(u)=(x+a)\frac{u}{1-u}$$

We can, therefore, simulate the successive event times $X_1,X_2,\ldots$ by generating $U_1,U_2,\ldots$ and then setting

$$X_1=\frac{aU_1}{1-U_1},\qquad X_2=X_1+(X_1+a)\frac{U_2}{1-U_2}$$

and, in general,

$$X_j=X_{j-1}+(X_{j-1}+a)\frac{U_j}{1-U_j},\qquad j\ge 2$$
image

11.5.2 Simulating a Two-Dimensional Poisson Process

A point process consisting of randomly occurring points in the plane is said to be a two-dimensional Poisson process having rate $\lambda$ if

(a) the number of points in any given region of area $A$ is Poisson distributed with mean $\lambda A$; and

(b) the numbers of points in disjoint regions are independent.

For a given fixed point $O$ in the plane, we now show how to simulate events occurring according to a two-dimensional Poisson process with rate $\lambda$ in a circular region of radius $r$ centered about $O$. Let $R_i$, $i\ge 1$, denote the distance between $O$ and its $i$th nearest Poisson point, and let $C(a)$ denote the circle of radius $a$ centered at $O$. Then

$$P\{\pi R_1^2>b\}=P\{R_1>\sqrt{b/\pi}\}=P\{\text{no points in }C(\sqrt{b/\pi})\}=e^{-\lambda b}$$

Also, with $C(a_2)-C(a_1)$ denoting the region between $C(a_2)$ and $C(a_1)$:

$$\begin{aligned}P\{\pi R_2^2-\pi R_1^2>b\mid R_1=r\}&=P\{R_2>\sqrt{(b+\pi r^2)/\pi}\mid R_1=r\}\\&=P\{\text{no points in }C(\sqrt{(b+\pi r^2)/\pi})-C(r)\mid R_1=r\}\\&=P\{\text{no points in }C(\sqrt{(b+\pi r^2)/\pi})-C(r)\}\qquad\text{by (b)}\\&=e^{-\lambda b}\end{aligned}$$

In fact, the same argument can be repeated to obtain the following.

Proposition 11.6

With $R_0=0$,

$$\pi R_i^2-\pi R_{i-1}^2,\qquad i\ge 1,$$

are independent exponentials with rate $\lambda$.

In other words, the amount of area that needs to be traversed to encompass a Poisson point is exponential with rate $\lambda$. Since, by symmetry, the respective angles of the Poisson points are independent and uniformly distributed over $(0,2\pi)$, we thus have the following algorithm (a code sketch is given after the steps) for simulating the Poisson process over a circular region of radius $r$ about $O$:

Step 1 Generate independent exponentials with rate 1, $X_1,X_2,\ldots$, stopping at

$$N=\min\left\{n:\frac{X_1+\cdots+X_n}{\lambda\pi}>r^2\right\}$$

Step 2 If $N=1$, stop. There are no points in $C(r)$. Otherwise, for $i=1,\ldots,N-1$, set

$$R_i=\sqrt{(X_1+\cdots+X_i)/(\lambda\pi)}$$

Step 3 Generate independent uniform $(0,1)$ random variables $U_1,\ldots,U_{N-1}$.

Step 4 Return the $N-1$ Poisson points in $C(r)$ whose polar coordinates are

$$(R_i,2\pi U_i),\qquad i=1,\ldots,N-1$$
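A Python sketch of Steps 1 through 4 (the function name and the use of numpy are illustrative):

```python
import numpy as np

def poisson_points_in_circle(lam, r, rng=None):
    """Radial algorithm sketch: the successive areas pi*R_i^2 - pi*R_{i-1}^2
    are exponential with rate lam and the angles are uniform on (0, 2*pi).
    Returns the polar coordinates (R_i, 2*pi*U_i) of the points in C(r)."""
    rng = rng or np.random.default_rng()
    radii, total = [], 0.0
    while True:
        total += rng.exponential(1.0)           # X_1 + ... + X_n, rate-1 exponentials
        if total / (lam * np.pi) > r ** 2:      # Step 1 stopping rule
            break
        radii.append(np.sqrt(total / (lam * np.pi)))    # Step 2
    angles = 2 * np.pi * rng.uniform(size=len(radii))   # Steps 3 and 4
    return list(zip(radii, angles))
```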

The preceding algorithm requires, on average, $1+\lambda\pi r^2$ exponentials and an equal number of uniform random numbers. Another approach to simulating points in $C(r)$ is to first simulate $N$, the number of such points, and then use the fact that, given $N$, the points are uniformly distributed in $C(r)$. This latter procedure requires the simulation of $N$, a Poisson random variable with mean $\lambda\pi r^2$; we must then simulate $N$ uniform points on $C(r)$, by simulating $R$ from the distribution $F_R(a)=a^2/r^2$ (see Exercise 25) and $\theta$ from uniform $(0,2\pi)$, and must then sort these $N$ uniform values in increasing order of $R$. The main advantage of the first procedure is that it eliminates the need to sort.

The preceding algorithm can be thought of as the fanning out of a circle centered at $O$ with a radius that expands continuously from 0 to $r$. The successive radii at which Poisson points are encountered is simulated by noting that the additional area necessary to encompass a Poisson point is always, independent of the past, exponential with rate $\lambda$. This technique can be used to simulate the process over noncircular regions. For instance, consider a nonnegative function $g(x)$, and suppose we are interested in simulating the Poisson process in the region between the $x$-axis and $g$ with $x$ going from 0 to $T$ (see Figure 11.4). To do so we can start at the left-hand end and fan vertically to the right by considering the successive areas $\int_0^a g(x)\,dx$. Now if $X_1<X_2<\cdots$ denote the successive projections of the Poisson points on the $x$-axis, then analogous to Proposition 11.6, it will follow that (with $X_0=0$) $\lambda\int_{X_{i-1}}^{X_i}g(x)\,dx$, $i\ge 1$, will be independent exponentials with rate 1. Hence, we should simulate $\varepsilon_1,\varepsilon_2,\ldots$, independent exponentials with rate 1, stopping at

$$N=\min\left\{n:\varepsilon_1+\cdots+\varepsilon_n>\lambda\int_0^T g(x)\,dx\right\}$$

and determine $X_1,\ldots,X_{N-1}$ by

$$\lambda\int_0^{X_1}g(x)\,dx=\varepsilon_1,\quad\lambda\int_{X_1}^{X_2}g(x)\,dx=\varepsilon_2,\quad\ldots,\quad\lambda\int_{X_{N-2}}^{X_{N-1}}g(x)\,dx=\varepsilon_{N-1}$$

If we now simulate $U_1,\ldots,U_{N-1}$—independent uniform $(0,1)$ random numbers—then as the projection on the $y$-axis of the Poisson point whose $x$-coordinate is $X_i$ is uniform on $(0,g(X_i))$, it follows that the simulated Poisson points in the interval are $(X_i,U_ig(X_i))$, $i=1,\ldots,N-1$.
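The following Python sketch carries out this fan-out for a general nonnegative $g$ by tabulating $A(a)=\int_0^a g(x)\,dx$ on a grid and inverting it by interpolation; the grid, the trapezoidal rule, and all names are illustrative choices.

```python
import numpy as np

def poisson_points_under_curve(lam, g, T, grid=10_000, rng=None):
    """Sketch: with A(a) = integral_0^a g(x) dx, the quantities
    lam*(A(X_i) - A(X_{i-1})) are independent rate-1 exponentials, so each
    X_i solves lam*A(X_i) = eps_1 + ... + eps_i; the y-coordinate is then
    uniform on (0, g(X_i))."""
    rng = rng or np.random.default_rng()
    x = np.linspace(0.0, T, grid + 1)
    gx = np.array([g(v) for v in x])
    A = np.concatenate(([0.0], np.cumsum((gx[1:] + gx[:-1]) / 2 * np.diff(x))))
    points, total = [], 0.0
    while True:
        total += rng.exponential(1.0)           # eps_1 + ... + eps_n
        if total > lam * A[-1]:                 # stopping rule for N
            break
        xi = np.interp(total / lam, A, x)       # solve lam*A(X_i) = eps_1 + ... + eps_i
        points.append((xi, rng.uniform() * g(xi)))
    return points
```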

Figure 11.4

Of course, the preceding technique is most useful when $g$ is regular enough so that the foregoing equations can be solved for the $X_i$. For instance, if $g(x)=y$ (and so the region of interest is a rectangle), then

$$X_i=\frac{\varepsilon_1+\cdots+\varepsilon_i}{\lambda y},\qquad i=1,\ldots,N-1$$

and the Poisson points are

$$(X_i,yU_i),\qquad i=1,\ldots,N-1$$

11.6 Variance Reduction Techniques

Let $X_1,\ldots,X_n$ have a given joint distribution, and suppose we are interested in computing

$$\theta\equiv E[g(X_1,\ldots,X_n)]$$

where $g$ is some specified function. It is often the case that it is not possible to analytically compute the preceding, and when such is the case we can attempt to use simulation to estimate $\theta$. This is done as follows: Generate $X_1^{(1)},\ldots,X_n^{(1)}$ having the same joint distribution as $X_1,\ldots,X_n$ and set

$$Y_1=g\left(X_1^{(1)},\ldots,X_n^{(1)}\right)$$

Now, simulate a second set of random variables (independent of the first set) $X_1^{(2)},\ldots,X_n^{(2)}$ having the distribution of $X_1,\ldots,X_n$ and set

$$Y_2=g\left(X_1^{(2)},\ldots,X_n^{(2)}\right)$$

Continue this until you have generated $k$ (some predetermined number) sets, and so have also computed $Y_1,Y_2,\ldots,Y_k$. Now, $Y_1,\ldots,Y_k$ are independent and identically distributed random variables each having the same distribution as $g(X_1,\ldots,X_n)$. Thus, if we let $\bar Y$ denote the average of these $k$ random variables—that is,

$$\bar Y=\sum_{i=1}^k Y_i/k$$

then

$$E[\bar Y]=\theta,\qquad E\left[(\bar Y-\theta)^2\right]=\mathrm{Var}(\bar Y)$$

Hence, we can use $\bar Y$ as an estimate of $\theta$. As the expected square of the difference between $\bar Y$ and $\theta$ is equal to the variance of $\bar Y$, we would like this quantity to be as small as possible. In the preceding situation, $\mathrm{Var}(\bar Y)=\mathrm{Var}(Y_i)/k$, which is usually not known in advance but must be estimated from the generated values $Y_1,\ldots,Y_k$. We now present three general techniques for reducing the variance of our estimator.
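For reference, the raw estimator can be sketched in a few lines of Python; sample_X and g are user-supplied (illustrative names), and the routine also returns the usual estimated standard error.

```python
import numpy as np

def raw_estimate(g, sample_X, k, rng=None):
    """Raw simulation sketch: Y_i = g(X^{(i)}) for k independent
    replications; returns Y-bar and the estimated standard error."""
    rng = rng or np.random.default_rng()
    y = np.array([g(sample_X(rng)) for _ in range(k)])
    return y.mean(), y.std(ddof=1) / np.sqrt(k)
```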

11.6.1 Use of Antithetic Variables

In the preceding situation, suppose that we have generated $Y_1$ and $Y_2$, identically distributed random variables having mean $\theta$. Now,

$$\mathrm{Var}\left(\frac{Y_1+Y_2}{2}\right)=\frac{1}{4}\left[\mathrm{Var}(Y_1)+\mathrm{Var}(Y_2)+2\,\mathrm{Cov}(Y_1,Y_2)\right]=\frac{\mathrm{Var}(Y_1)}{2}+\frac{\mathrm{Cov}(Y_1,Y_2)}{2}$$

Hence, it would be advantageous (in the sense that the variance would be reduced) if $Y_1$ and $Y_2$, rather than being independent, were negatively correlated. To see how we could arrange this, let us suppose that the random variables $X_1,\ldots,X_n$ are independent and, in addition, that each is simulated via the inverse transform technique. That is, $X_i$ is simulated from $F_i^{-1}(U_i)$ where $U_i$ is a random number and $F_i$ is the distribution of $X_i$. Hence, $Y_1$ can be expressed as

$$Y_1=g\left(F_1^{-1}(U_1),\ldots,F_n^{-1}(U_n)\right)$$

Now, since $1-U$ is also uniform over $(0,1)$ whenever $U$ is a random number (and is negatively correlated with $U$) it follows that $Y_2$ defined by

$$Y_2=g\left(F_1^{-1}(1-U_1),\ldots,F_n^{-1}(1-U_n)\right)$$

will have the same distribution as $Y_1$. Hence, if $Y_1$ and $Y_2$ were negatively correlated, then generating $Y_2$ by this means would lead to a smaller variance than if it were generated by a new set of random numbers. (In addition, there is a computational savings since rather than having to generate $n$ additional random numbers, we need only subtract each of the previous $n$ from 1.) The following theorem will be the key to showing that this technique—known as the use of antithetic variables—will lead to a reduction in variance whenever $g$ is a monotone function.

Theorem 11.1

If $X_1,\ldots,X_n$ are independent, then, for any increasing functions $f$ and $g$ of $n$ variables,

$$E[f(\mathbf X)g(\mathbf X)]\ge E[f(\mathbf X)]E[g(\mathbf X)] \tag{11.11}$$

where $\mathbf X=(X_1,\ldots,X_n)$.

Proof

The proof is by induction on $n$. To prove it when $n=1$, let $f$ and $g$ be increasing functions of a single variable. Then, for any $x$ and $y$,

$$(f(x)-f(y))(g(x)-g(y))\ge 0$$

since if $x\ge y$ ($x\le y$) then both factors are nonnegative (nonpositive). Hence, for any random variables $X$ and $Y$,

$$(f(X)-f(Y))(g(X)-g(Y))\ge 0$$

implying that

$$E[(f(X)-f(Y))(g(X)-g(Y))]\ge 0$$

or, equivalently,

$$E[f(X)g(X)]+E[f(Y)g(Y)]\ge E[f(X)g(Y)]+E[f(Y)g(X)]$$

If we suppose that $X$ and $Y$ are independent and identically distributed, as in this case, then

$$E[f(X)g(X)]=E[f(Y)g(Y)],\qquad E[f(X)g(Y)]=E[f(Y)g(X)]=E[f(X)]E[g(X)]$$

and so we obtain the result when $n=1$.

So assume that (11.11) holds for $n-1$ variables, and now suppose that $X_1,\ldots,X_n$ are independent and $f$ and $g$ are increasing functions. Then

$$\begin{aligned}E[f(\mathbf X)g(\mathbf X)\mid X_n=x_n]&=E[f(X_1,\ldots,X_{n-1},x_n)g(X_1,\ldots,X_{n-1},x_n)\mid X_n=x_n]\\&=E[f(X_1,\ldots,X_{n-1},x_n)g(X_1,\ldots,X_{n-1},x_n)]\qquad\text{by independence}\\&\ge E[f(X_1,\ldots,X_{n-1},x_n)]\,E[g(X_1,\ldots,X_{n-1},x_n)]\qquad\text{by the induction hypothesis}\\&=E[f(\mathbf X)\mid X_n=x_n]\,E[g(\mathbf X)\mid X_n=x_n]\end{aligned}$$

Hence,

$$E[f(\mathbf X)g(\mathbf X)\mid X_n]\ge E[f(\mathbf X)\mid X_n]\,E[g(\mathbf X)\mid X_n]$$

and, upon taking expectations of both sides,

$$E[f(\mathbf X)g(\mathbf X)]\ge E\big[E[f(\mathbf X)\mid X_n]\,E[g(\mathbf X)\mid X_n]\big]\ge E[f(\mathbf X)]\,E[g(\mathbf X)]$$

The last inequality follows because $E[f(\mathbf X)\mid X_n]$ and $E[g(\mathbf X)\mid X_n]$ are both increasing functions of $X_n$, and so, by the result for $n=1$,

$$E\big[E[f(\mathbf X)\mid X_n]\,E[g(\mathbf X)\mid X_n]\big]\ge E\big[E[f(\mathbf X)\mid X_n]\big]\,E\big[E[g(\mathbf X)\mid X_n]\big]=E[f(\mathbf X)]\,E[g(\mathbf X)]$$

Corollary 11.7

If $U_1,\ldots,U_n$ are independent, and $k$ is either an increasing or decreasing function, then

$$\mathrm{Cov}\big(k(U_1,\ldots,U_n),\,k(1-U_1,\ldots,1-U_n)\big)\le 0$$

Proof

Suppose $k$ is increasing. As $-k(1-U_1,\ldots,1-U_n)$ is increasing in $U_1,\ldots,U_n$, then, from Theorem 11.1,

$$\mathrm{Cov}\big(k(U_1,\ldots,U_n),\,-k(1-U_1,\ldots,1-U_n)\big)\ge 0$$

When $k$ is decreasing just replace $k$ by its negative. ■

Since $F_i^{-1}(U_i)$ is increasing in $U_i$ (as $F_i$, being a distribution function, is increasing) it follows that $g(F_1^{-1}(U_1),\ldots,F_n^{-1}(U_n))$ is a monotone function of $U_1,\ldots,U_n$ whenever $g$ is monotone. Hence, if $g$ is monotone the antithetic variable approach of twice using each set of random numbers $U_1,\ldots,U_n$ by first computing $g(F_1^{-1}(U_1),\ldots,F_n^{-1}(U_n))$ and then $g(F_1^{-1}(1-U_1),\ldots,F_n^{-1}(1-U_n))$ will reduce the variance of the estimate of $E[g(X_1,\ldots,X_n)]$. That is, rather than generating $k$ sets of $n$ random numbers, we should generate $k/2$ sets and use each set twice.
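A Python sketch of this pairing; g_of_uniforms(u) is assumed to evaluate $g(F_1^{-1}(u_1),\ldots,F_n^{-1}(u_n))$ for a vector u of n random numbers, and all names are illustrative.

```python
import numpy as np

def antithetic_estimate(g_of_uniforms, n, k, rng=None):
    """Antithetic-variable sketch: each of k/2 sets of n random numbers is
    used twice, once as U and once as 1 - U, and the paired evaluations
    are averaged."""
    rng = rng or np.random.default_rng()
    pair_means = []
    for _ in range(k // 2):
        u = rng.uniform(size=n)
        pair_means.append((g_of_uniforms(u) + g_of_uniforms(1 - u)) / 2)
    pair_means = np.array(pair_means)
    return pair_means.mean(), pair_means.std(ddof=1) / np.sqrt(len(pair_means))
```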

Example 11.14

Simulating the Reliability Function

Consider a system of $n$ components in which component $i$, independently of other components, works with probability $p_i$, $i=1,\ldots,n$. Letting

$$X_i=\begin{cases}1,&\text{if component }i\text{ works}\\0,&\text{otherwise}\end{cases}$$

suppose there is a monotone structure function $\phi$ such that

$$\phi(X_1,\ldots,X_n)=\begin{cases}1,&\text{if the system works under }X_1,\ldots,X_n\\0,&\text{otherwise}\end{cases}$$

We are interested in using simulation to estimate

$$r(p_1,\ldots,p_n)\equiv E[\phi(X_1,\ldots,X_n)]=P\{\phi(X_1,\ldots,X_n)=1\}$$

Now, we can simulate the $X_i$ by generating uniform random numbers $U_1,\ldots,U_n$ and then setting

$$X_i=\begin{cases}1,&\text{if }U_i<p_i\\0,&\text{otherwise}\end{cases}$$

Hence, we see that

$$\phi(X_1,\ldots,X_n)=k(U_1,\ldots,U_n)$$

where $k$ is a decreasing function of $U_1,\ldots,U_n$. Hence,

$$\mathrm{Cov}\big(k(\mathbf U),\,k(1-\mathbf U)\big)\le 0$$

and so the antithetic variable approach of using $U_1,\ldots,U_n$ to generate both $k(U_1,\ldots,U_n)$ and $k(1-U_1,\ldots,1-U_n)$ results in a smaller variance than if an independent set of random numbers was used to generate the second $k$. ■
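A Python sketch of Example 11.14; phi is a user-supplied monotone structure function taking a 0/1 vector, and all names are illustrative.

```python
import numpy as np

def reliability_antithetic(phi, p, k, rng=None):
    """Example 11.14 sketch: X_i = 1 if U_i < p_i, so k(U) = phi(1{U < p})
    is decreasing in the U_i and the pair (U, 1 - U) gives negatively
    correlated evaluations."""
    rng = rng or np.random.default_rng()
    p = np.asarray(p)
    vals = []
    for _ in range(k // 2):
        u = rng.uniform(size=len(p))
        y1 = phi((u < p).astype(int))           # k(U_1, ..., U_n)
        y2 = phi(((1 - u) < p).astype(int))     # k(1 - U_1, ..., 1 - U_n)
        vals.append((y1 + y2) / 2)
    return float(np.mean(vals))
```

For a series system, for instance, one could pass phi = lambda x: int(all(x)).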

Example 11.15

Simulating a Queueing System

Consider a given queueing system, let $D_i$ denote the delay in queue of the $i$th arriving customer, and suppose we are interested in simulating the system so as to estimate

$$\theta=E[D_1+\cdots+D_n]$$

Let $X_1,\ldots,X_n$ denote the first $n$ interarrival times and $S_1,\ldots,S_n$ the first $n$ service times of this system, and suppose these random variables are all independent. Now in most systems $D_1+\cdots+D_n$ will be a function of $X_1,\ldots,X_n,S_1,\ldots,S_n$—say,

$$D_1+\cdots+D_n=g(X_1,\ldots,X_n,S_1,\ldots,S_n)$$

Also, $g$ will usually be increasing in $S_i$ and decreasing in $X_i$, $i=1,\ldots,n$. If we use the inverse transform method to simulate $X_i,S_i$, $i=1,\ldots,n$—say, $X_i=F_i^{-1}(1-U_i)$, $S_i=G_i^{-1}(\bar U_i)$ where $U_1,\ldots,U_n,\bar U_1,\ldots,\bar U_n$ are independent uniform random numbers—then we may write

$$D_1+\cdots+D_n=k(U_1,\ldots,U_n,\bar U_1,\ldots,\bar U_n)$$

where $k$ is increasing in its variates. Hence, the antithetic variable approach will reduce the variance of the estimator of $\theta$. (Thus, we would generate $U_i,\bar U_i$, $i=1,\ldots,n$ and set $X_i=F_i^{-1}(1-U_i)$ and $S_i=G_i^{-1}(\bar U_i)$ for the first run, and $X_i=F_i^{-1}(U_i)$ and $S_i=G_i^{-1}(1-\bar U_i)$ for the second.) As all the $U_i$ and $\bar U_i$ are independent, however, this is equivalent to setting $X_i=F_i^{-1}(U_i)$, $S_i=G_i^{-1}(\bar U_i)$ in the first run and using $1-U_i$ for $U_i$ and $1-\bar U_i$ for $\bar U_i$ in the second. ■
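For a single-server FIFO queue the delays satisfy the Lindley recursion $D_1=0$, $D_{i+1}=\max(0,D_i+S_i-X_{i+1})$, which gives the following Python sketch of the antithetic pairing; Finv and Ginv are assumed to be vectorized inverse interarrival and service distribution functions (illustrative names, not part of the text).

```python
import numpy as np

def total_delay_antithetic(Finv, Ginv, n, k, rng=None):
    """Example 11.15 sketch: each set of 2n random numbers is used for two
    runs, the second with every uniform replaced by one minus it."""
    rng = rng or np.random.default_rng()

    def run(u, ubar):
        x, s = Finv(u), Ginv(ubar)              # interarrival and service times
        d, total = 0.0, 0.0
        for i in range(n - 1):
            d = max(0.0, d + s[i] - x[i + 1])   # Lindley recursion
            total += d
        return total                            # D_1 + ... + D_n  (D_1 = 0)

    vals = []
    for _ in range(k // 2):
        u, ubar = rng.uniform(size=n), rng.uniform(size=n)
        vals.append((run(u, ubar) + run(1 - u, 1 - ubar)) / 2)
    return float(np.mean(vals))
```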

11.6.2 Variance Reduction by Conditioning

Let us start by recalling (see Proposition 3.1) the conditional variance formula

$$\mathrm{Var}(Y)=E[\mathrm{Var}(Y\mid Z)]+\mathrm{Var}(E[Y\mid Z]) \tag{11.12}$$

Now suppose we are interested in estimating $E[g(X_1,\ldots,X_n)]$ by simulating $\mathbf X=(X_1,\ldots,X_n)$ and then computing $Y=g(X_1,\ldots,X_n)$. Now, if for some random variable $Z$ we can compute $E[Y\mid Z]$ then, as $\mathrm{Var}(Y\mid Z)\ge 0$, it follows from the conditional variance formula that

$$\mathrm{Var}(E[Y\mid Z])\le\mathrm{Var}(Y)$$

implying, since $E[E[Y\mid Z]]=E[Y]$, that $E[Y\mid Z]$ is a better estimator of $E[Y]$ than is $Y$.

In many situations, there are a variety of $Z_i$ that can be conditioned on to obtain an improved estimator. Each of these estimators $E[Y\mid Z_i]$ will have mean $E[Y]$ and smaller variance than does the raw estimator $Y$. We now show that for any choice of weights $\lambda_i$, $\lambda_i\ge 0$, $\sum_i\lambda_i=1$, the estimator $\sum_i\lambda_iE[Y\mid Z_i]$ is also an improvement over $Y$; that is, (a) $E\big[\sum_i\lambda_iE[Y\mid Z_i]\big]=E[Y]$, and (b) $\mathrm{Var}\big(\sum_i\lambda_iE[Y\mid Z_i]\big)\le\mathrm{Var}(Y)$.

Proof

The proof of (a) is immediate. To prove (b), let $N$ denote an integer valued random variable independent of all the other random variables under consideration and such that

$$P\{N=i\}=\lambda_i,\qquad i\ge 1$$

Applying the conditional variance formula twice yields

$$\mathrm{Var}(Y)\ge\mathrm{Var}(E[Y\mid N,Z_N])\ge\mathrm{Var}\big(E\big[E[Y\mid N,Z_N]\mid Z_1,\ldots\big]\big)=\mathrm{Var}\Big(\sum_i\lambda_iE[Y\mid Z_i]\Big)$$

Example 11.16

Consider a queueing system having Poisson arrivals and suppose that any customer arriving when there are already $N$ others in the system is lost. Suppose that we are interested in using simulation to estimate the expected number of lost customers by time $t$. The raw simulation approach would be to simulate the system up to time $t$ and determine $L$, the number of lost customers for that run. A better estimate, however, can be obtained by conditioning on the total time in $[0,t]$ that the system is at capacity. Indeed, if we let $T$ denote the time in $[0,t]$ that there are $N$ in the system, then

$$E[L\mid T]=\lambda T$$

where $\lambda$ is the Poisson arrival rate. Hence, a better estimate for $E[L]$ than the average value of $L$ over all simulation runs can be obtained by multiplying the average value of $T$ per simulation run by $\lambda$. If the arrival process were a nonhomogeneous Poisson process, then we could improve over the raw estimator $L$ by keeping track of those time periods for which the system is at capacity. If we let $I_1,\ldots,I_C$ denote the time intervals in $[0,t]$ in which there are $N$ in the system, then

$$E[L\mid I_1,\ldots,I_C]=\sum_{i=1}^C\int_{I_i}\lambda(s)\,ds$$

where $\lambda(s)$ is the intensity function of the nonhomogeneous Poisson arrival process. The use of the right side of the preceding would thus lead to a better estimate of $E[L]$ than the raw estimator $L$. ■
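As an illustration of the two estimators, the following Python sketch simulates the special case of an M/M/1 queue with capacity N (exponential service is an assumption made only for the sketch); each run records both L and the time T spent at capacity.

```python
import numpy as np

def lost_customer_estimators(lam, mu, N, t, runs, rng=None):
    """Example 11.16 sketch: returns the raw estimator mean(L) and the
    conditional estimator lam * mean(T), where T is the time in [0, t]
    during which the system holds N customers."""
    rng = rng or np.random.default_rng()
    L_vals, T_vals = [], []
    for _ in range(runs):
        clock, in_sys, lost, t_cap = 0.0, 0, 0, 0.0
        while clock < t:
            rate = lam + (mu if in_sys > 0 else 0.0)
            dt = min(rng.exponential(1.0 / rate), t - clock)
            if in_sys == N:
                t_cap += dt                      # time spent at capacity
            clock += dt
            if clock >= t:
                break
            if rng.uniform() < lam / rate:       # next event is an arrival
                if in_sys == N:
                    lost += 1
                else:
                    in_sys += 1
            else:                                # next event is a departure
                in_sys -= 1
        L_vals.append(lost)
        T_vals.append(t_cap)
    return np.mean(L_vals), lam * np.mean(T_vals)
```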

Example 11.17

Suppose that we wanted to estimate the expected sum of the times in the system of the first $n$ customers in a queueing system. That is, if $W_i$ is the time that the $i$th customer spends in the system, then we are interested in estimating

$$\theta=E\left[\sum_{i=1}^n W_i\right]$$

Let $Y_i$ denote the “state of the system” at the moment at which the $i$th customer arrives. It can be shown that for a wide class of models the estimator $\sum_{i=1}^n E[W_i\mid Y_i]$ has (the same mean and) a smaller variance than the estimator $\sum_{i=1}^n W_i$. (It should be noted that whereas it is immediate that $E[W_i\mid Y_i]$ has smaller variance than $W_i$, because of the covariance terms involved it is not immediately apparent that $\sum_{i=1}^n E[W_i\mid Y_i]$ has smaller variance than $\sum_{i=1}^n W_i$.) For instance, in the model $G/M/1$,

$$E[W_i\mid Y_i]=(N_i+1)/\mu$$

where $N_i$ is the number in the system encountered by the $i$th arrival and $1/\mu$ is the mean service time; the result implies that $\sum_{i=1}^n(N_i+1)/\mu$ is a better estimate of the expected total time in the system of the first $n$ customers than is the raw estimator $\sum_{i=1}^n W_i$. ■

Example 11.18

Estimating the Renewal Function by Simulation

Consider a queueing model in which customers arrive daily in accordance with a renewal process having interarrival distribution $F$. However, suppose that at some fixed time $T$, for instance 5 P.M., no additional arrivals are permitted and those customers that are still in the system are serviced. At the start of the next and each succeeding day customers again begin to arrive in accordance with the renewal process. Suppose we are interested in determining the average time that a customer spends in the system. Upon using the theory of renewal reward processes (with a cycle starting every $T$ time units), it can be shown that

$$\text{average time that a customer spends in the system}=\frac{E[\text{sum of the times in the system of arrivals in }(0,T)]}{m(T)}$$

where $m(T)$ is the expected number of renewals in $(0,T)$.

If we were to use simulation to estimate the preceding quantity, a run would consist of simulating a single day, and as part of a simulation run, we would observe the quantity $N(T)$, the number of arrivals by time $T$. Since $E[N(T)]=m(T)$, the natural simulation estimator of $m(T)$ would be the average (over all simulated days) value of $N(T)$ obtained. However, $\mathrm{Var}(N(T))$ is, for large $T$, proportional to $T$ (its asymptotic form being $T\sigma^2/\mu^3$, where $\sigma^2$ is the variance and $\mu$ the mean of the interarrival distribution $F$), and so, for large $T$, the variance of our estimator would be large. A considerable improvement can be obtained by using the analytic formula (see Section 7.3)

$$m(T)=\frac{T}{\mu}-1+\frac{E[Y(T)]}{\mu} \tag{11.13}$$

where $Y(T)$ denotes the time from $T$ until the next renewal—that is, it is the excess life at $T$. Since the variance of $Y(T)$ does not grow with $T$ (indeed, it converges to a finite value provided the moments of $F$ are finite), it follows that for $T$ large, we would do much better by using the simulation to estimate $E[Y(T)]$ and then using Equation (11.13) to estimate $m(T)$.

However, by employing conditioning, we can improve further on our estimate of $m(T)$. To do so, let $A(T)$ denote the age of the renewal process at time $T$—that is, it is the time at $T$ since the last renewal. Then, rather than using the value of $Y(T)$, we can reduce the variance by considering $E[Y(T)\mid A(T)]$. Now, knowing that the age at $T$ is equal to $x$ is equivalent to knowing that there was a renewal at time $T-x$ and the next interarrival time $X$ is greater than $x$. Since the excess at $T$ will equal $X-x$ (see Figure 11.5), it follows that

$$E[Y(T)\mid A(T)=x]=E[X-x\mid X>x]=\int_0^\infty P\{X-x>t\mid X>x\}\,dt=\int_0^\infty\frac{1-F(t+x)}{1-F(x)}\,dt$$

which can be numerically evaluated if necessary.

As an illustration of the preceding, note that if the renewal process is a Poisson process with rate $\lambda$, then the raw simulation estimator $N(T)$ will have variance $\lambda T$; since $Y(T)$ will be exponential with rate $\lambda$, the estimator based on (11.13) will have variance $\lambda^2\,\mathrm{Var}\{Y(T)\}=1$. On the other hand, since $Y(T)$ will be independent of $A(T)$ (and $E[Y(T)\mid A(T)]=1/\lambda$), it follows that the variance of the improved estimator $E[Y(T)\mid A(T)]$ is 0. That is, conditioning on the age at time $T$ yields, in this case, the exact answer. ■
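The following Python sketch compares the raw estimator of m(T) with the one based on Equation (11.13); optionally, a user-supplied function approximating E[Y(T) | A(T) = x] can be used for the conditional version. All names are illustrative.

```python
import numpy as np

def renewal_function_estimators(draw_interarrival, mu, T, runs, cond_excess=None, rng=None):
    """Example 11.18 sketch: each run simulates one renewal process on [0, T]
    and records N(T), the excess Y(T), and the age A(T) = T minus the time
    of the last renewal.  draw_interarrival(rng) returns one interarrival."""
    rng = rng or np.random.default_rng()
    N_vals, Y_vals, EY_vals = [], [], []
    for _ in range(runs):
        t, n = 0.0, 0
        while True:
            x = draw_interarrival(rng)
            if t + x > T:
                break
            t, n = t + x, n + 1
        N_vals.append(n)                         # N(T)
        Y_vals.append(t + x - T)                 # excess life Y(T)
        if cond_excess is not None:
            EY_vals.append(cond_excess(T - t))   # E[Y(T) | A(T) = T - t]
    raw = np.mean(N_vals)
    via_excess = T / mu - 1 + np.mean(Y_vals) / mu               # Equation (11.13)
    via_cond = T / mu - 1 + np.mean(EY_vals) / mu if cond_excess else None
    return raw, via_excess, via_cond
```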

Example 11.19

Consider the $M/G/1$ queueing system where customers arrive in accordance with a Poisson process with rate $\lambda$ to a single server having service distribution $G$ with mean $E[S]$. Suppose that, for a specified time $t_0$, the server will take a break at the first time $t\ge t_0$ at which the system is empty. That is, if $X(t)$ is the number of customers in the system at time $t$, then the server will take a break at time

$$T=\min\{t\ge t_0:X(t)=0\}$$

To efficiently use simulation to estimate $E[T]$, generate the system to time $t_0$; let $R$ denote the remaining service time of the customer in service at time $t_0$, and let $X_Q$ equal the number of customers waiting in queue at time $t_0$. (Note that $R$ is equal to 0 if $X(t_0)=0$, and $X_Q=(X(t_0)-1)^+$.) Now, with $N$ equal to the number of customers that arrive in the remaining service time $R$, it follows that if $N=n$ and $X_Q=n_Q$, then the additional amount of time from $t_0+R$ until the server can take a break is equal to the amount of time that it takes until the system, starting with $n+n_Q$ customers, becomes empty. Because this is equal to the sum of $n+n_Q$ busy periods, it follows from Section 8.5.3 that

$$E[T\mid R,N,X_Q]=t_0+R+(N+X_Q)\frac{E[S]}{1-\lambda E[S]}$$

Consequently,

$$E[T\mid R,X_Q]=E\big[E[T\mid R,N,X_Q]\mid R,X_Q\big]=t_0+R+(E[N\mid R,X_Q]+X_Q)\frac{E[S]}{1-\lambda E[S]}=t_0+R+(\lambda R+X_Q)\frac{E[S]}{1-\lambda E[S]}$$

Thus, rather than using the generated value of $T$ as the estimator from a simulation run, it is better to stop the simulation at time $t_0$ and use the estimator $t_0+R+(\lambda R+X_Q)\dfrac{E[S]}{1-\lambda E[S]}$. ■
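A Python sketch of this improved estimator: the M/G/1 system is simulated only up to $t_0$, R and $X_Q$ are read off from the FIFO departure times $D_i=\max(A_i,D_{i-1})+S_i$, and the conditional formula is applied. It assumes $\lambda E[S]<1$; draw_service and the other names are illustrative.

```python
import numpy as np

def expected_break_time(lam, draw_service, ES, t0, runs, rng=None):
    """Example 11.19 sketch: for each run, estimate E[T] by
    t0 + R + (lam*R + X_Q) * E[S] / (1 - lam*E[S])."""
    rng = rng or np.random.default_rng()
    ests = []
    for _ in range(runs):
        arrivals, t = [], 0.0
        while True:                              # Poisson arrivals on [0, t0]
            t += rng.exponential(1.0 / lam)
            if t > t0:
                break
            arrivals.append(t)
        R, XQ, last_dep = 0.0, 0, 0.0
        for a in arrivals:                       # FIFO: D_i = max(A_i, D_{i-1}) + S_i
            start = max(a, last_dep)
            last_dep = start + draw_service(rng)
            if start <= t0 < last_dep:
                R = last_dep - t0                # remaining service at t0
            elif start > t0:
                XQ += 1                          # in queue (not yet in service) at t0
        ests.append(t0 + R + (lam * R + XQ) * ES / (1 - lam * ES))
    return float(np.mean(ests))
```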

Figure 11.5  $A(T)=x$.

11.6.3 Control Variates

Again suppose we want to use simulation to estimate $E[g(\mathbf X)]$ where $\mathbf X=(X_1,\ldots,X_n)$. But now suppose that for some function $f$ the expected value of $f(\mathbf X)$ is known—say, $E[f(\mathbf X)]=\mu$. Then for any constant $a$ we can also use

$$W=g(\mathbf X)+a(f(\mathbf X)-\mu)$$

as an estimator of $E[g(\mathbf X)]$. Now,

$$\mathrm{Var}(W)=\mathrm{Var}(g(\mathbf X))+a^2\,\mathrm{Var}(f(\mathbf X))+2a\,\mathrm{Cov}(g(\mathbf X),f(\mathbf X))$$