We have seen that a Poisson process is a counting process for which the times between successive events are independent and identically distributed exponential random variables. One possible generalization is to consider a counting process for which the times between successive events are independent and identically distributed with an arbitrary distribution. Such a counting process is called a renewal process.
Let {N(t),t⩾0} be a counting process and let Xn denote the time between the (n-1)st and the nth event of this process, n⩾1.
Definition 7.1
If the sequence of nonnegative random variables {X1,X2,…} is independent and identically distributed, then the counting process {N(t),t⩾0} is said to be a renewal process.
Thus, a renewal process is a counting process such that the time until the first event occurs has some distribution F, the time between the first and second event has, independently of the time of the first event, the same distribution F, and so on. When an event occurs, we say that a renewal has taken place.
For an example of a renewal process, suppose that we have an infinite supply of lightbulbs whose lifetimes are independent and identically distributed. Suppose also that we use a single lightbulb at a time, and when it fails we immediately replace it with a new one. Under these conditions, {N(t),t⩾0} is a renewal process when N(t) represents the number of lightbulbs that have failed by time t.
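The following minimal simulation sketch (Python with NumPy; the uniform(30, 60) lifetime distribution and the time horizon are illustrative choices, not part of the text) makes the definition concrete: lifetimes are drawn independently from a common distribution, and N(t) counts the failures by time t.

```python
import numpy as np

rng = np.random.default_rng(0)

def renewal_count(t, draw_lifetime):
    """Return N(t), the number of renewals (bulb failures) by time t."""
    n, s = 0, 0.0
    while True:
        s += draw_lifetime()      # S_{n+1} = S_n + X_{n+1}
        if s > t:
            return n              # N(t) = max{n : S_n <= t}
        n += 1

# Illustrative lifetime distribution: uniform on (30, 60) hours.
draw = lambda: rng.uniform(30.0, 60.0)
samples = [renewal_count(1000.0, draw) for _ in range(10_000)]
print("average of N(1000):", np.mean(samples))   # roughly 1000/45, about 22.2
```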
For a renewal process having interarrival times X1,X2,…, let
$$S_0 = 0, \qquad S_n = \sum_{i=1}^{n} X_i, \quad n \geq 1$$
That is, S1=X1 is the time of the first renewal; S2=X1+X2 is the time until the first renewal plus the time between the first and second renewal, that is, S2 is the time of the second renewal. In general, Sn denotes the time of the nth renewal (see Figure 7.1).
We shall let F denote the interarrival distribution and to avoid trivialities, we assume that F(0)=P{Xn=0}<1. Furthermore, we let
$$\mu = E[X_n], \quad n \geq 1$$
be the mean time between successive renewals. It follows from the nonnegativity of Xn and the fact that Xn is not identically 0 that μ>0.
The first question we shall attempt to answer is whether an infinite number of renewals can occur in a finite amount of time. That is, can N(t) be infinite for some (finite) value of t? To show that this cannot occur, we first note that, as Sn is the time of the nth renewal, N(t) may be written as
$$N(t) = \max\{n : S_n \leq t\} \tag{7.1}$$
To understand why Equation (7.1) is valid, suppose, for instance, that S4⩽t but S5>t. Hence, the fourth renewal had occurred by time t but the fifth renewal occurred after time t; or in other words, N(t), the number of renewals that occurred by time t, must equal 4. Now by the strong law of large numbers it follows that, with probability 1,
$$\frac{S_n}{n} \to \mu \quad \text{as } n \to \infty$$
But since μ>0 this means that Sn must be going to infinity as n goes to infinity. Thus, Sn can be less than or equal to t for at most a finite number of values of n, and hence by Equation (7.1), N(t) must be finite.
However, though N(t)<∞ for each t, it is true that, with probability 1,
$$N(\infty) \equiv \lim_{t\to\infty} N(t) = \infty$$
This follows since the only way in which N(∞), the total number of renewals that occur, can be finite is for one of the interarrival times to be infinite.
The distribution of N(t) can be obtained, at least in theory, by first noting the important relationship that the number of renewals by time t is greater than or equal to n if and only if the nth renewal occurs before or at time t. That is,
$$N(t) \geq n \iff S_n \leq t \tag{7.2}$$
From Equation (7.2) we obtain
$$P\{N(t) = n\} = P\{N(t) \geq n\} - P\{N(t) \geq n+1\} = P\{S_n \leq t\} - P\{S_{n+1} \leq t\} \tag{7.3}$$
Now, since the random variables $X_i$, $i \geq 1$, are independent and have a common distribution F, it follows that $S_n = \sum_{i=1}^{n} X_i$ is distributed as $F_n$, the n-fold convolution of F with itself (Section 2.5). Therefore, from Equation (7.3) we obtain
$$P\{N(t) = n\} = F_n(t) - F_{n+1}(t)$$
Example 7.1
Suppose that $P\{X_n = i\} = p(1-p)^{i-1}$, $i \geq 1$. That is, suppose that the interarrival distribution is geometric. Now $S_1 = X_1$ may be interpreted as the number of trials necessary to get a single success when each trial is independent and has a probability p of being a success. Similarly, $S_n$ may be interpreted as the number of trials necessary to attain n successes, and hence has the negative binomial distribution
$$P\{S_n = k\} = \binom{k-1}{n-1} p^n (1-p)^{k-n}, \quad k \geq n$$
Equivalently, since an event independently occurs with probability p at each of the times 1,2,…
$$P\{N(t) = n\} = \binom{[t]}{n} p^n (1-p)^{[t]-n}$$ ■
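As a quick numerical sanity check of this formula (a minimal sketch in Python with NumPy; the values p = 0.3 and t = 10 are illustrative, not from the text), we can simulate the renewal process with geometric interarrival times and compare the empirical distribution of N(t) with the binomial expression above.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
p, t, trials = 0.3, 10, 100_000    # here [t] = t since t is an integer

counts = np.zeros(t + 1)
for _ in range(trials):
    s = n = 0
    while True:
        s += rng.geometric(p)      # interarrival ~ geometric(p) on {1, 2, ...}
        if s > t:
            break
        n += 1
    counts[n] += 1

for n in range(t + 1):
    exact = math.comb(t, n) * p**n * (1 - p)**(t - n)
    print(n, round(counts[n] / trials, 4), round(exact, 4))
```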
Another expression for P(N(t)=n) can be obtained by conditioning on Sn. This yields
$$P\{N(t) = n\} = \int_0^{\infty} P\{N(t) = n \mid S_n = y\}\, f_{S_n}(y)\, dy$$
Now, if the nth event occurred at time y>t, then there would have been fewer than n events by time t. On the other hand, if it occurred at a time y⩽t, then there would be exactly n events by time t if the next interarrival exceeds t-y. Consequently,
$$P\{N(t) = n\} = \int_0^{t} P\{X_{n+1} > t-y\}\, f_{S_n}(y)\, dy = \int_0^{t} (1 - F(t-y))\, f_{S_n}(y)\, dy$$
Example 7.2
If $F(x) = 1 - e^{-\lambda x}$, then $S_n$, being the sum of n independent exponentials with rate λ, will have a gamma (n,λ) distribution. Consequently, the preceding identity gives
$$P\{N(t) = n\} = \int_0^{t} e^{-\lambda(t-y)}\, \lambda e^{-\lambda y}\, \frac{(\lambda y)^{n-1}}{(n-1)!}\, dy = e^{-\lambda t}\, \frac{(\lambda t)^n}{n!}$$ ■
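Since the renewal process with exponential interarrival times is just the Poisson process, we can check the preceding identity by simulation (a sketch; λ = 2 and t = 3 are illustrative choices).

```python
import math
import numpy as np

rng = np.random.default_rng(2)
lam, t, trials = 2.0, 3.0, 100_000

def N(t):
    s = n = 0
    while True:
        s += rng.exponential(1.0 / lam)   # exponential with rate lam
        if s > t:
            return n
        n += 1

sample = np.array([N(t) for _ in range(trials)])
for n in range(12):
    emp = (sample == n).mean()
    pois = math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)
    print(n, round(emp, 4), round(pois, 4))
```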
Let
$$m(t) = E[N(t)]$$
By summing the relation $P\{N(t) \geq n\} = F_n(t)$ over n, we also have $m(t) = \sum_{n=1}^{\infty} F_n(t)$. The function m(t) is known as the mean-value or the renewal function.
It can be shown that the mean-value function m(t) uniquely determines the renewal process. Specifically, there is a one-to-one correspondence between the interarrival distributions F and the mean-value functions m(t).
Another interesting result that we state without proof is that
$$m(t) < \infty \quad \text{for all } t < \infty$$
Remarks
(i) Since m(t) uniquely determines the interarrival distribution, it follows that the Poisson process is the only renewal process having a linear mean-value function.
(ii) Some readers might think that the finiteness of m(t) should follow directly from the fact that, with probability 1, N(t) is finite. However, such reasoning is not valid; consider the following: Let Y be a random variable having the following probability distribution:
$$Y = 2^n \quad \text{with probability } \frac{1}{2^n}, \quad n \geq 1$$
Now,
$$P\{Y < \infty\} = \sum_{n=1}^{\infty} P\{Y = 2^n\} = \sum_{n=1}^{\infty} \frac{1}{2^n} = 1$$
But
$$E[Y] = \sum_{n=1}^{\infty} 2^n P\{Y = 2^n\} = \sum_{n=1}^{\infty} 2^n \frac{1}{2^n} = \infty$$
Hence, even when Y is finite, it can still be true that E[Y]=∞.
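A short simulation (a sketch; drawing the exponent as a geometric random variable is one way to sample Y) makes the point vivid: sample means of Y do not settle down but keep growing with the sample size.

```python
import numpy as np

rng = np.random.default_rng(3)

# Y = 2^n with probability 1/2^n: draw n as a geometric(1/2) variable.
for size in (10**3, 10**5, 10**7):
    ys = 2.0 ** rng.geometric(0.5, size=size)
    print(size, ys.mean())   # the sample mean keeps growing; E[Y] is infinite
```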
An integral equation satisfied by the renewal function can be obtained by conditioning on the time of the first renewal. Assuming that the interarrival distribution F is continuous with density function f this yields
$$m(t) = E[N(t)] = \int_0^{\infty} E[N(t) \mid X_1 = x]\, f(x)\, dx \tag{7.4}$$
Now suppose that the first renewal occurs at a time x that is less than t. Then, using the fact that a renewal process probabilistically starts over when a renewal occurs, it follows that the number of renewals by time t would have the same distribution as 1 plus the number of renewals in the first t-x time units. Therefore,
$$E[N(t) \mid X_1 = x] = \begin{cases} 0, & \text{if } x > t \\ 1 + E[N(t-x)], & \text{if } x \leq t \end{cases}$$
Substituting this back into Equation (7.4) gives
$$m(t) = \int_0^t \bigl[1 + m(t-x)\bigr] f(x)\, dx = F(t) + \int_0^t m(t-x)\, f(x)\, dx \tag{7.5}$$
Equation (7.5) is called the renewal equation and can sometimes be solved to obtain the renewal function.
Example 7.3
One instance in which the renewal equation can be solved is when the interarrival distribution is uniform—say, uniform on (0, 1). We will now present a solution in this case when t⩽1. For such values of t, the renewal function becomes
$$m(t) = t + \int_0^t m(t-x)\, dx = t + \int_0^t m(y)\, dy \quad \text{(by the substitution } y = t - x\text{)}$$
Differentiating the preceding equation yields
$$m'(t) = 1 + m(t)$$
Letting h(t)=1+m(t), we obtain
$$h'(t) = h(t)$$
or
$$\log h(t) = t + C$$
or
$$h(t) = K e^t$$
or
$$m(t) = K e^t - 1$$
Since m(0)=0, we see that K=1, and so we obtain
$$m(t) = e^t - 1, \quad 0 \leq t \leq 1$$ ■
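As a check, we can estimate m(t) by simulating many replications of the renewal process with uniform(0, 1) interarrival times and compare with $e^t - 1$ (a minimal Monte Carlo sketch; the grid of t values is illustrative).

```python
import numpy as np

rng = np.random.default_rng(4)
trials = 100_000

def N(t):
    s = n = 0
    while True:
        s += rng.uniform(0.0, 1.0)
        if s > t:
            return n
        n += 1

for t in (0.25, 0.5, 0.75, 1.0):
    m_hat = np.mean([N(t) for _ in range(trials)])
    print(t, round(m_hat, 3), round(np.exp(t) - 1, 3))
```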
7.3 Limit Theorems and Their Applications
We have shown previously that, with probability 1, N(t) goes to infinity as t goes to infinity. However, it would be nice to know the rate at which N(t) goes to infinity. That is, we would like to be able to say something about limt→∞N(t)/t.
As a prelude to determining the rate at which N(t) grows, let us first consider the random variable SN(t). In words, just what does this random variable represent? Proceeding inductively suppose, for instance, that N(t)=3. Then SN(t)=S3 represents the time of the third event. Since there are only three events that have occurred by time t, S3 also represents the time of the last event prior to (or at) time t. This is, in fact, what SN(t) represents—namely, the time of the last renewal prior to or at time t. Similar reasoning leads to the conclusion that SN(t)+1 represents the time of the first renewal after time t (see Figure 7.2). We now are ready to prove the following.
Proposition 7.1
With probability 1,
$$\frac{N(t)}{t} \to \frac{1}{\mu} \quad \text{as } t \to \infty$$
Proof
Since SN(t) is the time of the last renewal prior to or at time t, and SN(t)+1 is the time of the first renewal after time t, we have
$$S_{N(t)} \leq t < S_{N(t)+1}$$
or
$$\frac{S_{N(t)}}{N(t)} \leq \frac{t}{N(t)} < \frac{S_{N(t)+1}}{N(t)} \tag{7.6}$$
However, since $S_{N(t)}/N(t) = \sum_{i=1}^{N(t)} X_i / N(t)$ is the average of N(t) independent and identically distributed random variables, it follows by the strong law of large numbers that $S_{N(t)}/N(t) \to \mu$ as $N(t) \to \infty$. But since $N(t) \to \infty$ when $t \to \infty$, we obtain
$$\frac{S_{N(t)}}{N(t)} \to \mu \quad \text{as } t \to \infty$$
Furthermore, writing
$$\frac{S_{N(t)+1}}{N(t)} = \left(\frac{S_{N(t)+1}}{N(t)+1}\right)\left(\frac{N(t)+1}{N(t)}\right)$$
we have that SN(t)+1/(N(t)+1)→μ by the same reasoning as before and
$$\frac{N(t)+1}{N(t)} \to 1 \quad \text{as } t \to \infty$$
Hence,
$$\frac{S_{N(t)+1}}{N(t)} \to \mu \quad \text{as } t \to \infty$$
The result now follows by Equation (7.6) since t/N(t) is between two random variables, each of which converges to μ as t→∞. ■
Remarks
(i) The preceding proposition is true even when μ, the mean time between renewals, is infinite. In this case, we interpret 1/μ to be 0.
(ii) The number 1/μ is called the rate of the renewal process.
(iii) Because the average time between renewals is μ, it is quite intuitive that the average rate at which renewals occur is 1 per every μ time units. ■
Example 7.4
Beverly has a radio that works on a single battery. As soon as the battery in use fails, Beverly immediately replaces it with a new battery. If the lifetime of a battery (in hours) is distributed uniformly over the interval (30, 60), then at what rate does Beverly have to change batteries?
Solution: If we let N(t) denote the number of batteries that have failed by time t, we have by Proposition 7.1 that the rate at which Beverly replaces batteries is given by
$$\lim_{t\to\infty} \frac{N(t)}{t} = \frac{1}{\mu} = \frac{1}{45}$$
That is, in the long run, Beverly will have to replace one battery every 45 hours. ■
Example 7.5
Suppose in Example 7.4 that Beverly does not keep any surplus batteries on hand, and so each time a failure occurs she must go and buy a new battery. If the amount of time it takes for her to get a new battery is uniformly distributed over (0,1), then what is the average rate that Beverly changes batteries?
Solution: In this case the mean time between renewals is given by
$$\mu = E[U_1] + E[U_2]$$
where U1 is uniform over (30, 60) and U2 is uniform over (0, 1). Hence,
$$\mu = 45 + \tfrac{1}{2} = \tfrac{91}{2}$$
and so in the long run, Beverly will be putting in a new battery at the rate of 2/91 per hour. That is, she will put in two new batteries every 91 hours. ■
Example 7.6
Suppose that potential customers arrive at a single-server bank in accordance with a Poisson process having rate λ. However, suppose that the potential customer will enter the bank only if the server is free when he arrives. That is, if there is already a customer in the bank, then our arriver, rather than entering the bank, will go home. If we assume that the amount of time spent in the bank by an entering customer is a random variable having distribution G, then
(a) what is the rate at which customers enter the bank?
(b) what proportion of potential customers actually enter the bank?
Solution: In answering these questions, let us suppose that at time 0 a customer has just entered the bank. (That is, we define the process to start when the first customer enters the bank.) If we let μG denote the mean service time, then, by the memoryless property of the Poisson process, it follows that the mean time between entering customers is
$$\mu = \mu_G + \frac{1}{\lambda}$$
Hence, the rate at which customers enter the bank will be given by
$$\frac{1}{\mu} = \frac{1}{\mu_G + 1/\lambda} = \frac{\lambda}{1 + \lambda\mu_G}$$
On the other hand, since potential customers will be arriving at a rate λ, it follows that the proportion of them entering the bank will be given by
$$\frac{\lambda/(1 + \lambda\mu_G)}{\lambda} = \frac{1}{1 + \lambda\mu_G}$$
In particular if λ=2 and μG=2, then only one customer out of five will actually enter the system. ■
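The following simulation sketch checks this proportion. The entering proportion depends on G only through its mean, so the choice below of an exponential service distribution with mean $\mu_G = 2$ (together with λ = 2) is only for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, mu_G = 2.0, 2.0            # arrival rate; mean service time
horizon = 200_000.0

t = free_at = 0.0
arrivals = entered = 0
while True:
    t += rng.exponential(1.0 / lam)       # next potential customer
    if t > horizon:
        break
    arrivals += 1
    if t >= free_at:                      # server free: customer enters
        entered += 1
        free_at = t + rng.exponential(mu_G)   # assumed G: exponential

print("entering proportion:", round(entered / arrivals, 4))
print("theory 1/(1 + lam*mu_G):", 1.0 / (1.0 + lam * mu_G))   # 0.2
```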
A somewhat unusual application of Proposition 7.1 is provided by our next example.
Example 7.7
A sequence of independent trials, each of which results in outcome number i with probability Pi,i=1,…,n, ∑i=1nPi=1, is observed until the same outcome occurs k times in a row; this outcome then is declared to be the winner of the game. For instance, if k=2 and the sequence of outcomes is 1, 2, 4, 3, 5, 2, 1, 3, 3, then we stop after nine trials and declare outcome number 3 the winner. What is the probability that i wins, i=1,…,n, and what is the expected number of trials?
Solution: We begin by computing the expected number of coin tosses, call it E[T], until a run of k successive heads occurs when the tosses are independent and each lands on heads with probability p. By conditioning on the time of the first nonhead, we obtain
$$E[T] = \sum_{j=1}^{k} (1-p) p^{j-1} (j + E[T]) + k p^k$$
Solving this for E[T] yields
$$E[T] = k + \frac{1-p}{p^k} \sum_{j=1}^{k} j p^{j-1}$$
Upon simplifying, we obtain
$$E[T] = \frac{1 + p + \cdots + p^{k-1}}{p^k} = \frac{1 - p^k}{p^k (1-p)} \tag{7.7}$$
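Before returning to the game, Equation (7.7) is easy to verify by simulation (a sketch; p = 1/2 and k = 5 are illustrative choices, giving E[T] = 62).

```python
import numpy as np

rng = np.random.default_rng(6)
p, k, trials = 0.5, 5, 50_000

def tosses_until_run():
    run = total = 0
    while run < k:                 # stop at the first run of k heads
        total += 1
        run = run + 1 if rng.random() < p else 0
    return total

est = np.mean([tosses_until_run() for _ in range(trials)])
print("simulated E[T]:", round(est, 2))
print("Equation (7.7):", (1 - p**k) / (p**k * (1 - p)))   # 62.0
```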
Now, let us return to our example, and let us suppose that as soon as the winner of a game has been determined we immediately begin playing another game. For each i let us determine the rate at which outcome i wins. Now, every time i wins, everything starts over again and thus wins by i constitute renewals. Hence, from Proposition 7.1, the
$$\text{rate at which } i \text{ wins} = \frac{1}{E[N_i]}$$
where Ni denotes the number of trials played between successive wins of outcome i. Hence, from Equation (7.7) we see that
$$\text{rate at which } i \text{ wins} = \frac{P_i^k (1 - P_i)}{1 - P_i^k} \tag{7.8}$$
Hence, the long-run proportion of games that are won by number i is given by
$$\text{proportion of games } i \text{ wins} = \frac{\text{rate at which } i \text{ wins}}{\sum_{j=1}^{n} \text{rate at which } j \text{ wins}} = \frac{P_i^k(1-P_i)/(1-P_i^k)}{\sum_{j=1}^{n} P_j^k(1-P_j)/(1-P_j^k)}$$
However, it follows from the strong law of large numbers that the long-run proportion of games that i wins will, with probability 1, be equal to the probability that i wins any given game. Hence,
$$P\{i \text{ wins}\} = \frac{P_i^k(1-P_i)/(1-P_i^k)}{\sum_{j=1}^{n} P_j^k(1-P_j)/(1-P_j^k)}$$
To compute the expected time of a game, we first note that the
$$\text{rate at which games end} = \sum_{i=1}^{n} \text{rate at which } i \text{ wins} = \sum_{i=1}^{n} \frac{P_i^k(1-P_i)}{1 - P_i^k} \quad \text{(from Equation (7.8))}$$
Now, as everything starts over when a game ends, it follows by Proposition 7.1 that the rate at which games end is equal to the reciprocal of the mean time of a game. Hence,
$$E[\text{time of a game}] = \frac{1}{\text{rate at which games end}} = \frac{1}{\sum_{i=1}^{n} P_i^k(1-P_i)/(1-P_i^k)}$$ ■
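The following sketch simulates the game for an illustrative choice of probabilities (n = 3, k = 2, chosen here for concreteness) and compares the empirical win proportions and mean game length with the formulas just derived.

```python
import numpy as np

rng = np.random.default_rng(7)
P = np.array([0.5, 0.3, 0.2])     # illustrative outcome probabilities
k, games = 2, 100_000

wins = np.zeros(len(P))
total = 0
for _ in range(games):
    last, run = -1, 0
    while run < k:                        # play until k identical outcomes in a row
        i = rng.choice(len(P), p=P)
        run = run + 1 if i == last else 1
        last = i
        total += 1
    wins[last] += 1

rates = P**k * (1 - P) / (1 - P**k)       # Equation (7.8)
print("simulated win probabilities:", wins / games)
print("theoretical:", rates / rates.sum())
print("mean game length:", total / games, "vs", 1 / rates.sum())
```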
Proposition 7.1 says that the average renewal rate up to time t will, with probability 1, converge to 1/μ as t→∞. What about the expected average renewal rate? Is it true that m(t)/t also converges to 1/μ? The answer is yes, and this result is known as the elementary renewal theorem.
Theorem 7.1
Elementary Renewal Theorem
$$\frac{m(t)}{t} \to \frac{1}{\mu} \quad \text{as } t \to \infty$$
As before, 1/μ is interpreted as 0 when μ=∞.
Remark
At first glance it might seem that the elementary renewal theorem should be a simple consequence of Proposition 7.1. That is, since the average renewal rate will, with probability 1, converge to 1/μ, should this not imply that the expected average renewal rate also converges to 1/μ? We must, however, be careful; consider the next example.
Example 7.8
Let U be a random variable which is uniformly distributed on (0, 1); and define the random variables Yn,n⩾1, by
$$Y_n = \begin{cases} 0, & \text{if } U > 1/n \\ n, & \text{if } U \leq 1/n \end{cases}$$
Now, since, with probability 1, U will be greater than 0, it follows that Yn will equal 0 for all sufficiently large n. That is, Yn will equal 0 for all n large enough so that 1/n<U. Hence, with probability 1,
$$Y_n \to 0 \quad \text{as } n \to \infty$$
However,
$$E[Y_n] = n P\left\{U \leq \frac{1}{n}\right\} = n \cdot \frac{1}{n} = 1$$
Therefore, even though the sequence of random variables Yn converges to 0, the expected values of the Yn are all identically 1. ■
To prove the elementary renewal theorem we will make use of an identity known as Wald’s equation. Before stating Wald’s equation we need to introduce the concept of a stopping time for a sequence of independent random variables.
Definition
The nonnegative integer-valued random variable N is said to be a stopping time for a sequence of independent random variables $X_1, X_2, \ldots$ if the event $\{N = n\}$ is independent of $X_{n+1}, X_{n+2}, \ldots$, for all $n = 1, 2, \ldots$.
The idea behind a stopping time is that we imagine that the Xi are observed in sequence, first X1, then X2, and so on, and that N denotes the number of them observed before stopping. Because the event that we stop after having observed X1,…,Xn can only depend on these n values, and not on future unobserved values, it must be independent of these future values.
Example 7.9
Suppose that X1,X2,… is a sequence of independent and identically distributed random variables with
P(Xi=1)=p=1-P(Xi=0)
where p>0. If we define
$$N = \min\{n : X_1 + \cdots + X_n = r\}$$
then N is a stopping time for the sequence. If we imagine that trials are being performed in sequence and that Xi=1 corresponds to a success on trial i, then N is the number of trials needed until there have been a total of r successes when each trial is independently a success with probability p. ■
Example 7.10
Suppose that X1,X2,… is a sequence of independent and identically distributed random variables with
P(Xi=1)=1/2=1-P(Xi=-1)
If
$$N = \min\{n : X_1 + \cdots + X_n = 1\}$$
then N is a stopping time for the sequence. N can be regarded as the stopping time for a gambler who on each play is equally likely to win or lose 1, and who is going to stop the first time he is winning money. (Because the successive winnings of the gambler are a symmetric random walk, which we showed in Chapter 4 to be a recurrent Markov chain, it follows that P(N<∞)=1.) ■
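Although P(N<∞)=1 for this stopping time, it is a classical fact that E[N]=∞, which is why the finiteness condition E[N]<∞ in the upcoming theorem matters. The sketch below (with an illustrative cap on the number of steps, since unbounded runs are impractical to simulate) shows that nearly every walk reaches +1, while the sample mean of N blows up as the cap grows.

```python
import numpy as np

rng = np.random.default_rng(8)
trials, cap = 2_000, 100_000
lengths, capped = [], 0

for _ in range(trials):
    s = 0
    for n in range(1, cap + 1):
        s += 1 if rng.random() < 0.5 else -1
        if s == 1:                 # first time the walk is at +1
            lengths.append(n)
            break
    else:
        capped += 1                # never reached +1 within cap steps

print("fraction still unfinished after", cap, "steps:", capped / trials)
print("sample mean of N over finished walks:", round(np.mean(lengths), 1))
# P(N < infinity) = 1, yet E[N] = infinity: the sample mean keeps growing
# as the cap increases, so Wald's equation (which needs E[N] < infinity)
# does not apply to this stopping time.
```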
We are now ready for Wald’s equation.
Theorem 7.2
Wald’s Equation
If X1,X2,…, is a sequence of independent and identically distributed random variables with finite expectation E[X], and if N is a stopping time for this sequence such that E[N]<∞, then
$$E\left[\sum_{n=1}^{N} X_n\right] = E[N]\, E[X]$$
Proof
For n=1,2,…, let
$$I_n = \begin{cases} 1, & \text{if } n \leq N \\ 0, & \text{if } n > N \end{cases}$$
and note that
$$\sum_{n=1}^{N} X_n = \sum_{n=1}^{\infty} X_n I_n$$
Taking expectations gives
$$E\left[\sum_{n=1}^{N} X_n\right] = E\left[\sum_{n=1}^{\infty} X_n I_n\right] = \sum_{n=1}^{\infty} E[X_n I_n]$$
Now In=1 if N⩾n, which means that In=1 if we have not yet stopped after having observed X1,…,Xn-1. But this implies that the value of In is determined before Xn has been observed, and thus Xn is independent of In. Consequently,
$$E[X_n I_n] = E[X_n]\, E[I_n] = E[X]\, E[I_n]$$
showing that
$$E\left[\sum_{n=1}^{N} X_n\right] = E[X] \sum_{n=1}^{\infty} E[I_n] = E[X]\, E\left[\sum_{n=1}^{\infty} I_n\right] = E[X]\, E[N]$$ ■
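A quick numerical illustration of Wald's equation, using the stopping time of Example 7.9 (a sketch; p = 0.25 and r = 5 are illustrative values): since $X_1 + \cdots + X_N$ always equals r exactly and E[X] = p, Wald's equation forces E[N] = r/p.

```python
import numpy as np

rng = np.random.default_rng(9)
p, r, trials = 0.25, 5, 50_000

Ns = []
for _ in range(trials):
    successes = n = 0
    while successes < r:           # stop at the r-th success
        n += 1
        if rng.random() < p:
            successes += 1
    Ns.append(n)

# Wald: E[X_1 + ... + X_N] = E[N] E[X].  The sum is always exactly r,
# and E[X] = p, so E[N] must equal r/p.
print("simulated E[N]:", np.mean(Ns), "  r/p =", r / p)
```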
To apply Wald’s equation to renewal theory, let X1,X2,… be the sequence of interarrival times of a renewal process. If we observe these one at a time and then stop at the first renewal after time t, then we would stop after having observed X1,…,XN(t)+1, showing that N(t)+1 is a stopping time for the sequence of interarrival times. For a more formal argument that N(t)+1 is a stopping time for the sequence of interarrival times, note that N(t)=n-1 if and only if the (n-1)st renewal occurs by time t and the nth renewal occurs after time t. That is,
$$N(t)+1 = n \iff N(t) = n-1 \iff X_1 + \cdots + X_{n-1} \leq t,\; X_1 + \cdots + X_n > t$$
showing that the event that N(t)+1=n depends only on the values of X1,…,Xn. We thus have the following corollary of Wald’s equation.
Proposition 7.2
If X1,X2,…, are the interarrival times of a renewal process then
$$E[X_1 + \cdots + X_{N(t)+1}] = E[X]\, E[N(t)+1]$$
That is,
$$E[S_{N(t)+1}] = \mu[m(t)+1]$$
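Proposition 7.2 can be checked numerically. The sketch below uses uniform(0, 1) interarrival times with t = 0.8 (illustrative values), so that $m(t) = e^t - 1$ is known from Example 7.3 and both sides can be compared against $\mu(m(t)+1) = 0.5\, e^{0.8} \approx 1.1128$.

```python
import numpy as np

rng = np.random.default_rng(10)
t, mu, trials = 0.8, 0.5, 200_000    # uniform(0,1) interarrivals, mu = 1/2

S_next, N_t = [], []
for _ in range(trials):
    s = n = 0
    while s <= t:
        s += rng.uniform(0.0, 1.0)
        n += 1
    S_next.append(s)      # S_{N(t)+1}, the first renewal time after t
    N_t.append(n - 1)     # N(t)

print("E[S_{N(t)+1}]:", round(np.mean(S_next), 4))
print("mu*(m(t)+1):  ", round(mu * (np.mean(N_t) + 1), 4))
```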
We are now ready to prove the elementary renewal theorem.
Proof of Elementary Renewal Theorem
Because SN(t)+1 is the time of the first renewal after t, it follows that
$$S_{N(t)+1} = t + Y(t)$$
where Y(t), called the excess at time t, is defined as the time from t until the next renewal. Taking expectations of the preceding yields, upon applying Proposition 7.2, that
$$\mu(m(t)+1) = t + E[Y(t)] \tag{7.9}$$
which can be written as
$$\frac{m(t)}{t} = \frac{1}{\mu} + \frac{E[Y(t)]}{t\mu} - \frac{1}{t}$$
Because Y(t)⩾0, the preceding yields that $\frac{m(t)}{t} \geq \frac{1}{\mu} - \frac{1}{t}$, showing that
$$\liminf_{t\to\infty} \frac{m(t)}{t} \geq \frac{1}{\mu}$$
To show that $\limsup_{t\to\infty} m(t)/t \leq 1/\mu$, let us suppose that there is a value M<∞ such that $P(X_i < M) = 1$. Because this implies that Y(t) must also be less than M, we have that E[Y(t)]<M, and so
$$\frac{m(t)}{t} \leq \frac{1}{\mu} + \frac{M}{t\mu} - \frac{1}{t}$$
which gives that
$$\limsup_{t\to\infty} \frac{m(t)}{t} \leq \frac{1}{\mu}$$
and thus completes the proof of the elementary renewal theorem when the interarrival times are bounded. When the interarrival times $X_1, X_2, \ldots$ are unbounded, fix M>0, and let $\{N_M(t), t \geq 0\}$ be the renewal process with interarrival times $\min(X_i, M)$, $i \geq 1$. Because $\min(X_i, M) \leq X_i$ for all i, it follows that $N_M(t) \geq N(t)$ for all t. (That is, because each interarrival time of $N_M(t)$ is no larger than its corresponding interarrival time of N(t), it must have at least as many renewals by time t.) Consequently, $E[N(t)] \leq E[N_M(t)]$, showing that
$$\limsup_{t\to\infty} \frac{E[N(t)]}{t} \leq \lim_{t\to\infty} \frac{E[N_M(t)]}{t} = \frac{1}{E[\min(X_i, M)]}$$
where the equality follows because the interarrival times of $N_M(t)$ are bounded. Using that $\lim_{M\to\infty} E[\min(X_i, M)] = E[X_i] = \mu$, we obtain from the preceding upon letting $M \to \infty$ that
$$\limsup_{t\to\infty} \frac{m(t)}{t} \leq \frac{1}{\mu}$$
and the proof is completed. ■
Equation (7.9) shows that if we can determine E[Y(t)], the mean excess at time t, then we can compute m(t), and vice versa.
Example 7.11
Consider the renewal process whose interarrival distribution is the convolution of two exponentials; that is,
$$F = F_1 * F_2, \quad \text{where } F_i(t) = 1 - e^{-\mu_i t},\ i = 1, 2$$
We will determine the renewal function by first determining E[Y(t)]. To obtain the mean excess at t, imagine that each renewal corresponds to a new machine being put in use, and suppose that each machine has two components—initially component 1 is employed and this lasts an exponential time with rate μ1, and then component 2, which functions for an exponential time with rate μ2, is employed. When component 2 fails, a new machine is put in use (that is, a renewal occurs). Now consider the process {X(t),t⩾0} where X(t) is i if a type i component is in use at time t. It is easy to see that {X(t),t⩾0} is a two-state continuous-time Markov chain, and so, using the results of Example 6.11, its transition probabilities are
$$P_{11}(t) = \frac{\mu_1}{\mu_1 + \mu_2}\, e^{-(\mu_1 + \mu_2)t} + \frac{\mu_2}{\mu_1 + \mu_2}$$
To compute the expected remaining life of the machine in use at time t, we condition on whether it is using its first or second component: for if it is still using its first component, then its remaining life is 1/μ1+1/μ2, whereas if it is already using its second component, then its remaining life is 1/μ2. Hence, letting p(t) denote the probability that the machine in use at time t is using its first component, we have
$$E[Y(t)] = \left(\frac{1}{\mu_1} + \frac{1}{\mu_2}\right) p(t) + \frac{1 - p(t)}{\mu_2} = \frac{1}{\mu_2} + \frac{p(t)}{\mu_1}$$
But, since at time 0 the first machine is utilizing its first component, it follows that p(t)=P11(t), and so, upon using the preceding expression of P11(t), we obtain