Back in the good old days of the Net, circa the early 1990s, we didn’t have the Web and HTTP and graphical browsers. Instead, we had Usenet news and FTP and command-line interfaces, and we liked it that way! But as good as the good old days were, there were some problems. For instance, when we were downloading kilobytes of free software from a popular FTP site over our 2,400 bps modems using Kermit, we would often encounter error messages like this one:
% ftp eunl.java.sun.com
Connected to eunl.javasoft.com.
220 softwarenl FTP server (wu-2.4.2-academ[BETA-16]+opie-2.32(1) 981105) ready.
Name (eunl.java.sun.com:elharo): anonymous
530-
530- Server is busy. Please try again later or try one of our other
530- ftp servers at ftp.java.sun.com. Thank you.
530-
530 User anonymous access denied.
Login failed.
In fact, in the days when the Internet had only a few million users instead of a few billion, we were far more likely to come across an overloaded and congested site than we are today. The problem was that most FTP servers forked a new process for each connection (i.e., 100 simultaneous users meant 100 additional processes to handle). Because processes are fairly heavyweight items, too many could rapidly bring a server to its knees. The problem wasn’t that the machines weren’t powerful enough or the network fast enough; it was that the FTP servers were poorly implemented. Many more simultaneous users could be served if a new process wasn’t needed for each connection.
Early web servers suffered from this problem as well, although the problem was masked a little by the transitory nature of HTTP connections. Because web pages and their embedded images tend to be small (at least compared to the software archives commonly retrieved by FTP) and because web browsers “hang up” the connection after each file is retrieved instead of staying connected for minutes or hours at a time, web users don’t put nearly as much load on a server as FTP users do. However, web server performance still degrades as usage grows. The fundamental problem is that while it’s easy to write code that handles each incoming connection and each new task as a separate process (at least on Unix), this solution doesn’t scale. By the time a server is attempting to handle a thousand or more simultaneous connections, performance slows to a crawl.
There are at least two solutions to this problem. The first is to reuse processes rather than spawning new ones. When the server starts up, a fixed number of processes (say, 300) are spawned to handle requests. Incoming requests are placed in a queue. Each process removes one request from the queue, services the request, then returns to the queue to get the next request. There are still 300 separate processes running, but because all the overhead of building up and tearing down the processes is avoided, these 300 processes can now do the work of 1,000. These numbers are rough estimates. Your exact mileage may vary, especially if your server hasn’t yet reached the volume where scalability issues come into play. Still, whatever mileage you get out of spawning new processes, you should be able to do much better by reusing old processes.
The second solution to this problem is to use lightweight threads instead of heavyweight processes to handle connections. Whereas each separate process has its own block of memory, threads are easier on resources because they share memory. Using threads instead of processes can buy you another factor of three in server performance. By combining this with a pool of reusable threads (as opposed to a pool of reusable processes), your server can run nine times faster, all on the same hardware and network connection! The impact of running many different threads on the server hardware is relatively minimal because they all run within one process. Most Java virtual machines keel over due to memory exhaustion somewhere between 4,000 and 20,000 simultaneous threads. However, by using a thread pool instead of spawning new threads for each connection, fewer than a hundred threads can handle thousands of short connections per minute.
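As a rough sketch of the pooling idea (this is not code from this chapter; the class name, port number, and pool size are all made up for illustration), a fixed pool of worker threads can service incoming connections while the executor queues up any extras:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledEchoServer {

  public static void main(String[] args) throws IOException {
    ExecutorService pool = Executors.newFixedThreadPool(50); // reuse 50 threads
    ServerSocket server = new ServerSocket(2347);            // arbitrary port for illustration
    while (true) {
      final Socket connection = server.accept();             // extra connections wait their turn
      pool.submit(new Runnable() {                           // a pooled thread handles each one
        @Override
        public void run() {
          handle(connection);
        }
      });
    }
  }

  private static void handle(Socket connection) {
    try {
      connection.getOutputStream().write("Hello\r\n".getBytes("US-ASCII"));
    } catch (IOException ex) {
      System.err.println(ex);
    } finally {
      try {
        connection.close();
      } catch (IOException ex) {
        // ignore
      }
    }
  }
}

The pool never creates more than 50 threads, no matter how many clients connect; additional connections simply wait in the executor's internal queue until a worker frees up.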
Unfortunately, this increased performance doesn’t come for free. There’s a cost in program complexity. In particular, multithreaded servers (and other multithreaded programs) require developers to address concerns that aren’t issues for single-threaded programs, particularly issues of safety and liveness. Because different threads share the same memory, it’s entirely possible for one thread to stomp all over the variables and data structures used by another thread. This is similar to the way one program running on a nonmemory-protected operating system such as Windows 95 can crash the entire system. Consequently, different threads have to be extremely careful about which resources they use when. Generally, each thread must agree to use certain resources only when it’s sure those resources can’t change or that it has exclusive access to them. However, it’s also possible for two threads to be too careful, each waiting for exclusive access to resources it will never get. This can lead to deadlock, in which two threads are each waiting for resources the other possesses. Neither thread can proceed without the resources that the other thread has reserved, but neither is willing to give up the resources it has already.
A thread with a little t is a separate, independent path of execution in the virtual machine. A Thread with a capital T is an instance of the java.lang.Thread class. There is a one-to-one relationship between threads executing in the virtual machine and Thread objects constructed by the virtual machine. Most of the time it's obvious from the context which one is meant, if the difference is really important.

To start a new thread running in the virtual machine, you construct an instance of the Thread class and invoke its start() method, like this:
Thread t = new Thread();
t.start();
Of course, this thread isn't very interesting because it doesn't have anything to do. To give a thread something to do, you either subclass the Thread class and override its run() method, or implement the Runnable interface and pass the Runnable object to the Thread constructor. I generally prefer the second option because it separates the task that the thread performs from the thread itself more cleanly, but you will see both techniques used in this book and elsewhere. In both cases, the key is the run() method, which has this signature:
public void run()
You’re going to put all the work the thread does in this one method. This method may invoke other methods; it may construct other objects; it may even spawn other threads. However, the thread starts here and it stops here. When the run()
method completes, the thread dies. In essence, the run()
method is to a thread what the main()
method is to a traditional nonthreaded program. A single-threaded program exits when the main()
method returns. A multithreaded program exits when both the main()
method and the run()
methods of all nondaemon threads return. (Daemon threads perform background tasks such as garbage collection and don’t prevent the virtual machine from exiting.)
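For example, marking a thread as a daemon is just a matter of calling setDaemon(true) before start(). This sketch (the housekeeping task here is invented purely for illustration) shows that the virtual machine exits as soon as main() returns, even though the daemon thread is still looping:

public class DaemonExample {

  public static void main(String[] args) {
    Thread housekeeper = new Thread(new Runnable() {
      @Override
      public void run() {
        while (true) {
          // pretend to do some periodic background cleanup
          try {
            Thread.sleep(1000);
          } catch (InterruptedException ex) {
            return;
          }
        }
      }
    });
    housekeeper.setDaemon(true); // must be called before start()
    housekeeper.start();
    System.out.println("main() is finished; the VM exits despite the running daemon");
  }
}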
Consider a program that calculates the Secure Hash Algorithm (SHA) digest for many files. To a large extent, this is an I/O-bound program (i.e., its speed is limited by the amount of time it takes to read the files from the disk). If you write it as a standard program that processes the files in series, the program is going to spend a lot of time waiting for the hard drive to return the data. This limit is even more characteristic of network programs: they execute faster than the network can supply input. Consequently, they spend a lot of time blocked. This is time that other threads could use, either to process other input sources or to do something that doesn’t rely on slow input. (Not all threaded programs share this characteristic. Sometimes, even if none of the threads have a lot of spare time to allot to other threads, it’s simply easier to design a program by breaking it into multiple threads that perform independent operations.)
Example 3-1 is a subclass of Thread whose run() method calculates a 256-bit SHA-2 message digest for a specified file. It does this by reading the file with a DigestInputStream. This filter stream calculates a cryptographic hash function as it reads the file. When it's finished reading, the hash function is available from the digest() method.
import java.io.*;
import java.security.*;
import javax.xml.bind.*; // for DatatypeConverter; requires Java 6 or JAXB 1.0

public class DigestThread extends Thread {

  private String filename;

  public DigestThread(String filename) {
    this.filename = filename;
  }

  @Override
  public void run() {
    try {
      FileInputStream in = new FileInputStream(filename);
      MessageDigest sha = MessageDigest.getInstance("SHA-256");
      DigestInputStream din = new DigestInputStream(in, sha);
      while (din.read() != -1) ; // read entire file
      din.close();
      byte[] digest = sha.digest();
      StringBuilder result = new StringBuilder(filename);
      result.append(": ");
      result.append(DatatypeConverter.printHexBinary(digest));
      System.out.println(result);
    } catch (IOException ex) {
      System.err.println(ex);
    } catch (NoSuchAlgorithmException ex) {
      System.err.println(ex);
    }
  }

  public static void main(String[] args) {
    for (String filename : args) {
      Thread t = new DigestThread(filename);
      t.start();
    }
  }
}
The main() method reads filenames from the command line and starts a new DigestThread for each one. The work of the thread is actually performed in the run() method. Here, a DigestInputStream reads the file. Then the resulting digest is printed on System.out in hexadecimal encoding. Notice that the entire output from this thread is first built in a local StringBuilder variable result. This is then printed on the console with one method invocation. The more obvious path of printing the pieces one at a time using System.out.print() is not taken. There's a reason for that, which we'll discuss soon.
Because the signature of the run() method is fixed, you can't pass arguments to it or return values from it. Consequently, you need different ways to pass information into the thread and get information out of it. The simplest way to pass information in is to pass arguments to the constructor, which sets fields in the Thread subclass, as done here.
Getting information out of a thread back into the original calling thread is trickier because of the asynchronous nature of threads. Example 3-1 sidesteps that problem by never passing any information back to the calling thread and simply printing the results on System.out. Most of the time, however, you'll want to pass the information to other parts of the program. You can store the result of the calculation in a field and provide a getter method to return the value of that field. However, how do you know when the calculation of that value is complete? What do you return if somebody calls the getter method before the value has been calculated? This is quite tricky, and we'll discuss it more later in this chapter.
If you subclass Thread, you should override run() and nothing else! The various other methods of the Thread class (for example, start(), interrupt(), join(), sleep(), and so on) all have very specific semantics and interactions with the virtual machine that are difficult to reproduce in your own code. You should override run() and provide additional constructors and other methods as necessary, but you should not replace any of the other standard Thread methods.
One way to avoid overriding the standard Thread methods is not to subclass Thread. Instead, write the task you want the thread to perform as an instance of the Runnable interface. This interface declares the run() method, exactly the same as the Thread class:
public void run()
Other than this method, which any class implementing this interface must provide, you are completely free to create any other methods with any other names you choose, all without any possibility of unintentionally interfering with the behavior of the thread. This also allows you to place the thread's task in a subclass of some other class, such as Applet or HTTPServlet. To start a thread that performs the Runnable's task, pass the Runnable object to the Thread constructor. For example:
Thread t = new Thread(myRunnableObject);
t.start();
It’s easy to recast most problems that subclass Thread
into Runnable
forms. Example 3-2 demonstrates this by rewriting Example 3-1 to use the Runnable
interface rather than subclassing Thread
. Aside from the name change, the only modifications that are necessary are changing extends Thread
to implements
Runnable
and passing a DigestRunnable
object to the Thread
constructor in the main()
method. The essential logic of the program is unchanged.
import java.io.*;
import java.security.*;
import javax.xml.bind.*; // for DatatypeConverter; requires Java 6 or JAXB 1.0

public class DigestRunnable implements Runnable {

  private String filename;

  public DigestRunnable(String filename) {
    this.filename = filename;
  }

  @Override
  public void run() {
    try {
      FileInputStream in = new FileInputStream(filename);
      MessageDigest sha = MessageDigest.getInstance("SHA-256");
      DigestInputStream din = new DigestInputStream(in, sha);
      while (din.read() != -1) ; // read entire file
      din.close();
      byte[] digest = sha.digest();
      StringBuilder result = new StringBuilder(filename);
      result.append(": ");
      result.append(DatatypeConverter.printHexBinary(digest));
      System.out.println(result);
    } catch (IOException ex) {
      System.err.println(ex);
    } catch (NoSuchAlgorithmException ex) {
      System.err.println(ex);
    }
  }

  public static void main(String[] args) {
    for (String filename : args) {
      DigestRunnable dr = new DigestRunnable(filename);
      Thread t = new Thread(dr);
      t.start();
    }
  }
}
There’s no strong reason to prefer implementing Runnable
to extending Thread
or vice versa in the general case. In a few special cases, such as Example 3-14 later in this chapter, it may be useful to invoke some instance methods of the Thread
class from within the constructor for each Thread
object. This requires using a subclass. In other specific cases, it may be necessary to place the run()
method in a class that extends another class, such as HTTPServlet
, in which case the Runnable
interface is essential. Finally, some object-oriented purists argue that the task that a thread undertakes is not really a kind of Thread
, and therefore should be placed in a separate class or interface such as Runnable
rather than in a subclass of Thread
. I half agree with them, although I don’t think the argument is as strong as it’s sometimes made out to be. Consequently, I’ll mostly use the Runnable
interface in this book, but you should feel free to do whatever seems most convenient.
One of the hardest things for programmers accustomed to traditional, single-threaded procedural models to grasp when moving to a multithreaded environment is how to return information from a thread. Getting information out of a finished thread is one of the most commonly misunderstood aspects of multithreaded programming. The run() method and the start() method don't return any values. For example, suppose that instead of simply printing out the SHA-256 digest, as in Examples 3-1 and 3-2, the digest thread needs to return the digest to the main thread of execution. Most people's first reaction is to store the result in a field and provide a getter method, as shown in Examples 3-3 and 3-4. Example 3-3 is a Thread subclass that calculates a digest for a specified file. Example 3-4 is a simple command-line user interface that receives filenames and spawns threads to calculate digests for them.
import java.io.*;
import java.security.*;

public class ReturnDigest extends Thread {

  private String filename;
  private byte[] digest;

  public ReturnDigest(String filename) {
    this.filename = filename;
  }

  @Override
  public void run() {
    try {
      FileInputStream in = new FileInputStream(filename);
      MessageDigest sha = MessageDigest.getInstance("SHA-256");
      DigestInputStream din = new DigestInputStream(in, sha);
      while (din.read() != -1) ; // read entire file
      din.close();
      digest = sha.digest();
    } catch (IOException ex) {
      System.err.println(ex);
    } catch (NoSuchAlgorithmException ex) {
      System.err.println(ex);
    }
  }

  public byte[] getDigest() {
    return digest;
  }
}
import javax.xml.bind.*; // for DatatypeConverter

public class ReturnDigestUserInterface {

  public static void main(String[] args) {
    for (String filename : args) {
      // Calculate the digest
      ReturnDigest dr = new ReturnDigest(filename);
      dr.start();

      // Now print the result
      StringBuilder result = new StringBuilder(filename);
      result.append(": ");
      byte[] digest = dr.getDigest();
      result.append(DatatypeConverter.printHexBinary(digest));
      System.out.println(result);
    }
  }
}
The ReturnDigest class stores the result of the calculation in the private field digest, which is accessed via getDigest(). The main() method in ReturnDigestUserInterface loops through a list of files from the command line. It starts a new ReturnDigest thread for each file and then tries to retrieve the result using getDigest(). However, when you run this program, the result may not be what you expect:
D:\JAVA\JNP4\examples\03>java ReturnDigestUserInterface *.java
Exception in thread "main" java.lang.NullPointerException
at javax.xml.bind.DatatypeConverterImpl.printHexBinary(DatatypeConverterImpl.java:358)
at javax.xml.bind.DatatypeConverter.printHexBinary(DatatypeConverter.java:560)
at ReturnDigestUserInterface.main(ReturnDigestUserInterface.java:15)
The problem is that the main program gets the digest and uses it before the thread has had a chance to initialize it. Although this flow of control would work in a single-threaded program in which dr.start() simply invoked the run() method in the same thread, that's not what happens here. The calculations that dr.start() kicks off may or may not finish before the main() method reaches the call to dr.getDigest(). If they haven't finished, dr.getDigest() returns null, and the first attempt to access digest throws a NullPointerException.
One possibility is to move the call to dr.getDigest() later in the main() method, like this:
public static void main(String[] args) {

  ReturnDigest[] digests = new ReturnDigest[args.length];

  for (int i = 0; i < args.length; i++) {
    // Calculate the digest
    digests[i] = new ReturnDigest(args[i]);
    digests[i].start();
  }

  for (int i = 0; i < args.length; i++) {
    // Now print the result
    StringBuffer result = new StringBuffer(args[i]);
    result.append(": ");
    byte[] digest = digests[i].getDigest();
    result.append(DatatypeConverter.printHexBinary(digest));
    System.out.println(result);
  }
}
If you’re lucky, this will work and you’ll get the expected output, like this:
D:\JAVA\JNP4\examples\03>java ReturnDigest2 *.java
AccumulatingError.java: 7B261F7D88467A1D30D66DD29EEEDE495EA16FCD3ADDB8B613BC2C5DC
BenchmarkScalb.java: AECE2AD497F11F672184E45F2885063C99B2FDD41A3FC7C7B5D4ECBFD2B0
CanonicalPathComparator.java: FE0AACF55D331BBF555528A876C919EAD826BC79B659C489D62
Catenary.java: B511A9A507B43C9CDAF626D5B3A8CCCD80149982196E66ED1BFFD5E55B11E226
...
However, let me emphasize that point about being lucky. You may not get this output. In fact, you may still get a NullPointerException. Whether this code works is completely dependent on whether every one of the ReturnDigest threads finishes before its getDigest() method is called. If the first for loop is too fast and the second for loop is entered before the threads spawned by the first loop start finishing, you're back where you started. Worse yet, the program may appear to hang with no output at all, not even a stack trace.
Whether you get the correct results, an exception, or a hung program depends on many factors, including how many threads the program spawns, the speed of the CPU and disk on the system where this is run, how many CPUs the system uses, and the algorithm the Java virtual machine uses to allot time to different threads. This is called a race condition. Getting the correct result depends on the relative speeds of different threads, and you can't control those! You need a better way to guarantee that the getDigest() method isn't called until the digest is ready.
The solution most novices adopt is to make the getter method return a flag value (or perhaps throw an exception) until the result field is set. Then the main thread periodically polls the getter method to see whether it’s returning something other than the flag value. In this example, that would mean repeatedly testing whether the digest is null and using it only if it isn’t. For example:
public static void main(String[] args) {

  ReturnDigest[] digests = new ReturnDigest[args.length];

  for (int i = 0; i < args.length; i++) {
    // Calculate the digest
    digests[i] = new ReturnDigest(args[i]);
    digests[i].start();
  }

  for (int i = 0; i < args.length; i++) {
    while (true) {
      // Now print the result
      byte[] digest = digests[i].getDigest();
      if (digest != null) {
        StringBuilder result = new StringBuilder(args[i]);
        result.append(": ");
        result.append(DatatypeConverter.printHexBinary(digest));
        System.out.println(result);
        break;
      }
    }
  }
}
This solution may work. If it works at all, it gives the correct answers in the correct order irrespective of how fast the individual threads run relative to each other. However, it’s doing a lot more work than it needs to.
Worse yet, this solution is not guaranteed to work. On some virtual machines, the main thread takes all the time available and leaves no time for the actual worker threads. The main thread is so busy checking for job completion that there’s no time left to actually complete the job! Clearly this isn’t a good approach.
In fact, there’s a much simpler, more efficient way to handle the problem. The infinite loop that repeatedly polls each ReturnDigest
object to see whether it’s finished can be eliminated. The trick is that rather than having the main program repeatedly ask each ReturnDigest
thread whether it’s finished (like a five-year-old repeatedly asking, “Are we there yet?” on a long car trip, and almost as annoying), you let the thread tell the main program when it’s finished. It does this by invoking a method in the main class that started it. This is called a callback because the thread calls its creator back when it’s done. This way, the main program can go to sleep while waiting for the threads to finish and not steal time from the running threads.
When the thread’s run()
method is nearly done, the last thing it does is invoke a known method in the main program with the result. Rather than the main program asking each thread for the answer, each thread tells the main program the answer. For instance, Example 3-5 shows a CallbackDigest
class that is much the same as before. However, at the end of the run()
method, it passes off the digest
to the static CallbackDigestUserInterface.receiveDigest()
method in the class that originally started the thread.
import java.io.*;
import java.security.*;

public class CallbackDigest implements Runnable {

  private String filename;

  public CallbackDigest(String filename) {
    this.filename = filename;
  }

  @Override
  public void run() {
    try {
      FileInputStream in = new FileInputStream(filename);
      MessageDigest sha = MessageDigest.getInstance("SHA-256");
      DigestInputStream din = new DigestInputStream(in, sha);
      while (din.read() != -1) ; // read entire file
      din.close();
      byte[] digest = sha.digest();
      CallbackDigestUserInterface.receiveDigest(digest, filename);
    } catch (IOException ex) {
      System.err.println(ex);
    } catch (NoSuchAlgorithmException ex) {
      System.err.println(ex);
    }
  }
}
The CallbackDigestUserInterface class shown in Example 3-6 provides the main() method. However, unlike the main() methods in the other variations of this program, this one only starts the threads for the files named on the command line. It does not attempt to actually read, print out, or in any other way work with the results of the calculation. Those functions are handled by a separate method, receiveDigest(), which is not invoked by the main() method or by any method that can be reached by following the flow of control from the main() method. Instead, it is invoked by each thread separately. That is, receiveDigest() runs inside the digesting threads rather than inside the main thread of execution.
import javax.xml.bind.*; // for DatatypeConverter; requires Java 6 or JAXB 1.0

public class CallbackDigestUserInterface {

  public static void receiveDigest(byte[] digest, String name) {
    StringBuilder result = new StringBuilder(name);
    result.append(": ");
    result.append(DatatypeConverter.printHexBinary(digest));
    System.out.println(result);
  }

  public static void main(String[] args) {
    for (String filename : args) {
      // Calculate the digest
      CallbackDigest cb = new CallbackDigest(filename);
      Thread t = new Thread(cb);
      t.start();
    }
  }
}
Examples 3-5 and 3-6 use static methods for the callback so that CallbackDigest only needs to know the name of the method in CallbackDigestUserInterface to call. However, it's not much harder (and it's considerably more common) to call back to an instance method. In this case, the class making the callback must have a reference to the object it's calling back. Generally, this reference is provided as an argument to the thread's constructor. When the run() method is nearly done, the last thing it does is invoke the instance method on the callback object to pass along the result. For instance, Example 3-7 shows an InstanceCallbackDigest class that is much the same as before. However, it now has one additional field, an InstanceCallbackDigestUserInterface object called callback. At the end of the run() method, the digest is passed to callback's receiveDigest() method. The InstanceCallbackDigestUserInterface object itself is set in the constructor.
import java.io.*;
import java.security.*;

public class InstanceCallbackDigest implements Runnable {

  private String filename;
  private InstanceCallbackDigestUserInterface callback;

  public InstanceCallbackDigest(String filename,
      InstanceCallbackDigestUserInterface callback) {
    this.filename = filename;
    this.callback = callback;
  }

  @Override
  public void run() {
    try {
      FileInputStream in = new FileInputStream(filename);
      MessageDigest sha = MessageDigest.getInstance("SHA-256");
      DigestInputStream din = new DigestInputStream(in, sha);
      while (din.read() != -1) ; // read entire file
      din.close();
      byte[] digest = sha.digest();
      callback.receiveDigest(digest);
    } catch (IOException | NoSuchAlgorithmException ex) {
      System.err.println(ex);
    }
  }
}
The InstanceCallbackDigestUserInterface class shown in Example 3-8 holds the main() method as well as the receiveDigest() method used to handle an incoming digest. Example 3-8 just prints out the digest, but a more expansive class could do other things as well, such as storing the digest in a field, using it to start another thread, or performing further calculations on it.
import javax.xml.bind.*; // for DatatypeConverter; requires Java 6 or JAXB 1.0

public class InstanceCallbackDigestUserInterface {

  private String filename;
  private byte[] digest;

  public InstanceCallbackDigestUserInterface(String filename) {
    this.filename = filename;
  }

  public void calculateDigest() {
    InstanceCallbackDigest cb = new InstanceCallbackDigest(filename, this);
    Thread t = new Thread(cb);
    t.start();
  }

  void receiveDigest(byte[] digest) {
    this.digest = digest;
    System.out.println(this);
  }

  @Override
  public String toString() {
    String result = filename + ": ";
    if (digest != null) {
      result += DatatypeConverter.printHexBinary(digest);
    } else {
      result += "digest not available";
    }
    return result;
  }

  public static void main(String[] args) {
    for (String filename : args) {
      // Calculate the digest
      InstanceCallbackDigestUserInterface d
          = new InstanceCallbackDigestUserInterface(filename);
      d.calculateDigest();
    }
  }
}
Using instance methods instead of static methods for callbacks is a little more complicated but has a number of advantages. First, each instance of the main class (InstanceCallbackDigestUserInterface, in this example) maps to exactly one file and can keep track of information about that file in a natural way without needing extra data structures. Furthermore, the instance can easily recalculate the digest for a particular file, if necessary. In practice, this scheme proves a lot more flexible. However, there is one caveat. Notice the addition of the calculateDigest() method to start the thread. You might logically think that this belongs in a constructor. However, starting threads in a constructor is dangerous, especially threads that will call back to the originating object. There's a race condition here that may allow the new thread to call back before the constructor is finished and the object is fully initialized. It's unlikely in this case, because starting the new thread is the last thing this constructor does. Nonetheless, it's at least theoretically possible. Therefore, it's good form to avoid launching threads from constructors.
The first advantage of the callback scheme over the polling scheme is that it doesn't waste so many CPU cycles. However, a much more important advantage is that callbacks are more flexible and can handle more complicated situations involving many more threads, objects, and classes. For instance, if more than one object is interested in the result of the thread's calculation, the thread can keep a list of objects to call back. Particular objects can register their interest by invoking a method in the Thread or Runnable class to add themselves to the list. If instances of more than one class are interested in the result, a new interface can be defined that all these classes implement. The interface would declare the callback methods.
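For example, such an interface might look something like the following sketch. The names DigestListener, BroadcastingDigest, and addDigestListener() are hypothetical; they are not part of the examples in this chapter, and the actual digest calculation is elided:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical callback interface that any interested class could implement
interface DigestListener {
  void digestCalculated(String filename, byte[] digest);
}

// Sketch of a Runnable that calls back every registered listener when it finishes
class BroadcastingDigest implements Runnable {

  private final String filename;
  private final List<DigestListener> listeners
      = new CopyOnWriteArrayList<DigestListener>();

  BroadcastingDigest(String filename) {
    this.filename = filename;
  }

  public void addDigestListener(DigestListener listener) {
    listeners.add(listener);
  }

  @Override
  public void run() {
    byte[] digest = calculateDigest();
    for (DigestListener listener : listeners) {
      listener.digestCalculated(filename, digest); // runs in this worker thread
    }
  }

  private byte[] calculateDigest() {
    // placeholder for the SHA-256 calculation shown in Example 3-5
    return new byte[0];
  }
}

A CopyOnWriteArrayList is used here so that one thread can register listeners while another thread iterates over the list without extra synchronization.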
If you’re experiencing déjà vu right now, that’s probably because you have seen this scheme before. This is exactly how events are handled in Swing, the AWT, and JavaBeans. The AWT runs in a separate thread from the rest of the program. Components and beans inform you of events by calling back to methods declared in particular interfaces, such as ActionListener
and PropertyChangeListener
. Your listener objects register their interests in events fired by particular components using methods in the Component
class, such as addActionListener()
and addPropertyChangeListener()
. Inside the component, the registered listeners are stored in a linked list built out of java.awt.AWTEventMulticaster
objects. More generally this is known as the Observer design pattern.
Java 5 introduced a new approach to multithreaded programming that makes it somewhat easier to handle callbacks by hiding the details. Instead of directly creating a thread, you create an ExecutorService that will create threads for you as needed. You submit Callable jobs to the ExecutorService, and for each one you get back a Future. At a later point, you can ask the Future for the result of the job. If the result is ready, you get it immediately. If it's not ready, the polling thread blocks until it is ready. The advantage is that you can spawn off many different threads, then get the answers you need in the order you need them.
For example, suppose you need to find the maximum value in a large array of numbers. Implemented naively, this takes O(n) time, where n is the number of elements in the array. Splitting the work across multiple threads, each running on a separate core, doesn't change that asymptotic bound, but it can cut the wall-clock time roughly in proportion to the number of cores. For purposes of illustration, let's assume two threads are desired.
The Callable interface defines a single call() method that can generically return any type. Example 3-9 is a Callable that finds the maximum value in a subsection of an array in the most obvious way possible.
import java.util.concurrent.Callable;

class FindMaxTask implements Callable<Integer> {

  private int[] data;
  private int start;
  private int end;

  FindMaxTask(int[] data, int start, int end) {
    this.data = data;
    this.start = start;
    this.end = end;
  }

  public Integer call() {
    int max = Integer.MIN_VALUE;
    for (int i = start; i < end; i++) {
      if (data[i] > max) max = data[i];
    }
    return max;
  }
}
Although you could invoke the call() method directly, that is not its purpose. Instead, you submit Callable objects to an Executor that spins up a thread for each one. (There are other strategies an Executor could use; for instance, it could use a single thread to invoke the callables in order, but one thread per callable is a good strategy for this problem.) Example 3-10 demonstrates.
import java.util.concurrent.*;

public class MultithreadedMaxFinder {

  public static int max(int[] data)
      throws InterruptedException, ExecutionException {

    if (data.length == 1) {
      return data[0];
    } else if (data.length == 0) {
      throw new IllegalArgumentException();
    }

    // split the job into 2 pieces
    FindMaxTask task1 = new FindMaxTask(data, 0, data.length / 2);
    FindMaxTask task2 = new FindMaxTask(data, data.length / 2, data.length);

    // spawn 2 threads
    ExecutorService service = Executors.newFixedThreadPool(2);

    Future<Integer> future1 = service.submit(task1);
    Future<Integer> future2 = service.submit(task2);

    return Math.max(future1.get(), future2.get());
  }
}
Each subarray is searched at the same time, so on suitable hardware and a large input this program can run almost twice as fast. Nonetheless, the code is almost as simple and straightforward as finding the maximum in the first half of the array and then finding the maximum in the second half of the array, without ever worrying about threads or asynchronicity. However, there's one key difference. In the last statement of Example 3-10, when future1.get() is called, the method blocks and waits for the first FindMaxTask to finish. Only when this has happened does it call future2.get(). It's possible that the second thread has already finished, in which case the value is immediately returned; but if not, execution waits for that thread to finish too. Once both threads have finished, their results are compared and the maximum is returned.
Futures are a very convenient means of launching multiple threads to work on different pieces of a problem, and then waiting for them all to finish before proceeding. Executors and executor services let you assign jobs to different threads with different strategies. This example used just two threads once each, but it’s possible to use many more threads, and to reuse threads for multiple tasks. Executors hide a lot of the nitty-gritty details of asynchronicity as long as you can divide your job up into reasonably independent parts.
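For instance, a caller could use Example 3-10 like this (the array here is just made-up test data):

public class MaxFinderDemo {

  public static void main(String[] args) throws Exception {
    int[] data = {3, 141, 59, 26, 53, 58, 979, 32, 384, 626};
    int max = MultithreadedMaxFinder.max(data); // blocks until both halves are searched
    System.out.println("Maximum: " + max);      // prints 979
  }
}

One thing to watch with a standalone test like this: because max() never shuts down its ExecutorService, the pool's non-daemon worker threads keep the virtual machine alive after main() returns, so you'd need to add a shutdown() call somewhere (or kill the process) to make the demo exit cleanly.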
My shelves are overflowing with books, including many duplicate books, out-of-date books, and books I haven’t looked at for 10 years and probably never will again. Over the years, these books have cost me tens of thousands of dollars, maybe more, to acquire. By contrast, two blocks down the street from my apartment, you’ll find the Central Brooklyn Public Library. Its shelves are also overflowing with books, and over its 100 years, it’s spent millions on its collection. But the difference is that its books are shared among all the residents of Brooklyn, and consequently the books have very high turnover. Most books in the collection are used several times a year. Although the public library spends a lot more money buying and storing books than I do, the cost per page read is much lower at the library than for my personal shelves. That’s the advantage of a shared resource.
Of course, there are disadvantages to shared resources, too. If I need a book from the library, I have to walk over there. I have to find the book I’m looking for on the shelves. I have to stand in line to check the book out, or else I have to use it right there in the library rather than bringing it home with me. Sometimes somebody else has checked the book out, and I have to fill out a form requesting that the book be saved for me when it’s returned. And I can’t write notes in the margins, highlight paragraphs, or tear pages out to paste on my bulletin board. (Well, I can, but if I do, it significantly reduces the usefulness of the book for future borrowers, and if the library catches me, I may lose my borrowing privileges.) There’s a significant time and convenience penalty associated with borrowing a book from the library rather than purchasing my own copy, but it does save me money and storage space.
A thread is like a borrower at a library; the thread borrows from a central pool of resources. Threads make programs more efficient by sharing memory, file handles, sockets, and other resources. As long as two threads don't want to use the same resource at the same time, a multithreaded program is much more efficient than the multiprocess alternative, in which each process has to keep its own copy of every resource. The downside of a multithreaded program is that if two threads want the same resource at the same time, one of them will have to wait for the other to finish. If one of them doesn't wait, the resource may get corrupted. Let's look at a specific example. Consider the run() method of Examples 3-1 and 3-2. As previously mentioned, the method builds the result as a String, and then prints the String on the console using one call to System.out.println(). The output looks like this:
Triangle.java: B4C7AF1BAE952655A96517476BF9DAC97C4AF02411E40DD386FECB58D94CC769
InterfaceLister.java: 267D0EFE73896CD550DC202935D20E87CA71536CB176AF78F915935A6
Squares.java: DA2E27EA139785535122A2420D3DB472A807841D05F6C268A43695B9FDFE1B11
UlpPrinter.java: C8009AB1578BF7E730BD2C3EADA54B772576E265011DF22D171D60A1881AFF51
Four threads run in parallel to produce this output. Each writes one line to the console. The order in which the lines are written is unpredictable because thread scheduling is unpredictable, but each line is written as a unified whole. Suppose, however, you used this variation of the run() method, which, rather than storing intermediate parts of the result in the StringBuilder variable result, simply prints them on the console as they become available:
@Override
public void run() {
  try {
    FileInputStream in = new FileInputStream(filename);
    MessageDigest sha = MessageDigest.getInstance("SHA-256");
    DigestInputStream din = new DigestInputStream(in, sha);
    while (din.read() != -1) ; // read entire file
    din.close();
    byte[] digest = sha.digest();
    System.out.print(filename + ": ");
    System.out.print(DatatypeConverter.printHexBinary(digest));
    System.out.println();
  } catch (IOException ex) {
    System.err.println(ex);
  } catch (NoSuchAlgorithmException ex) {
    System.err.println(ex);
  }
}
When you run the program on the same input, the output looks something like this:
Triangle.java: B4C7AF1BAE952655A96517476BF9DAC97C4AF02411E40DD386FECB58D94CC769 InterfaceLister.java: Squares.java: UlpPrinter.java: C8009AB1578BF7E730BD2C3EADA54B772576E265011DF22D171D60A1881AFF51 267D0EFE73896CD550DC202935D20E87CA71536CB176AF78F915935A6E81B034 DA2E27EA139785535122A2420D3DB472A807841D05F6C268A43695B9FDFE1B11
The digests of the different files are all mixed up! There’s no telling which number belongs to which digest. Clearly, this is a problem.
The reason this mix-up occurs is that System.out is shared between the four different threads. When one thread starts writing to the console through several System.out.print() statements, it may not finish all its writes before another thread breaks in and starts writing its output. The exact order in which one thread preempts the other threads is indeterminate. You'll probably see slightly different output every time you run this program.
You need a way to assign exclusive access to a shared resource to one thread for a specific series of statements. In this example, that shared resource is System.out, and the statements that need exclusive access are:
System.out.print(filename + ": ");
System.out.print(DatatypeConverter.printHexBinary(digest));
System.out.println();
To indicate that these three statements should be executed together as a unit, wrap them in a synchronized block that synchronizes on the System.out object, like this:
synchronized (System.out) {
  System.out.print(filename + ": ");
  System.out.print(DatatypeConverter.printHexBinary(digest));
  System.out.println();
}
Once one thread starts printing out the values, all other threads will have to stop and wait for it to finish before they can print out their values. Synchronization forces all code that synchronizes on the same object to run in series, never in parallel. For instance, if some other code in a different class and different thread also happened to synchronize on System.out, it too would not be able to run in parallel with this block. However, other code that synchronizes on a different object or doesn't synchronize at all can still run in parallel with this code. It can do so even if it also uses System.out. Java provides no means to stop all other threads from using a shared resource. It can only prevent other threads that synchronize on the same object from using the shared resource.
In fact, the PrintStream class internally synchronizes most methods on the PrintStream object (System.out, in this example). In other words, every other thread that calls System.out.println() will be synchronized on System.out and will have to wait for this code to finish. PrintStream is unique in this respect. Most other OutputStream subclasses do not synchronize themselves.
Synchronization must be considered any time multiple threads share resources. These threads may be instances of the same Thread subclass or use the same Runnable class, or they may be instances of completely different classes. The key is the resources they share, not what classes they are. Synchronization becomes an issue only when two threads both possess references to the same object. In the previous example, the problem was that several threads had access to the same PrintStream object, System.out. In this case, it was a static class variable that led to the conflict. However, instance variables can also have problems.
For example, suppose your web server keeps a logfile. The logfile may be represented by a class like the one shown in Example 3-11. This class itself doesn't use multiple threads. However, if the web server uses multiple threads to handle incoming connections, then each of those threads will need access to the same logfile and consequently to the same LogFile object.
import java.io.*;
import java.util.*;

public class LogFile {

  private Writer out;

  public LogFile(File f) throws IOException {
    FileWriter fw = new FileWriter(f);
    this.out = new BufferedWriter(fw);
  }

  public void writeEntry(String message) throws IOException {
    Date d = new Date();
    out.write(d.toString());
    out.write('\t');
    out.write(message);
    out.write("\r\n");
  }

  public void close() throws IOException {
    out.flush();
    out.close();
  }
}
In this class, the writeEntry() method finds the current date and time, then writes into the underlying file using four separate invocations of out.write(). A problem occurs if two or more threads each have a reference to the same LogFile object and one of those threads interrupts another in the process of writing the data. One thread may write the date and a tab, then the next thread might write three complete entries; then, the first thread could write the message, a carriage return, and a linefeed. The solution, once again, is synchronization. However, here there are two good choices for which object to synchronize on. The first choice is to synchronize on the Writer object out. For example:
public void writeEntry(String message) throws IOException {
  synchronized (out) {
    Date d = new Date();
    out.write(d.toString());
    out.write('\t');
    out.write(message);
    out.write("\r\n");
  }
}
This works because all the threads that use this LogFile object also use the same out object that's part of that LogFile. It doesn't matter that out is private. Although it is used by the other threads and objects, it's referenced only within the LogFile class. Furthermore, although you're synchronizing here on the out object, it's the writeEntry() method that needs to be protected from interruption. The Writer classes all have their own internal synchronization, which protects one thread from interfering with a write() method in another thread. (This is not true of input and output streams, with the exception of PrintStream. It is possible for a write to an output stream to be interrupted by another thread.) Each Writer class has a lock field that specifies the object on which writes to that writer synchronize.
The second possibility is to synchronize on the LogFile object itself. This is simple enough to arrange with the this keyword. For example:
public void writeEntry(String message) throws IOException {
  synchronized (this) {
    Date d = new Date();
    out.write(d.toString());
    out.write('\t');
    out.write(message);
    out.write("\r\n");
  }
}
Because synchronizing the entire method body on the object itself is such a common thing to do, Java provides a shortcut. You can synchronize an entire method on the current object (the this reference) by adding the synchronized modifier to the method declaration. For example:
public synchronized void writeEntry(String message) throws IOException {
  Date d = new Date();
  out.write(d.toString());
  out.write('\t');
  out.write(message);
  out.write("\r\n");
}
Simply adding the synchronized modifier to all methods is not a catchall solution for synchronization problems. For one thing, it exacts a severe performance penalty in many VMs (though more recent VMs have improved greatly in this respect), potentially slowing down your code by a factor of three or more. Second, it dramatically increases the chances of deadlock. Third, and most importantly, it's not always the object itself you need to protect from simultaneous modification or access, and synchronizing on the instance of the method's class may not protect the object you really need to protect. For instance, in this example, what you're really trying to prevent is two threads simultaneously writing onto out. If some other class had a reference to out completely unrelated to the LogFile, this attempt would fail. However, in this example, synchronizing on the LogFile object is sufficient because out is a private instance variable. Because you never expose a reference to this object, there's no way for any other object to invoke its methods except through the LogFile class. Therefore, synchronizing on the LogFile object has the same effect as synchronizing on out.
Synchronization is not always the best solution to the problem of inconsistent behavior caused by thread scheduling. There are a number of techniques that avoid the need for synchronization entirely. The first is to use local variables instead of fields wherever possible. Local variables do not have synchronization problems. Every time a method is entered, the virtual machine creates a completely new set of local variables for the method. These variables are invisible from outside the method and are destroyed when the method exits. As a result, it’s impossible for one local variable to be shared by two different threads. Every thread has its own separate set of local variables.
Method arguments of primitive types are also safe from modification in separate threads because Java passes arguments by value rather than by reference. A corollary of this is that pure functions such as Math.sqrt() that take zero or more primitive data type arguments, perform some calculation, and return a value without ever interacting with the fields of any class are inherently thread safe. These methods often either are or should be declared static.
Method arguments of object types are a little trickier because the actual argument passed by value is a reference to the object. Suppose, for example, you pass a reference to an array into a sort() method. While the method is sorting the array, there's nothing to stop some other thread that also has a reference to the array from changing the values in the array.
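One common defense, sketched below, is to have the method do its work on a private copy of the array rather than on the caller's array (the class and method names here are hypothetical):

import java.util.Arrays;

public class DefensiveSort {

  // Sorts a private copy, so concurrent changes to the caller's array
  // can't corrupt the sort in progress.
  public static int[] sortCopy(int[] data) {
    int[] copy = Arrays.copyOf(data, data.length); // defensive copy
    Arrays.sort(copy);                             // work only on the copy
    return copy;
  }
}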
String arguments are safe because they're immutable (i.e., once a String object has been created, it cannot be changed by any thread). An immutable object never changes state. The values of its fields are set once, when the constructor runs, and never altered thereafter. StringBuilder arguments are not safe because they're not immutable; they can be changed after they're created.
A constructor normally does not have to worry about issues of thread safety. Until the constructor returns, no thread has a reference to the object, so it’s impossible for two threads to have a reference to the object. (The most likely issue is if a constructor depends on another object in another thread that may change while the constructor runs, but that’s uncommon. There’s also a potential problem if a constructor somehow passes a reference to the object it’s creating into a different thread, but this is also uncommon.)
You can take advantage of immutability in your own classes. It's usually the easiest way to make a class thread safe, often much easier than determining exactly which methods or code blocks to synchronize. To make an object immutable, simply declare all its fields private and final and don't write any methods that can change them. A lot of classes in the core Java library are immutable (e.g., java.lang.String, java.lang.Integer, java.lang.Double, and many more). This makes these classes less useful for some purposes, but it does make them a lot more thread safe.
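For example, an immutable value class might look something like this sketch (the class is invented for illustration):

public final class FileDigest {

  private final String filename;
  private final byte[] digest;

  public FileDigest(String filename, byte[] digest) {
    this.filename = filename;
    this.digest = digest.clone(); // copy in, so the caller can't change our state later
  }

  public String getFilename() {
    return filename;
  }

  public byte[] getDigest() {
    return digest.clone(); // copy out, so callers can't change our internal array
  }
}

Because arrays are mutable even when the field holding them is final, the constructor and getter copy the byte array instead of sharing it; without those copies the class would only appear to be immutable.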
A third technique is to use a thread-unsafe class but only as a private field of a class that is thread safe. As long as the containing class accesses the unsafe class only in a thread-safe fashion and as long as it never lets a reference to the private field leak out into another object, the class is safe. An example of this technique might be a web server that uses an unsynchronized LogFile class but gives each separate thread its own separate log so no resources are shared between the individual threads.
In some cases, you can use a designedly thread-safe but mutable class from the java.util.concurrent.atomic package. In particular, rather than using an int, you can use an AtomicInteger. Rather than using a long, you can use an AtomicLong. Rather than using a boolean, you can use an AtomicBoolean. Rather than using an int[], you can use an AtomicIntegerArray. Rather than a reference variable, you can store an object inside an AtomicReference, though note well that this doesn't make the object itself thread safe, just the getting and setting of the reference variable. These classes may be faster than synchronized access to their respective primitive types if they can take advantage of fast machine-level thread-safe instructions on modern CPUs.
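For instance, a shared counter can be updated safely without any synchronized blocks. This sketch assumes a hypothetical hit counter shared by several threads:

import java.util.concurrent.atomic.AtomicInteger;

public class HitCounter {

  private final AtomicInteger hits = new AtomicInteger(0);

  public int increment() {
    return hits.incrementAndGet(); // one atomic read-modify-write; no lock needed
  }

  public int current() {
    return hits.get();
  }
}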
For collections such as maps and lists, you can wrap them in a thread-safe version using the methods of java.util.Collections. For instance, if you have a set foo, you can get a thread-safe view of this set with Collections.synchronizedSet(foo). If you have a list foo, you'd use Collections.synchronizedList(foo) instead. For a map, call Collections.synchronizedMap(foo), and so forth. In order for this to work, you must use only the view returned by Collections.synchronizedSet/List/Map. If at any point you access the original, underlying data structure, neither the original nor the synchronized view will be thread safe.
In all cases, realize that it's just a single method invocation that is atomic. If you need to perform two operations on the atomic value in succession without possible interruption, you'll still need to synchronize. Thus, for instance, even if a list is synchronized via Collections.synchronizedList(), you'll still need to synchronize on it if you want to iterate through the list, because that involves many consecutive atomic operations. Although each method call is safely atomic, the sequence of operations is not without explicit synchronization.
Synchronization can lead to another possible problem: deadlock. Deadlock occurs when two threads need exclusive access to the same set of resources and each thread holds the lock on a different subset of those resources. If neither thread is willing to give up the resources it has, both threads come to an indefinite halt. This isn’t quite a hang in the classical sense because the program is still active and behaving normally from the perspective of the OS, but to a user the difference is insignificant.
To return to the library example, deadlock is what occurs when Jack and Jill are each writing a term paper on Thomas Jefferson and they both need the two books Thomas Jefferson and Sally Hemings: An American Controversy and Sally Hemings and Thomas Jefferson: History, Memory, and Civic Culture. If Jill has checked out the first book and Jack has checked out the second, and neither is willing to give up the book they have, neither can finish the paper. Eventually, the deadline expires and they both get an F. That’s the problem of deadlock.
Worse yet, deadlock can be a sporadic, hard-to-detect bug. Deadlock usually depends on unpredictable issues of timing. Most of the time, either Jack or Jill will get to the library first and check out both books. In this case, the one who gets the books first writes a paper and returns the books; then the other one gets the books and writes his or her paper. Only rarely will they arrive at the same time and each get one of the two books. Ninety-nine times out of 100 or 999 times out of 1,000, a program will run to completion perfectly normally. Only rarely will it hang for no apparent reason. Of course, if a multithreaded server is handling hundreds of requests per second, even a problem that occurs only once every million requests can hang the server in short order.
The most important technique for preventing deadlock is to avoid unnecessary synchronization. If there’s an alternative approach for ensuring thread safety, such as making objects immutable or keeping a local copy of an object, use it. Synchronization should be a last resort for ensuring thread safety. If you do need to synchronize, keep the synchronized blocks small and try not to synchronize on more than one object at a time. This can be tricky, though, because many of the methods from the Java class library that your code may invoke synchronize on objects you aren’t aware of. Consequently, you may in fact be synchronizing on many more objects than you expect.
The best you can do in the general case is carefully consider whether deadlock is likely to be a problem and design your code around it. If multiple objects need the same set of shared resources to operate, make sure they request them in the same order. For instance, if class A and class B need exclusive access to object X and object Y, make sure that both classes request X first and Y second. If neither requests Y unless it already possesses X, deadlock is not a problem.
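A sketch of that rule: if every thread that needs both X and Y always locks X before Y, the circular wait that defines deadlock cannot arise. The two lock objects here are just placeholders:

public class OrderedLocks {

  private final Object x = new Object();
  private final Object y = new Object();

  // Every method that needs both resources acquires the locks in the same order: x, then y.
  public void firstTask() {
    synchronized (x) {
      synchronized (y) {
        // use both shared resources
      }
    }
  }

  public void secondTask() {
    synchronized (x) { // never y first, even if this task mostly cares about y
      synchronized (y) {
        // use both shared resources
      }
    }
  }
}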
When multiple threads are running at the same time (more properly, when multiple threads are available to be run at the same time), you have to consider issues of thread scheduling. You need to make sure that all important threads get at least some time to run and that the more important threads get more time. Furthermore, you want to ensure that the threads execute in a reasonable order. If your web server has 10 queued requests, each of which requires 5 seconds to process, you don’t want to process them in series. If you do, the first request will finish in 5 seconds but the second will take 10, the third 15, and so on until the last request, which will have to wait almost a minute to be serviced. By that point, the user has likely gone to another page. By running threads in parallel, you might be able to process all 10 requests in only 10 seconds total. The reason this strategy works is that there’s a lot of dead time in servicing a typical web request, time in which the thread is simply waiting for the network to catch up with the CPU—time the VM’s thread scheduler can put to good use by other threads. However, CPU-bound threads (as opposed to the I/O-bound threads more common in network programs) may never reach a point where they have to wait for more input. It is possible for such a thread to starve all other threads by taking all the available CPU resources. With a little thought, you can avoid this problem. In fact, starvation is a considerably easier problem to avoid than either mis-synchronization or deadlock.
Not all threads are created equal. Each thread has a priority, specified as an integer from 1 to 10. When multiple threads are ready to run, the VM will generally run only the highest-priority thread, although that’s not a hard-and-fast rule. In Java, 10 is the highest priority and 1 is the lowest. The default priority is 5, and this is the priority that your threads will have unless you deliberately set them otherwise.
This is the exact opposite of the normal Unix way of prioritizing processes, in which the higher the priority number of a process, the less CPU time the process gets.
These three priorities (1, 5, and 10) are often specified as the three named constants Thread.MIN_PRIORITY, Thread.NORM_PRIORITY, and Thread.MAX_PRIORITY:
public static final int MIN_PRIORITY = 1;
public static final int NORM_PRIORITY = 5;
public static final int MAX_PRIORITY = 10;
Not all operating systems support 10 different thread priorities. For instance, Windows has only 7. On Windows, priorities 1 and 2, 3 and 4, 6 and 7, and 8 and 9 are treated equally (e.g., a priority 9 thread will not necessarily preempt a priority 8 thread).
Sometimes you want to give one thread more time than another. Threads that interact with the user should get very high priorities so that perceived responsiveness will be very quick. On the other hand, threads that calculate in the background should get low priorities. Tasks that will complete quickly should have high priorities. Tasks that take a long time should have low priorities so that they won’t get in the way of other tasks. You can change the priority of the thread using the setPriority() method:
public final void setPriority(int newPriority)
Attempting to exceed the maximum priority or set a nonpositive priority throws an IllegalArgumentException.
For instance, in Example 3-11, you might want to give higher priorities to the threads that do the calculating than to the main program that spawns them. This is easily achieved by changing the calculateDigest() method to set the priority of each spawned thread to 8:
public void calculateDigest() {
  ListCallbackDigest cb = new ListCallbackDigest(filename);
  cb.addDigestListener(this);
  Thread t = new Thread(cb);
  t.setPriority(8);
  t.start();
}
In general, though, try to avoid using too high a priority for threads, because you run the risk of starving other, lower-priority threads.
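One way to stay on the safe side is to set priorities relative to the current thread rather than as absolute numbers. A small sketch of that idea (someRunnable here is a hypothetical task, not part of Example 3-11):

Thread t = new Thread(someRunnable); // someRunnable is a hypothetical Runnable
// Run one step above the spawning thread, but never beyond the platform maximum.
t.setPriority(Math.min(Thread.currentThread().getPriority() + 1, Thread.MAX_PRIORITY));
t.start();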
Every virtual machine has a thread scheduler that determines which thread to run at any given time. There are two main kinds of thread scheduling: preemptive and cooperative. A preemptive thread scheduler determines when a thread has had its fair share of CPU time, pauses that thread, and then hands off control of the CPU to a different thread. A cooperative thread scheduler waits for the running thread to pause itself before handing off control of the CPU to a different thread. A virtual machine that uses cooperative thread scheduling is much more susceptible to thread starvation than a virtual machine that uses preemptive thread scheduling, because one high-priority, uncooperative thread can hog an entire CPU.
All Java virtual machines are guaranteed to use preemptive thread scheduling between priorities. That is, if a lower-priority thread is running when a higher-priority thread becomes ready to run, the virtual machine will sooner or later (and probably sooner) pause the lower-priority thread to allow the higher-priority thread to run. The higher-priority thread preempts the lower-priority thread.
The situation when multiple threads of the same priority are ready to run is trickier. A preemptive thread scheduler will occasionally pause one of the threads to allow the next one in line to get some CPU time. However, a cooperative thread scheduler will not. It will wait for the running thread to explicitly give up control or come to a stopping point. If the running thread never gives up control and never comes to a stopping point and if no higher-priority threads preempt the running thread, all other threads will starve. This is a bad thing. It’s important to make sure all your threads periodically pause themselves so that other threads have an opportunity to run.
A starvation problem can be hard to spot if you’re developing on a VM that uses preemptive thread scheduling. Just because the problem doesn’t arise on your machine doesn’t mean it won’t arise on your customers’ machines if their VMs use cooperative thread scheduling. Most current VMs use preemptive thread scheduling, but some older VMs are cooperatively scheduled, and you may also encounter cooperative scheduling in special-purpose Java VMs, such as those for embedded environments.
There are 10 ways a thread can pause in favor of other threads or indicate that it is ready to pause: it can block on I/O; it can block on a synchronized object; it can yield; it can go to sleep; it can join another thread; it can wait on an object; it can finish; it can be preempted by a higher-priority thread; it can be suspended; or it can stop.
Inspect every run() method you write to make sure that one of these conditions will occur with reasonable frequency. The last two possibilities are deprecated because they have the potential to leave objects in inconsistent states, so let’s look at the other eight ways a thread can be a cooperative citizen of the virtual machine.
Blocking occurs any time a thread has to stop and wait for a resource it doesn’t have. The most common way a thread in a network program will voluntarily give up control of the CPU is by blocking on I/O. Because CPUs are much faster than networks and disks, a network program will often block while waiting for data to arrive from the network or be sent out to the network. Even though it may block for only a few milliseconds, this is enough time for other threads to do significant work.
Threads can also block when they enter a synchronized method or block. If the thread does not already possess the lock for the object being synchronized on and some other thread does possess that lock, the thread will pause until the lock is released. If the lock is never released, the thread is permanently stopped.
Neither blocking on I/O nor blocking on a lock will release any locks the thread already possesses. For I/O blocks, this is not such a big deal, because eventually the I/O will either unblock and the thread will continue or an IOException will be thrown and the thread will then exit the synchronized block or method and release its locks. However, a thread blocking on a lock that it doesn’t possess will never give up its own locks. If one thread is waiting for a lock that a second thread owns and the second thread is waiting for a lock that the first thread owns, deadlock results.
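By contrast with the ordering discipline sketched earlier, the following deliberately broken sketch (the class and lock names are hypothetical) acquires the same two locks in opposite orders, so sooner or later each thread ends up holding one lock while waiting forever for the other:

public class DeadlockDemo {

  private static final Object lockA = new Object();
  private static final Object lockB = new Object();

  public static void main(String[] args) {

    Thread one = new Thread(new Runnable() {
      @Override public void run() {
        synchronized (lockA) {       // A first, then B
          pause();
          synchronized (lockB) { System.out.println("one finished"); }
        }
      }
    });

    Thread two = new Thread(new Runnable() {
      @Override public void run() {
        synchronized (lockB) {       // opposite order: B first, then A
          pause();
          synchronized (lockA) { System.out.println("two finished"); }
        }
      }
    });

    one.start();
    two.start();
    // With the pause in place, neither "finished" message is ever printed.
  }

  private static void pause() {
    try { Thread.sleep(100); } catch (InterruptedException ex) { /* ignore */ }
  }
}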
The second way for a thread to give up control is to explicitly yield. A thread does this by invoking the static Thread.yield() method. This signals to the virtual machine that it can run another thread if one is ready to run. Some virtual machines, particularly on real-time operating systems, may ignore this hint.
Before yielding, a thread should make sure that it or its associated Runnable object is in a consistent state that can be used by other objects. Yielding does not release any locks the thread holds. Therefore, ideally, a thread should not be synchronized on anything when it yields. If the only other threads waiting to run when a thread yields are blocked because they need the synchronized resources that the yielding thread possesses, then the other threads won’t be able to run. Instead, control will return to the only thread that can run, the one that just yielded, which pretty much defeats the purpose of yielding.
Making a thread yield is quite simple in practice. If the thread’s run() method simply consists of an infinite loop, just put a call to Thread.yield() at the end of the loop. For example:
public void run() {
  while (true) {
    // Do the thread's work...
    Thread.yield();
  }
}
This gives other threads of the same priority the opportunity to run.
If each iteration of the loop takes a significant amount of time, you may want to intersperse more calls to Thread.yield() in the rest of the code. This precaution should have minimal effect in the event that yielding isn’t necessary.
Sleeping is a more powerful form of yielding. Whereas yielding indicates only that a thread is willing to pause and let other equal-priority threads have a turn, a thread that goes to sleep will pause whether any other thread is ready to run or not. This gives an opportunity to run not only to other threads of the same priority, but also to threads of lower priorities. However, a thread that goes to sleep does hold onto all the locks it’s grabbed. Consequently, other threads that need the same locks will be blocked even if the CPU is available. Therefore, try to avoid sleeping threads inside a synchronized method or block.
A thread goes to sleep by invoking one of two overloaded static Thread.sleep() methods. The first takes the number of milliseconds to sleep as an argument. The second takes both the number of milliseconds and the number of nanoseconds:
public static void sleep(long milliseconds) throws InterruptedException
public static void sleep(long milliseconds, int nanoseconds) throws InterruptedException
Although most modern computer clocks have at least close-to-millisecond accuracy, nanosecond accuracy is rarer. There’s no guarantee that you can actually time the sleep to within a nanosecond or even within a millisecond on any particular virtual machine. If the local hardware can’t support that level of accuracy, the sleep time is simply rounded to the nearest value that can be measured. For example, this run() method attempts to load a certain page every five minutes and, if it fails, emails the webmaster to alert him of the problem:
public void run() {
  while (true) {
    if (!getPage("http://www.ibiblio.org/")) {
      mailError("webmaster@ibiblio.org");
    }
    try {
      Thread.sleep(300000); // 300,000 milliseconds == 5 minutes
    } catch (InterruptedException ex) {
      break;
    }
  }
}
The thread is not guaranteed to sleep as long as it wants to. On occasion, the thread may not wake up until some time after its requested wake-up call, simply because the VM is busy doing other things. It is also possible that some other thread will do something to wake up the sleeping thread before its time. Generally, this is accomplished by invoking the sleeping thread’s interrupt() method.
public void interrupt()
This is one of those cases where the distinction between the thread and the Thread object is important. Just because the thread is sleeping doesn’t mean that other threads that are awake can’t work with the corresponding Thread object through its methods and fields. In particular, another thread can invoke the sleeping Thread object’s interrupt() method, which the sleeping thread experiences as an InterruptedException. From that point forward, the thread is awake and executes as normal, at least until it goes to sleep again. In the previous example, an InterruptedException is used to terminate a thread that would otherwise run forever. When the InterruptedException is thrown, the infinite loop is broken, the run() method finishes, and the thread dies. The user-interface thread can invoke this thread’s interrupt() method when the user selects Exit from a menu or otherwise indicates that he wants the program to quit.
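For instance, a minimal sketch of that shutdown pattern might look like this, where PageChecker is a hypothetical Runnable whose run() method is the polling loop shown earlier:

Thread poller = new Thread(new PageChecker()); // PageChecker is hypothetical
poller.start();

// Later, on the user-interface thread, when the user selects Exit:
poller.interrupt(); // the sleeping thread wakes with an InterruptedException,
                    // breaks out of its loop, and dies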
If a thread is blocked on an I/O operation such as a read or write, the effect of interrupting the thread is highly platform dependent. More often than not, it is a no-op: the thread simply continues to be blocked. On Solaris, the read() or write() method may instead throw an InterruptedIOException, a subclass of IOException. However, this is unlikely to happen on other platforms, and it may not work with all stream classes even on Solaris. If your program architecture requires interruptible I/O, you should seriously consider using the nonblocking I/O discussed in Chapter 11 rather than streams. Unlike streams, buffers and channels are explicitly designed to support interruption while blocked on a read or write.
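For instance, a thread blocked in a read on a SocketChannel (channels are covered in Chapter 11) responds to interrupt() by having the channel closed and a ClosedByInterruptException thrown. A rough sketch, using a hypothetical host, might look like this:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.SocketChannel;

public class InterruptibleReader implements Runnable {
  @Override
  public void run() {
    // Hypothetical host and port, for illustration only.
    try (SocketChannel channel =
           SocketChannel.open(new InetSocketAddress("www.example.com", 80))) {
      ByteBuffer buffer = ByteBuffer.allocate(8192);
      while (channel.read(buffer) != -1) {
        buffer.clear();
        // process the data...
      }
    } catch (ClosedByInterruptException ex) {
      // Another thread interrupted us while we were blocked in read();
      // the channel has already been closed for us.
    } catch (java.io.IOException ex) {
      System.err.println(ex);
    }
  }
}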
It’s not uncommon for one thread to need the result of another thread. For example, a web browser loading an HTML page in one thread might spawn a separate thread to retrieve every image embedded in the page. If the IMG elements don’t have HEIGHT and WIDTH attributes, the main thread might have to wait for all the images to load before it can finish by displaying the page. Java provides three join() methods to allow one thread to wait for another thread to finish before continuing. These are:
public final void join() throws InterruptedException
public final void join(long milliseconds) throws InterruptedException
public final void join(long milliseconds, int nanoseconds) throws InterruptedException
The first variant waits indefinitely for the joined thread to finish. The second two variants wait for the specified amount of time, after which they continue even if the joined thread has not finished. As with the sleep() method, nanosecond accuracy is not guaranteed.
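For example, here is a sketch of a timed join that gives up after half a second and then checks whether the other thread actually finished (someRunnable is a hypothetical task):

Thread worker = new Thread(someRunnable); // someRunnable is a hypothetical Runnable
worker.start();
try {
  worker.join(500); // wait at most 500 milliseconds
  if (worker.isAlive()) {
    // The timeout expired before the worker finished; decide whether to
    // keep waiting, interrupt it, or move on without its result.
  }
} catch (InterruptedException ex) {
  // this thread was interrupted while it was waiting
}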
The joining thread (i.e., the one that invokes the join() method) waits for the joined thread (i.e., the one whose join() method is invoked) to finish. For instance, consider this code fragment. You want to find the minimum, maximum, and median of a random array of doubles. It’s quicker to do this with a sorted array. You spawn a new thread to sort the array, then join to that thread to await its results. Only when it’s done do you read out the desired values.
double[] array = new double[10000];                        // 1
for (int i = 0; i < array.length; i++) {                   // 2
  array[i] = Math.random();                                // 3
}                                                          // 4
SortThread t = new SortThread(array);                      // 5
t.start();                                                 // 6
try {                                                      // 7
  t.join();                                                // 8
  System.out.println("Minimum: " + array[0]);              // 9
  System.out.println("Median: " + array[array.length/2]);  // 10
  System.out.println("Maximum: " + array[array.length-1]); // 11
} catch (InterruptedException ex) {                        // 12
}                                                          // 13
First lines 1 through 4 execute, filling the array with random numbers. Then line 5 creates a new SortThread. Line 6 starts the thread that will sort the array. Before you can find the minimum, median, and maximum of the array, you need to wait for the sorting thread to finish. Therefore, line 8 joins the current thread to the sorting thread. At this point, the thread executing these lines of code stops in its tracks. It waits for the sorting thread to finish running. The minimum, median, and maximum are not retrieved in lines 9 through 11 until the sorting thread has finished running and died. Notice that at no point is there a reference to the thread that pauses. It’s not the Thread object on which the join() method is invoked; it’s not passed as an argument to that method. It exists implicitly only as the current thread. If this is within the normal flow of control of the main() method of the program, there may not be any Thread variable anywhere that points to this thread.
A thread that’s joined to another thread can be interrupted just like a sleeping thread if some other thread invokes its interrupt() method. The thread experiences this invocation as an InterruptedException. From that point forward, it executes as normal, starting from the catch block that caught the exception. In the preceding example, if the thread is interrupted, it skips over the calculation of the minimum, median, and maximum because they won’t be available if the sorting thread was interrupted before it could finish.
You can use join() to fix up Example 3-4. Its problem was that the main() method tended to outpace the threads whose results the main() method was using. It’s straightforward to fix this by joining to each thread before trying to use its result. Example 3-12 demonstrates.
import javax.xml.bind.DatatypeConverter;

public class JoinDigestUserInterface {

  public static void main(String[] args) {

    ReturnDigest[] digestThreads = new ReturnDigest[args.length];

    for (int i = 0; i < args.length; i++) {
      // Calculate the digest
      digestThreads[i] = new ReturnDigest(args[i]);
      digestThreads[i].start();
    }

    for (int i = 0; i < args.length; i++) {
      try {
        digestThreads[i].join();
        // Now print the result
        StringBuffer result = new StringBuffer(args[i]);
        result.append(": ");
        byte[] digest = digestThreads[i].getDigest();
        result.append(DatatypeConverter.printHexBinary(digest));
        System.out.println(result);
      } catch (InterruptedException ex) {
        System.err.println("Thread Interrupted before completion");
      }
    }
  }
}
Because Example 3-12 joins to threads in the same order as the threads are started, this fix also has the side effect of printing the output in the same order as the arguments used to construct the threads, rather than in the order the threads finish. This modification doesn’t make the program any slower, but it may occasionally be an issue if you want to get the output of a thread as soon as it’s done, without waiting for other unrelated threads to finish first.
A thread can wait on an object it has locked. While waiting, it releases the lock on the object and pauses until it is notified by some other thread. Another thread changes the object in some way, notifies the thread waiting on that object, and then continues. This differs from joining in that neither the waiting nor the notifying thread has to finish before the other thread can continue. Waiting pauses execution until an object or resource reaches a certain state. Joining pauses execution until a thread finishes.
Waiting on an object is one of the lesser-known ways a thread can pause. That’s because it doesn’t involve any methods in the Thread class. Instead, to wait on a particular object, the thread that wants to pause must first obtain the lock on the object using synchronized and then invoke one of the object’s three overloaded wait() methods:
public final void wait() throws InterruptedException
public final void wait(long milliseconds) throws InterruptedException
public final void wait(long milliseconds, int nanoseconds) throws InterruptedException
These methods are not in the Thread class; rather, they are in the java.lang.Object class. Consequently, they can be invoked on any object of any class. When one of these methods is invoked, the thread that invoked it releases the lock on the object it’s waiting on (though not any locks it possesses on other objects) and goes to sleep. It remains asleep until one of three things happens: the timeout expires, the thread is interrupted, or the object is notified.
The timeout is the same as for the sleep() and join() methods; that is, the thread wakes up after the specified amount of time has passed (within the limits of the local hardware clock accuracy). When the timeout expires, execution of the thread resumes with the statement immediately following the invocation of wait(). However, if the thread can’t immediately regain the lock on the object it was waiting on, it may still be blocked for some time.
Interruption works the same way as sleep() and join(): some other thread invokes the thread’s interrupt() method. This causes an InterruptedException, and execution resumes in the catch block that catches the exception. The thread regains the lock on the object it was waiting on before the exception is thrown, however, so the thread may still be blocked for some time after the interrupt() method is invoked.
The third possibility, notification, is new. Notification occurs when some other thread invokes the notify() or notifyAll() method on the object on which the thread is waiting. Both of these methods are in the java.lang.Object class:
public final void notify()
public final void notifyAll()
These methods must be invoked on the object the thread was waiting on, not on the Thread itself. Before notifying an object, a thread must first obtain the lock on the object using a synchronized method or block. The notify() method selects one thread more or less at random from the list of threads waiting on the object and wakes it up. The notifyAll() method wakes up every thread waiting on the given object.
Once a waiting thread is notified, it attempts to regain the lock of the object it was waiting on. If it succeeds, execution resumes with the statement immediately following the invocation of wait(). If it fails, it blocks on the object until its lock becomes available; then execution resumes with the statement immediately following the invocation of wait().
For example, suppose one thread is reading a JAR archive from a network connection. The first entry in the archive is the manifest file. Another thread might be interested in the contents of the manifest file even before the rest of the archive is available. The interested thread could create a custom ManifestFile object, pass a reference to this object to the thread that would read the JAR archive, and wait on it. The thread reading the archive would first fill the ManifestFile with entries from the stream, then notify the ManifestFile, then continue reading the rest of the JAR archive. When the reader thread notified the ManifestFile, the original thread would wake up and do whatever it planned to do with the now fully prepared ManifestFile object. The first thread works something like this:
ManifestFile m = new ManifestFile();
JarThread t = new JarThread(m, in);
synchronized (m) {
  t.start();
  try {
    m.wait();
    // work with the manifest file...
  } catch (InterruptedException ex) {
    // handle exception...
  }
}
The JarThread class works like this:
ManifestFile theManifest;
InputStream in;

public JarThread(ManifestFile m, InputStream in) {
  theManifest = m;
  this.in = in;
}

@Override
public void run() {
  synchronized (theManifest) {
    // read the manifest from the stream in...
    theManifest.notify();
  }
  // read the rest of the stream...
}
Waiting and notification are more commonly used when multiple threads want to wait on the same object. For example, one thread may be reading a web server logfile in which each line contains one entry to be processed. Each line is placed in a java.util.List as it’s read. Several threads wait on the List to process entries as they’re added. Every time an entry is added, the waiting threads are notified using the notifyAll() method. If more than one thread is waiting on an object, notifyAll() is preferred, because there’s no way to select which thread to notify. When all threads waiting on one object are notified, all will wake up and try to get the lock on the object. However, only one can succeed immediately. That one continues; the rest are blocked until the first one releases the lock. If several threads are all waiting on the same object, a significant amount of time may pass before the last one gets its turn at the lock on the object and continues. It’s entirely possible that the object on which the thread was waiting will once again have been placed in an unacceptable state during this time. Thus, you’ll generally put the call to wait() in a loop that checks the current state of the object. Do not assume that just because the thread was notified, the object is now in the correct state. Check it explicitly if you can’t guarantee that once the object reaches a correct state it will never again reach an incorrect state. For example, this is how the client threads waiting on the logfile entries might look:
private List<String> entries;

public void processEntry() {
  synchronized (entries) { // must synchronize on the object we wait on
    while (entries.isEmpty()) {
      try {
        entries.wait();
        // We stopped waiting because entries.size() became non-zero
        // However we don't know that it's still non-zero so we
        // pass through the loop again to test its state now.
      } catch (InterruptedException ex) {
        // If interrupted, the last entry has been processed so
        return;
      }
    }
    String entry = entries.remove(entries.size()-1);
    // process this entry...
  }
}
The code reading the logfile and adding entries to the list might look something like this:
public void readLogFile() {
  while (true) {
    String entry = log.getNextEntry();
    if (entry == null) {
      // There are no more entries to add to the list so
      // we interrupt all threads that are still waiting.
      // Otherwise, they'll wait forever.
      for (Thread thread : threads) thread.interrupt();
      break;
    }
    synchronized (entries) {
      entries.add(0, entry);
      entries.notifyAll();
    }
  }
}
The final way a thread can give up control of the CPU in an orderly fashion is by finishing. When the run() method returns, the thread dies and other threads can take over. In network applications, this tends to occur with threads that wrap a single blocking operation, such as downloading a file from a server, so that the rest of the application won’t be blocked.
Otherwise, if your run() method is so simple that it always finishes quickly enough without blocking, there’s a very real question of whether you should spawn a thread at all. There’s a nontrivial amount of overhead for the virtual machine in setting up and tearing down threads. If a thread is finishing in a small fraction of a second anyway, chances are it would finish even faster if you used a simple method call rather than a separate thread.
Adding multiple threads to a program dramatically improves performance, especially for I/O-bound programs such as most network programs. However, threads are not without overhead of their own. Starting a thread and cleaning up after a thread that has died takes a noticeable amount of work from the virtual machine, especially if a program spawns hundreds of threads—not an unusual occurrence for even a low- to medium-volume network server. Even if the threads finish quickly, this can overload the garbage collector or other parts of the VM and hurt performance, just like allocating thousands of any other kind of object every minute. Even more importantly, switching between running threads carries overhead. If the threads are blocking naturally—for instance, by waiting for data from the network—there’s no real penalty to this; but if the threads are CPU-bound, then the total task may finish more quickly if you can avoid a lot of switching between threads. Finally, and most importantly, although threads help make more efficient use of a computer’s limited CPU resources, there is still only a finite amount of resources to go around. Once you’ve spawned enough threads to use all the computer’s available idle time, spawning more threads just wastes MIPS and memory on thread management.
The Executors class in java.util.concurrent makes it quite easy to set up thread pools. You simply submit each task as a Runnable object to the pool. You get back a Future object you can use to check on the progress of the task.
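For instance, here’s a small sketch of submitting a task and reading its result back through the Future. This variant submits a Callable, which is like a Runnable except that its call() method returns a value; the summing task is purely illustrative:

import java.util.concurrent.*;

public class FutureDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);

    // Callable is like Runnable, except call() returns a value.
    Future<Long> sum = pool.submit(new Callable<Long>() {
      @Override public Long call() {
        long total = 0;
        for (int i = 1; i <= 1000000; i++) total += i;
        return total;
      }
    });

    System.out.println("Done yet? " + sum.isDone()); // check on the task's progress
    System.out.println("Sum: " + sum.get());         // blocks until the result is ready

    pool.shutdown();
  }
}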
Let’s look at an example. Suppose you want to gzip every file in the current directory using a java.util.zip.GZIPOutputStream. This is a filter stream that compresses all the data it writes.
On the one hand, this is an I/O-heavy operation because all the files have to be read and written. On the other hand, data compression is a very CPU-intensive operation, so you don’t want too many threads running at once. This is a good opportunity to use a thread pool. Each thread in the pool will compress files while the main program determines which files to compress. In this example, the main program is likely to significantly outpace the compressing threads because all it has to do is list the files in a directory. Therefore, it’s not out of the question to fill the pool first, then start the threads that compress the files in the pool. However, to make this example as general as possible, you’ll allow the main program to run in parallel with the zipping threads.
Example 3-13 shows the GZipRunnable class. It has a single field that identifies the file to compress. The run() method compresses this file and returns.
import java.io.*;
import java.util.zip.*;

public class GZipRunnable implements Runnable {

  private final File input;

  public GZipRunnable(File input) {
    this.input = input;
  }

  @Override
  public void run() {
    // don't compress an already compressed file
    if (!input.getName().endsWith(".gz")) {
      File output = new File(input.getParent(), input.getName() + ".gz");
      if (!output.exists()) {
        // Don't overwrite an existing file
        try ( // with resources; requires Java 7
          InputStream in = new BufferedInputStream(new FileInputStream(input));
          OutputStream out = new BufferedOutputStream(
            new GZIPOutputStream(new FileOutputStream(output)));
        ) {
          int b;
          while ((b = in.read()) != -1) out.write(b);
          out.flush();
        } catch (IOException ex) {
          System.err.println(ex);
        }
      }
    }
  }
}
Notice the use of Java 7’s try-with-resources statement in GZipRunnable. Both the input and output stream are declared at the beginning of the try block and automatically closed at the end of the try block.
Also notice the buffering of both input and output. This is very important for performance in I/O-limited applications, and especially important in network programs. At worst, buffering has no impact on performance, while at best it can give you an order of magnitude speedup or more.
Example 3-14 is the main program. It constructs a pool with a fixed thread count of four and iterates through all the files and directories listed on the command line. Each of those files, and each file in those directories, is used to construct a GZipRunnable. Each runnable is submitted to the pool for eventual processing by one of the four threads.
import java.io.*;
import java.util.concurrent.*;

public class GZipAllFiles {

  public final static int THREAD_COUNT = 4;

  public static void main(String[] args) {

    ExecutorService pool = Executors.newFixedThreadPool(THREAD_COUNT);

    for (String filename : args) {
      File f = new File(filename);
      if (f.exists()) {
        if (f.isDirectory()) {
          File[] files = f.listFiles();
          for (int i = 0; i < files.length; i++) {
            if (!files[i].isDirectory()) { // don't recurse directories
              Runnable task = new GZipRunnable(files[i]);
              pool.submit(task);
            }
          }
        } else {
          Runnable task = new GZipRunnable(f);
          pool.submit(task);
        }
      }
    }

    pool.shutdown();
  }
}
Once you have added all the files to the pool, you call pool.shutdown(). Chances are this happens while there’s still work to be done. This method does not abort pending jobs. It simply notifies the pool that no further tasks will be added to its internal queue and that it should shut down once it has finished all pending work.
Shutting down like this is somewhat atypical of the heavily threaded network programs you’ll write, because this program has a definite ending point: the point at which all files have been processed. Most network servers continue indefinitely until shut down through an administration interface. In those cases, you may want to invoke shutdownNow() instead to abort currently processing tasks and skip any pending tasks.
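If you also need to know when all the queued work has actually finished, a sketch using awaitTermination() might look like this; the one-hour limit is arbitrary, and TimeUnit comes from the java.util.concurrent package already imported in Example 3-14:

pool.shutdown(); // no new tasks accepted; already queued tasks still run
try {
  // Block until every submitted task finishes, or an hour passes, whichever comes first.
  if (!pool.awaitTermination(1, TimeUnit.HOURS)) {
    pool.shutdownNow(); // give up: cancel queued tasks and interrupt running ones
  }
} catch (InterruptedException ex) {
  pool.shutdownNow();
  Thread.currentThread().interrupt();
}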