To demonstrate the difference between sequential and parallel processing, let's imagine a system that collects data from 10 physical devices (sensors) and calculates an average. The following is the get() method that collects a measurement from a sensor identified by ID:
double get(int id){
    try {
        TimeUnit.MILLISECONDS.sleep(100);
    } catch (InterruptedException ex){
        ex.printStackTrace();
    }
    return id * Math.random();
}
We have put in a delay of 100 milliseconds to imitate the time it takes to collect a measurement from the sensor. For the resulting measurement value, we use the Math.random() method (multiplied by the sensor ID). We are going to call this get() method on an object of the MeasuringSystem class, to which the method belongs.
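For concreteness, here is a minimal sketch of the class hosting the get() method. The class name MeasuringSystem comes from the text; the exact shape of the class around the method is our assumption:

```java
import java.util.concurrent.TimeUnit;

// A minimal sketch of the class that hosts get(); the class name is
// from the text, the surrounding shape of the class is an assumption.
class MeasuringSystem {
    double get(int id) {
        try {
            TimeUnit.MILLISECONDS.sleep(100);  // simulate sensor latency
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        }
        return id * Math.random();             // simulated measurement in [0, id)
    }
}
```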
Then we are going to calculate an average – to offset the errors and other idiosyncrasies of an individual device:
void getAverage(Stream<Integer> ids) {
    LocalTime start = LocalTime.now();
    double a = ids.mapToDouble(id -> new MeasuringSystem().get(id))
                  .average()
                  .orElse(0);
    System.out.println((Math.round(a * 100.) / 100.) + " in " +
            Duration.between(start, LocalTime.now()).toMillis() + " ms");
}
Notice how we convert the stream of IDs into a DoubleStream using the mapToDouble() operation so that we can apply the average() operation. The average() operation returns an OptionalDouble object, and we call its orElse(0) method, which returns either the calculated value or zero (if, for example, the measuring system could not connect to any of its sensors and produced an empty stream).
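The empty-stream fallback is easy to see in isolation. The following self-contained snippet (our own illustration, not part of the measuring system) shows both cases:

```java
import java.util.stream.DoubleStream;

public class AverageDemo {
    public static void main(String[] args) {
        // average() on an empty DoubleStream returns an empty OptionalDouble,
        // so orElse(0) supplies the fallback value
        System.out.println(DoubleStream.empty().average().orElse(0));    // prints: 0.0
        // on a non-empty stream, orElse(0) returns the calculated average
        System.out.println(DoubleStream.of(2.0, 4.0).average().orElse(0)); // prints: 3.0
    }
}
```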
The last line of the getAverage() method prints the result and the time it took to calculate it. In real code, we would return the result and use it in other calculations; but for this demonstration, we just print it.
Now we can compare the performance of a sequential stream processing with the performance of the parallel processing:
List<Integer> ids = IntStream.range(1, 11)
                             .boxed()
                             .collect(Collectors.toList());
getAverage(ids.stream()); //prints: 2.99 in 1030 ms
getAverage(ids.parallelStream()); //prints: 2.34 in 214 ms
As you can see, the processing of the parallel stream is about five times faster than the processing of the sequential stream. Your numbers may differ slightly if you run this example because, as you may recall, we simulate the collected measurements as random values.
Although, behind the scenes, the parallel stream uses asynchronous processing, this is not what programmers have in mind when they talk about the asynchronous processing of requests. From the application's perspective, it is just parallel (also called concurrent) processing. It is faster than sequential processing, but the main thread still has to wait until all the calls are made and all the data is retrieved. If each call takes at least 100 ms (as it does in our case), the processing of all the calls cannot complete in less than that time.
Of course, we can create a child thread that makes all the calls and waits until they complete, while the main thread does something else. We can even create a service that does this, so the application just tells the service what has to be done and then continues doing something else. Later, the main thread can call the service again to get the result, or pick it up at some agreed-upon location.
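The child-thread idea described above can be sketched with an ExecutorService and a Future. This is our illustration, not code from the text; the simulated measurement is inlined so the snippet is self-contained:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class AsyncAverageDemo {
    // The simulated measurement, inlined so the snippet is self-contained
    static double get(int id) {
        try {
            TimeUnit.MILLISECONDS.sleep(100);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
        return id * Math.random();
    }

    public static void main(String[] args) throws Exception {
        List<Integer> ids = IntStream.range(1, 11).boxed().collect(Collectors.toList());
        ExecutorService exec = Executors.newSingleThreadExecutor();

        // The child thread makes all the calls and waits for them...
        Future<Double> future = exec.submit(() ->
                ids.parallelStream()
                   .mapToDouble(AsyncAverageDemo::get)
                   .average()
                   .orElse(0));

        // ...while the main thread is free to do something else
        System.out.println("Main thread keeps working...");

        // Later, the main thread picks up the result (blocking only here)
        System.out.println("Average: " + future.get());
        exec.shutdown();
    }
}
```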
That would be the truly asynchronous processing that programmers talk about. But before writing such code, let's look at the CompletableFuture class located in the java.util.concurrent package. It does everything described here, and more.
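As a preview, here is a sketch of the same idea expressed with CompletableFuture (our illustration; the simulated measurement is again inlined so the snippet is self-contained):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class CompletableFutureDemo {
    // The simulated measurement, inlined so the snippet is self-contained
    static double get(int id) {
        try {
            TimeUnit.MILLISECONDS.sleep(100);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
        return id * Math.random();
    }

    public static void main(String[] args) {
        List<Integer> ids = IntStream.range(1, 11).boxed().collect(Collectors.toList());

        // supplyAsync() starts the work on another thread immediately...
        CompletableFuture<Double> cf = CompletableFuture.supplyAsync(() ->
                ids.parallelStream()
                   .mapToDouble(CompletableFutureDemo::get)
                   .average()
                   .orElse(0));

        // ...so the main thread can do something else in the meantime
        System.out.println("Main thread keeps working...");

        // join() picks up the result when it is ready
        System.out.println("Average: " + cf.join());
    }
}
```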