How it works...

We know from the Optimizing a synchronous function call recipe that reduce is potentially expensive; profiling and flamegraph visualization proved this again.

Once that was removed, the only remaining user-land code (code we have direct control over) was the sum function, so we checked whether it was being optimized. We could have checked this with the --trace-opt and --trace-deopt flags (and perhaps --trace-inlining), or with the native-syntax %GetOptimizationStatus function, but in this case we used flamegraphs to quickly locate our sum function and check its optimization status.
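As a quick illustration of the native-syntax approach (the inline sum function here is just a stand-in, not the recipe's code), we can enable V8's native syntax and query a function's optimization status directly:

```shell
# --allow-natives-syntax unlocks V8's % functions inside our script
node --allow-natives-syntax -e '
  function sum (a, b) { return a + b }
  // warm the function up so V8 has a chance to optimize it
  for (let i = 0; i < 1e6; i++) sum(i, i)
  // prints a numeric status bitfield describing the function state
  console.log(%GetOptimizationStatus(sum))
'
```

The exact number printed varies between V8 versions, since the status is a bitfield whose layout has changed over time.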

Whenever we allocate a new function that's going to be called many times, we ideally want it to be optimized by V8.

The soonest V8 can optimize a new function is after its first invocation.

Node.js is built around callbacks and functions; the prevailing pattern for asynchronous interaction (when we need to wait for some I/O) is to allocate a new function, thereby wrapping the state in a closure.

However, by identifying areas of CPU-intensive behavior within an asynchronous context and ensuring that such logic is instantiated only once, at the top level, we can ensure that we deliver the best possible performance for our users.
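A minimal sketch of this pattern (the names totalUp and sum are illustrative, not taken from the recipe): the CPU-intensive sum function is allocated once at module load, so every asynchronous call reuses the same function object and V8 can optimize it after the early invocations, instead of seeing a fresh, unoptimized closure each time.

```javascript
'use strict'

// Allocated once, at the top level: a stable target for V8 optimization
function sum (arr) {
  let total = 0
  for (let i = 0; i < arr.length; i++) total += arr[i]
  return total
}

// Asynchronous context: only the small callback closure is allocated
// per call; the hot sum function is never re-created
function totalUp (arr, cb) {
  setImmediate(() => cb(null, sum(arr)))
}

totalUp([1, 2, 3, 4], (err, total) => {
  if (err) throw err
  console.log(total) // 10
})
```

Had sum been declared inside the setImmediate callback instead, a new function object would be created on every call to totalUp, forcing V8 to start its optimization bookkeeping from scratch each time.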

Reusify
For an advanced function-reuse approach that triggers V8 optimizations, check out the http://npm.im/reusify utility module.