While writing the cache to disk may seem counter-intuitive from a performance perspective, keep in mind that the Linux kernel caches file access in memory (the page cache). With enough free memory, each file is read from disk once and every subsequent request is served directly from RAM. While this may use more memory than a standard configuration, the cache for a typical website can be as little as 64 MB in total, which is trivial by modern standards.
Having the cache on disk also means it persists across reboots and restarts of NGINX. One of the biggest issues with a cold start of a server is the load on the system until the cache has had a chance to warm up. If loading cache files from disk needs to be as fast as possible, I'd recommend storing the cache on a high-speed Solid State Drive (SSD).
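As a minimal sketch of such a setup, the configuration below keeps the cached responses on disk under a directory you would mount on an SSD-backed filesystem and caps the cache near the 64 MB figure mentioned above. The path, zone name, sizes, and backend address are illustrative assumptions, not values taken from this book.

    # Cache metadata (keys_zone) lives in shared memory; the cached responses
    # themselves live on disk under /var/cache/nginx, which you would mount on
    # an SSD-backed filesystem for the fastest cold reads.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=site_cache:8m
                     max_size=64m inactive=24h use_temp_path=off;

    server {
        listen 80;
        server_name example.com;

        location / {
            # Serve from the disk cache (and the kernel page cache once warm),
            # falling back to the upstream application on a miss.
            proxy_cache site_cache;
            proxy_cache_valid 200 301 302 10m;
            proxy_pass http://127.0.0.1:8080;
        }
    }

Setting use_temp_path=off writes responses straight into the cache directory, avoiding an extra copy from a temporary location, which is worth keeping when the cache sits on its own SSD mount.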