As the name suggests, the filestore object store works by storing RADOS objects as files on a standard Linux filesystem, which in most cases will be XFS. Because each object is stored as a file, there will likely be hundreds of thousands, if not millions, of files per disk. Consider, for example, a Ceph cluster composed of 8 TB disks and used for an RBD workload: assuming that the RBDs are made up of the standard 4 MB objects, there would be nearly 2,000,000 objects per disk (8,000,000 MB / 4 MB).
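To make this concrete, the listing below is a rough sketch of what a filestore data directory looks like on disk; the OSD ID and PG names are examples and assume the default /var/lib/ceph data path:

# List the placement group directories of a filestore OSD (OSD 0 is an example)
ls /var/lib/ceph/osd/ceph-0/current/
# 1.0_head  1.1_head  1.2_head  ...  meta  omap

# Inside each PG directory, every RADOS object is stored as an individual file
ls /var/lib/ceph/osd/ceph-0/current/1.0_head/ | wc -l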
When an application asks Linux to read or write a file on a filesystem, it needs to know where that file actually exists on the disk. To find this location, it must walk the structure of directory entries and inodes, and each of these lookups requires a disk access if the entry is not already cached in memory. This can lead to poor performance when the Ceph objects being read or written have not been accessed recently and are therefore not cached. The penalty is much higher on spinning-disk clusters than on SSD-based clusters, because of the cost of the additional random reads.
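If you want to see how much of this metadata the kernel is currently caching on an OSD node, the commands below are one way to check; both are standard Linux tools rather than anything Ceph-specific:

# Show the largest kernel slab caches; on a busy filestore node, caches
# such as dentry and xfs_inode will usually appear near the top
sudo slabtop -o | head -n 15

# Dentry cache statistics (total dentries, unused dentries, and so on)
cat /proc/sys/fs/dentry-state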
By default, Linux favors caching data in the page cache over caching inodes and directory entries. In many Ceph scenarios, this is the opposite of what you want. Luckily, there is a kernel tunable that allows you to tell Linux to prefer directory entries and inodes over the page cache; it is controlled with the following sysctl setting:
vm.vfs_cache_pressure
A lower number sets a preference for caching inodes and directory entries; however, do not set it to zero. A value of zero tells the kernel never to reclaim old entries, even in a low-memory condition, and can have adverse effects. A value of 1 is recommended.
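The following is a sketch of how this setting is typically checked and applied on an OSD node; the sysctl.d file name is just an example:

# Check the current value (the Linux default is 100)
sysctl vm.vfs_cache_pressure

# Apply the recommended value immediately
sudo sysctl -w vm.vfs_cache_pressure=1

# Persist the setting across reboots
echo "vm.vfs_cache_pressure = 1" | sudo tee /etc/sysctl.d/90-ceph-osd.conf
sudo sysctl --system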