If you have far more reads than writes, model caching may help lighten the load on the database server. The standard in-memory cache these days is memcached.[41] Developed for LiveJournal, memcached is a distributed cache that functions as a giant hashtable. Because of its simplicity, it is scalable and fast. It is designed never to block, so there is no risk of deadlock. There are four simple operations on the cache, each completing in constant time.
You can actually use memcached in several different places in Rails. It is available as a session store or a fragment cache store out of the box, assuming the ruby-memcache gem is installed. It can also be used to store complete models—but remember that this will only be effective for applications where reads vastly outnumber writes. There are two libraries that cover model caching: cached_model and acts_as_cached.
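For the first two uses, wiring memcached in is a matter of configuration. The following is a sketch only, in the style of Rails configuration from this era; it assumes a memcached daemon reachable at localhost:11211 and the ruby-memcache gem installed, so adjust the host and port to your deployment:

```ruby
# config/environment.rb -- a sketch, not a drop-in recipe; assumes a
# memcached daemon on localhost:11211 and the ruby-memcache gem
config.action_controller.session_store = :mem_cache_store
ActionController::Base.fragment_cache_store = :mem_cache_store, 'localhost:11211'
```

Model caching, the third use, needs library support, which is where the two libraries below come in.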
The cached_model library (http://dev.robotcoop.com/Libraries/cached_model/index.html) provides an abstract subclass of ActiveRecord::Base, CachedModel. It attempts to be as transparent as possible, caching only the simple queries against single objects and not trying to do anything fancy. It does have the disadvantage that all cached models must inherit from CachedModel. Use of cached_model is dead simple:

class Client < CachedModel
end
On the other hand, the acts_as_cached plugin (http://errtheblog.com/post/27) gives you finer control over what is cached. It feels more like programming directly against memcached's API, but it offers more power with less verbosity. It has support for relationships between objects, and it can even version each key so that stale keys are invalidated during a schema change. A sample use of acts_as_cached might look like this:
class Client < ActiveRecord::Base
  acts_as_cached

  # We have to expire the cache ourselves upon significant changes
  after_save    :expire_me
  after_destroy :expire_me

  protected

  def expire_me
    expire_cache(id)
  end
end
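The read-through, expire-on-write pattern the callbacks above implement can be sketched with a plain Ruby Hash standing in for memcached. Everything here is illustrative: fetch_client and update_client are hypothetical helpers, not part of the plugin's API.

```ruby
DB    = { 1 => 'Initech' }    # stand-in for the clients table
CACHE = {}                    # stand-in for memcached

def fetch_client(id)
  CACHE[id] ||= DB[id]        # read-through: a cache miss falls back to the database
end

def update_client(id, name)
  DB[id] = name               # the write that after_save would observe
  CACHE.delete(id)            # expire the stale entry, as expire_me does
end

fetch_client(1)               # miss: loads 'Initech' from DB into the cache
update_client(1, 'Initrode')  # write, plus cache expiry
fetch_client(1)               # miss again, so the fresh name is returned
```

The key point is that the write path must expire the cached copy itself; memcached has no idea when your database changes.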
Of course, the proper solution for you will depend on the specific needs of the application. Keep in mind that any caching is primarily about optimization, and the old warnings against premature optimization always apply. Optimization should always be targeted at a specific, measured performance problem. Without specificity, you don't know what metric you are (or should be) measuring. Without measurement, you don't know when or by how much you've improved it.
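In that spirit, Ruby's standard Benchmark library is enough to put a number on a suspect code path before reaching for a cache. The method below is a made-up stand-in for an expensive, uncached lookup, not anything from the libraries above:

```ruby
require 'benchmark'

# A made-up stand-in for an expensive, uncached lookup
def slow_lookup(n)
  n.times.map { |i| i * i }.last
end

# Measure wall-clock time for one pass through the hot path
elapsed = Benchmark.realtime { slow_lookup(100_000) }
puts "uncached path took #{(elapsed * 1000).round} ms"
```

Only once a path like this shows up as a measured hot spot is caching worth the added complexity; rerun the same measurement afterward to confirm the improvement.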
[41] Pronounced "mem-cache-dee," for "memory cache daemon." Available from http://danga.com/memcached/.