I believe Rails.cache uses the ActiveSupport::Cache::Store interface, which has a read_multi method for this exact purpose. [1]

I think swapping out fetch for read_multi will improve your performance, because ActiveSupport::Cache::MemCacheStore has an optimized implementation of read_multi. [2]
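To illustrate why batching matters, here is a minimal plain-Ruby sketch with a hypothetical CountingStore standing in for a remote cache (it is not part of ActiveSupport); it just counts how many network round trips each access pattern would cost:

```ruby
# Hypothetical stand-in for a remote cache store that counts
# network round trips. Not part of ActiveSupport; for illustration only.
class CountingStore
  attr_reader :round_trips

  def initialize(data)
    @data = data
    @round_trips = 0
  end

  # Reading a single key costs one round trip.
  def read(key)
    @round_trips += 1
    @data[key]
  end

  # Reading any number of keys in one batch costs one round trip.
  def read_multi(*keys)
    @round_trips += 1
    keys.each_with_object({}) { |k, h| h[k] = @data[k] if @data.key?(k) }
  end
end

store = CountingStore.new("a" => 1, "b" => 2, "c" => 3)
%w[a b c].each { |k| store.read(k) }
puts store.round_trips   # prints 3: one trip per key

store = CountingStore.new("a" => 1, "b" => 2, "c" => 3)
store.read_multi("a", "b", "c")
puts store.round_trips   # prints 1: one trip for all three keys
```

With N messages, per-key fetch costs N round trips while read_multi costs one, which is where the speedup comes from.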
Code
Here's the updated implementation:
# Build a versioned cache key for each message.
keys = @messages.collect { |m| "message/#{m.id}/#{m.updated_at.to_i}" }

# Fetch every key in a single round trip.
hits = Rails.cache.read_multi(*keys)

keys.each_with_index do |key, i|
  if hits.key?(key)
    @messages[i] = hits[key]  # cache hit
  else
    # Cache miss: serialize the message, store it, and use the serialized form.
    Rails.cache.write(key, @messages[i] = @messages[i].as_json)
  end
end
The cache writes are still performed synchronously, with one round trip to the cache for each miss. If you want to cut down on that overhead, look into running the writes asynchronously with a background-job library such as Workling.

Before you start expanding your architecture, though, make sure the overhead of enqueuing the asynchronous job is actually lower than the overhead of Rails.cache.write itself.
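As a rough sketch of the write-behind idea, here is a plain-Ruby version that pushes writes onto a queue drained by a worker thread; the AsyncCacheWriter class is hypothetical, and a real app would use a job library such as Workling rather than a raw Thread:

```ruby
# Hypothetical write-behind helper: cache writes are enqueued and
# performed by a background thread instead of blocking the request.
class AsyncCacheWriter
  def initialize(store)
    @store = store
    @queue = Queue.new
    @worker = Thread.new do
      # Loop until the nil sentinel arrives.
      while (job = @queue.pop)
        key, value = job
        @store[key] = value  # stand-in for Rails.cache.write(key, value)
      end
    end
  end

  # Returns immediately; the worker performs the write later.
  def write_later(key, value)
    @queue << [key, value]
  end

  # Flush pending writes and stop the worker.
  def shutdown
    @queue << nil
    @worker.join
  end
end

cache = {}
writer = AsyncCacheWriter.new(cache)
writer.write_later("message/1/1234567890", { "body" => "hi" })
writer.shutdown
```

Note that even this cheap enqueue is not free, which is the trade-off the paragraph above warns about.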
Memcached Multi-Set
It looks like the Memcached team has at least considered providing multi-set (batch write) commands, but there is no ActiveSupport interface for them yet, and it is unclear how widely server and client implementations support them. [3]