In programming terms, caching refers to storing a value (or values) for quick retrieval in the future. Typically, you'd do this with values that are slow to compute for some reason; for example, they require hitting an external API to retrieve, or they involve a lot of number-crunching to generate.

Cached values are often stored on a separate server, like memcached or Redis, though they can also live on disk or in RAM. In code, we often 'cache' data in variables to avoid calling expensive functions multiple times:

data = some_calculation() # the expensive work happens only once
a(data)                   # ...and the result is reused here
b(data)                   # ...and here

The trade-off for all the speed you gain is that you're using old data. What if the cached data become 'stale' and are no longer accurate? You'll have to clear the cache to 'invalidate' it.

The Argument Against Caching

As the old saying goes, there are only 2 hard problems in computer science:

  1. Naming things
  2. Cache invalidation
  3. Off-by-one errors

Why is cache invalidation so difficult? A cached value, by its very nature, 'hides' a real value. Any time the 'real' value changes, you (yes, you, the programmer) have to remember to 'invalidate' the cache so that it will get updated.

Suppose you're adding a 'word count' widget to a text editor. You need to update the word count as the user types. The simplest approach is to re-count the words on every keystroke, but this is too slow. There is another approach:

  1. Count the words when loading the file.
  2. Save this word-count to a variable (or 'cache it').
  3. Display the contents of the variable to screen.

This implementation is much faster, but now the cached 'word count' doesn't change as we type. To keep it accurate, we need to 'invalidate' the cache whenever we expect the word count to change.
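As a rough sketch (a hypothetical WordCounter class, with names invented purely for illustration), the cached count and its invalidation might look something like this:

class WordCounter
  def initialize(text)
    @text = text
    @cached_count = nil
  end

  # Cheap to call repeatedly: only recounts when the cache is empty
  def count
    @cached_count ||= @text.split.size
  end

  # Must be called from every code path that changes the text
  def invalidate!
    @cached_count = nil
  end
end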

In practice, rather than recounting from scratch, you can detect new words (i.e., spaces) as keystrokes are made and increment the word counter. Of course, you'll also decrement it when the user deletes words. Easy. Done. Next ticket.

...But wait, did you remember to update the word count when the user cuts text to the clipboard? What about when they paste text? What about when the spell-checker splits a typo into two words?

The problem here isn't updating the value, which is fairly trivial. The problem is that you have to remember to update it in every single place. Missing just one of these updates causes cache invalidation problems, meaning you'll be displaying a stale value to the user.

With this in mind, you can see that adding caching brings technical complexity and potential sources of bugs. Of course, these problems can be solved, but it's something to keep in mind before jumping to caching as the solution.

Speed Without Caching

If we take caching off the table, speeding up our application is all about identifying and fixing performance bottlenecks - systems that are slower than they could be. We can group them into three overall categories:

  1. Database queries (either too many or too slow)
  2. View rendering
  3. Application code (e.g., performing heavy calculations)

When working on performance, there are two techniques you need to know about to make headway: profiling and benchmarking.

Profiling

Profiling is how you know where the problems are in your app: Is this page slow because rendering the template is slow? Or, is it slow because it's hitting the database a million times?

For Ruby on Rails, I'd recommend rack-mini-profiler, which adds a nice little widget to the edge of your app. It gives you a good overview of what it took to render the page you're looking at, such as how many database queries were fired off, how long they took, and how many partials were rendered.

For production, there are online services, including Skylight, New Relic, and Scout, that monitor page performance. (Pro-tip: rack-mini-profiler also works well in production; just make sure it only appears for certain users, such as admins or developers.)
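A minimal sketch of that restriction (assuming a current_user helper with an admin? flag; authorize_request is rack-mini-profiler's whitelisting hook) might look like this:

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :authorize_mini_profiler

  private

  # Only show the rack-mini-profiler widget to admins
  def authorize_mini_profiler
    Rack::MiniProfiler.authorize_request if current_user&.admin?
  end
end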

The typically cited target of <= 100ms is a great goal for page rendering, as anything faster than that is difficult for a user to detect in real-world internet usage anyway. Your target will vary depending on many factors. At one point, when working on a legacy application with terrible performance, I made <= 1 second the target, which is not great but a heck of a lot better than where I started.

Benchmarking

Once we've figured out where the problem is, we can use benchmarks to see what effect (if any) our optimizations have on performance. Personally, I like using the benchmark-ips gem for this kind of work, as it gives you an easy, human-readable way to see the difference your code has made.

As a trivial example, here's a comparison of string concatenation vs string interpolation:

require 'benchmark/ips'

@a = "abc"
@b = "def"
Benchmark.ips do |x|
  x.report("Concatenation") { @a + @b }
  x.report("Interpolation") { "#{@a}#{@b}" }
  x.compare!
end

and the results:

Warming up --------------------------------------
       Concatenation   316.022k i/100ms
       Interpolation   282.422k i/100ms
Calculating -------------------------------------
       Concatenation     10.353M (± 7.4%) i/s -     51.512M in   5.016567s
       Interpolation      6.615M (± 6.8%) i/s -     33.043M in   5.023636s

Comparison:
       Concatenation: 10112435.3 i/s
       Interpolation:  6721867.3 i/s - 1.50x  slower

This gives us a nice human-readable result: interpolation is 1.5 times slower than concatenation (at least for our small strings). When optimizing your own code, I'd also recommend copying the method you're trying to improve and giving it a new name; you can then run quick comparisons between the old and new versions to see if you're actually improving performance as you go.
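For example, a side-by-side comparison of an original method and a renamed, tweaked copy (the method names and sample text here are made up for illustration) might look like this:

require 'benchmark/ips'

# Keep the original method around under its own name while experimenting
def word_count_original(text)
  text.split(/\s+/).reject(&:empty?).length
end

def word_count_optimized(text)
  text.split.length
end

sample = "the quick brown fox jumps over the lazy dog " * 100

Benchmark.ips do |x|
  x.report("original")  { word_count_original(sample) }
  x.report("optimized") { word_count_optimized(sample) }
  x.compare!
end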

Fixing Performance Issues

At this point, we know what parts of our app are slow. We have benchmarks in place to measure any improvement when it happens. Now, we just need to do the actual work of optimizing performance. The techniques you choose will depend on where your issues are: in the database, views, or application.

Database Performance

For database-related performance issues, there are a few things to look at. First, avoid the dreaded 'N+1 queries.' Situations like this often occur when rendering a collection in a view. For example, you have a user with 10 blog posts, and you want to display the user and all of his or her posts. A naive first cut might look something like this:

# Controller
def show
  @user = User.find(params[:id])
end
# View
Name: <%= @user.name %>
Posts:
  <% @user.posts.each do |post| %>
    <div>Title: <%= post.title %></div>
  <% end %>

The approach shown above will get the user (1 query) and then fire off a query for each individual post (N=10 queries), resulting in 11 queries total (or N+1). Fortunately, Rails provides a simple solution to this problem: adding .includes(:posts) to your ActiveRecord query. So, in the above example, we just change the controller code to the following:

def show
  @user = User.includes(:posts).find(params[:id])
end

Now, we will fetch the user and all of his or her posts up front, rather than running a separate query for each post.

Another thing to look for is where you can push calculations into the database, which is usually faster than performing the same operation in your application. A common form of this is aggregations like the following:

total = Model.all.map(&:somefield).sum

This is grabbing all the records from the database, but the actual summing of the values happens in Ruby. We can speed this up by having the database perform the calculation for us like so:

total = Model.sum(:somefield)

Perhaps you need something more complicated, such as multiplying two columns:

total = Model.sum('columna * columnb')

Common databases support basic arithmetic like this and also common aggregations like sum and average, so be on the lookout for map(...).sum calls in your codebase.
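For instance, the same idea applies to averages (Model and somefield are placeholders, as above):

# Slow: pulls every value into Ruby and averages it there
values = Model.all.map(&:somefield)
average = values.sum.to_f / values.size

# Fast: a single SELECT AVG(somefield) run by the database
average = Model.average(:somefield)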

View Performance

Although I would say template-related performance woes lend themselves more to caching as a solution, there is still some low-hanging fruit that you may want to rule out first.

For general page-load times, you can check that you are using minified sources for any JavaScript or CSS libraries (on production servers, at least).

Also, watch out for large numbers of partials being included. If your _widget.html.erb template takes 1ms to process, but you have 100 widgets on the page, then that's 100ms gone already. One solution is to reconsider your UI. Having 100 widgets on the screen at once is usually not a great user experience, and you may want to look at using some form of pagination or, perhaps, an even more drastic UI/UX overhaul.
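If you do go the pagination route, a rough sketch with a pagination gem such as kaminari (the Widget model and per-page count here are made up for illustration) could look like this:

# Controller: load only one page of widgets per request
def index
  @widgets = Widget.page(params[:page]).per(25)
end

# View
<%= render partial: 'widget', collection: @widgets %>
<%= paginate @widgets %>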

Application Code Performance

If your performance issue is in the application code itself (i.e., the manipulation of data) rather than the view or database layers, you have a couple of options. One is to see if at least some of the work could be pushed into the database, either as queries, as described above, or as database views (with, perhaps, something like the scenic gem).
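As a rough sketch of the database-view route (the view name, SQL, and model here are hypothetical; create_view is the migration helper that scenic provides):

-- db/views/monthly_totals_v01.sql
SELECT date_trunc('month', created_at) AS month, SUM(amount) AS total
FROM orders
GROUP BY 1;

# Migration
class CreateMonthlyTotals < ActiveRecord::Migration[7.0]
  def change
    create_view :monthly_totals
  end
end

# Read-only model backed by the database view
class MonthlyTotal < ApplicationRecord
  def readonly?
    true
  end
end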

Another option is to move the 'heavy lifting' into a background job, though this may require changes to your UI to handle the fact that the value is now going to be computed asynchronously.
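A minimal sketch with ActiveJob (the job, model, and column names here are invented for illustration):

# app/jobs/heavy_calculation_job.rb
class HeavyCalculationJob < ApplicationJob
  queue_as :default

  def perform(record_id)
    record = Model.find(record_id)
    # The slow work now runs outside the request/response cycle
    record.update(calculated_total: record.expensive_calculation)
  end
end

# In the controller, enqueue the job instead of doing the work inline
HeavyCalculationJob.perform_later(record.id)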

I Still Need Caching; Now What?

Having made it through all this, maybe you've decided that yes, caching is the solution you need. So, what should you do? Stay tuned because this is the first in a series of articles covering different forms of caching available within Ruby on Rails.
