From Isomorphic Web Applications by Elyse Kolker Gordon

This article, adapted from chapter 11 of Isomorphic Web Applications, discusses caching and its importance as a server performance tool.

Save 37% on Isomorphic Web Applications. Just enter code fccgordon into the discount code box at checkout.


Caching is a powerful server performance tool. This is a tool that I’ve employed in different forms, including edge caching, in-memory caching, and saving views in a Redis (a NoSQL database) persisted cache. Each of these strategies has tradeoffs, and it’s important to understand what these are and then pick the right strategy for your use case.

Table 1 Comparing caching options.

                       User Sessions
 In memory             Supported
 Persisted storage     Supported (higher overhead, but possible)
 Edge caching          Not well supported
Caching on the server: In-memory caching

The easiest (and most naïve) solution for caching involves saving components directly in memory. For simple apps, you can achieve this using a basic LRU cache (size-limited) and stringifying your components after they’re rendered. Figure 1 shows a timeline of using an in-memory cache. The first user to load a page gets a fully rendered (and slower) version of the page. This is also saved in the in-memory cache. All subsequent users get the cached version, until that page gets pushed out of the cache because the cache filled up.

Figure 1 In-memory caching allows some requests to benefit from faster response times.
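To make the eviction behavior concrete, here's a minimal sketch of how a size-limited LRU cache behaves, using a plain `Map` instead of the `lru-cache` package (the limit of 3 entries and the route names are arbitrary assumptions for illustration):

```javascript
// Minimal LRU cache sketch: a Map preserves insertion order, so the
// first key is always the least recently used one.
class SimpleLRU {
  constructor(max) {
    this.max = max;        // maximum number of entries to keep
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Re-insert to mark this key as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // Evict the least recently used entry (the first key in the Map).
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const cache = new SimpleLRU(3);
cache.set('/home', '<html>home</html>');
cache.set('/about', '<html>about</html>');
cache.set('/faq', '<html>faq</html>');
cache.get('/home');                            // touch /home so it's recently used
cache.set('/contact', '<html>contact</html>'); // evicts /about
```

The production library adds expiration times and size-based accounting on top of this, but the "first user pays, later users hit the cache until eviction" pattern is the same.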

Listing 1 shows how to add a simple caching module (abstracting this code makes it easier to update your caching strategy to match future needs). Add this code to a new file, cache.es6, in the shared directory.

Listing 1: Add an LRU in memory cache – src/shared/cache.es6

 import lru from 'lru-cache';                                           ❶
 // maxAge is in ms
 const cache = lru({                                                    ❷
   maxAge: 300000,                                                      ❸
   max: 500000000000,                                                   ❹
   length: (n) => {                                                     ❺
     // n = item passed in to be saved (value)
     return n.length * 100;
   }
 });
 export const set = (key, value) => {                                   ❻
   cache.set(key, value);
 };
 export const get = (key) => {                                          ❼
   return cache.get(key);
 };
 export default { set, get };

❶ Import the lru cache.

❷ Create the lru cache.

❸ maxAge sets a time-based expiration (in milliseconds) for values stored in the cache.

❹ max is the total allowed length of all items in the cache.

❺ length calculates the length of each value added, which counts against max.

❻ This is a public set method that sets the key/value pair on the cache.

❼ This is a public get method that retrieves a value based on a key from the cache.

Listing 2 shows how to take advantage of the caching module in renderView.jsx. Add the following code to the module. I recommend either using the caching logic or the streaming logic, but not both at the same time. If you want to cache and stream, you’ll need a different streaming implementation than the one shown in this article.

Listing 2: Save and fetch cached pages – src/middleware/renderView.jsx

 import cache from '../shared/cache.es6'; // at the top of the file
 //...more code
 const cachedPage = cache.get(req.url);                                 ❶
 if (cachedPage) {                                                      ❷
   return res.send(cachedPage);
 }
 const store = initRedux();
 //...more code
 Promise.all(promises).then(() => {
   //...more code
   cache.set(req.url, `${html}`);                                       ❸
   return res.send(`${html}`);
 });

❶ Try to retrieve the value from the cache.

❷ If the value exists, use it to respond to the request.

❸ If a full-page render is required, save the rendered page before responding to the request.

This works, but there are some problems with this strategy.

  1. This solution is simple – but what happens when the use cases get more complex? What happens as you start to add users? Or multiple languages? Or you have tens of thousands of pages? This methodology doesn’t scale well to these use cases.
  2. Reading from and writing to the in-memory cache is synchronous, blocking work in Node.js, which means that in trying to optimize performance with a cache, we're trading one problem for another.
  3. Finally, if you’re using a distributed scaling strategy to run your servers (which is common these days), the cache only applies to a single box or container (if using Docker). In this case, your server instances can’t share a common cache.

Next, we’ll look at another strategy: caching with Redis, which makes the caching asynchronous and non-blocking. We’ll also look at a smarter caching implementation that caches individual components, which scales better for more complex applications.

Caching on the server: Persisted Storage

The first isomorphic React app I worked on was written before Redux and React Router were stable, community best-choice libraries, so we decided to roll our own for a lot of the code. Combine that decision with React being slow on the server, and we needed a solution that would speed up server renders.

What we implemented was string storage of full pages in Redis. Storing full pages in Redis has significant tradeoffs for larger sites: we had the potential for millions of different entries to end up stored in Redis, and because fully stringified HTML pages add up fast, we used quite a bit of space.

Thankfully, the community has since come up with some improvements on this idea. Walmart Labs put out a library called electrode-react-ssr-caching, which makes it easy to cache your server-side renders. This library is powerful for a couple of reasons:

  1. The library comes with a profiler that tells you which components are most expensive on the server. This allows you to cache only the components you need.
  2. The library provides a way to template components, caching the rendered output and inserting the props later on.

In the long run, due to the number of pages we serve and the percentage of them served with 100% public-facing content, we ended up moving to an edge caching strategy instead. Your use case may benefit from the Walmart Labs approach.
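The templating idea can be sketched in plain JavaScript: render a component once with a placeholder prop, cache the resulting markup, and splice the real props back in on later requests. This is a simplified illustration of the concept, not the library's actual API (the `renderGreeting` function and the placeholder token are invented here):

```javascript
// Sketch of template-based component caching: pay the render cost once,
// then reuse the cached markup with real props substituted in.
const templateCache = new Map();

// A stand-in for an expensive server-side component render.
const renderGreeting = (props) => `<div class="greeting">Hello, ${props.name}!</div>`;

const PLACEHOLDER = '@@name@@';

function renderCached(componentName, render, props) {
  if (!templateCache.has(componentName)) {
    // Render once, using the placeholder as the prop value.
    templateCache.set(componentName, render({ name: PLACEHOLDER }));
  }
  // Later requests do a cheap string substitution instead of a render.
  return templateCache.get(componentName).replace(PLACEHOLDER, props.name);
}

renderCached('Greeting', renderGreeting, { name: 'Ada' });   // renders and caches
renderCached('Greeting', renderGreeting, { name: 'Grace' }); // cache hit
```

The real library handles nested components, multiple props, and cache-key generation, but the core trick is the same: the expensive part (rendering) runs once per component shape, not once per request.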

CDN/Edge strategies

Edge caching is the solution that we currently use for our isomorphic React app at work. This is due to some business logic needing to expire content on demand (when things change at other points in the system, like in a CMS tool). Modern CDNs like Fastly provide this capability out of the box and make it much easier to manage TTLs (time to live) and to force expire web pages. Figure 2 illustrates how this works.

Figure 2 Adding an edge server moves the caching in front of the server.
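While the CDN setup itself is out of scope, the server's side of the contract is mostly HTTP caching headers. Here's a hedged sketch assuming Fastly-style surrogate headers (the header names are the commonly documented ones, but check your CDN's documentation; the TTL and keys are invented for illustration):

```javascript
// Sketch: build caching headers for an edge-cacheable page.
// Surrogate-Control is read by the CDN; Cache-Control by browsers.
// Surrogate-Key tags the page so it can be purged when the CMS changes.
function edgeCacheHeaders({ ttlSeconds, surrogateKeys }) {
  return {
    'Surrogate-Control': `max-age=${ttlSeconds}`,
    'Cache-Control': 'no-store', // keep browsers from holding stale copies
    'Surrogate-Key': surrogateKeys.join(' ')
  };
}

const headers = edgeCacheHeaders({ ttlSeconds: 3600, surrogateKeys: ['article-42', 'homepage'] });
// In Express you'd apply these with res.set(headers), then purge by key
// (e.g. 'article-42') when that content changes in the CMS.
```

Tagging pages with keys is what makes the "expire content on demand" requirement workable: when the CMS updates article 42, you purge that one key instead of waiting for the TTL.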

Showing you how to implement this is outside the scope of this article. But if you have public-facing content that drives SEO (eCommerce, video sites, blogs, etc.), you'll definitely want a CDN in your stack.

One caveat with this approach is that it complicates user session management. The next section explores user sessions and covers the tradeoffs with various caching strategies.

User Session Management

Modern web applications use cookies in the browser almost without exception. Even if your main product isn’t directly using cookies, any ads, tracking, or other third-party tools you use on your site will take advantage of cookies. Cookies let the web app know that the same person has come back over time. Figure 3 illustrates how this works.

Figure 3 Repeat visits by the same user on the server. Saving cookies lets you store information about the user that can be retrieved during future sessions.

Listing 3 shows an example module that handles both browser and server cookie parsing for you. It uses universal-cookie to manage cookies in both environments. You need to install this library for the code to work:

 $ npm install --save universal-cookie

Add the code in the listing to a new module src/shared/cookies.es6.

Listing 3 Using Isomorphic Cookie Module – src/shared/cookies.es6

 import Cookie from 'universal-cookie';                                  ❶
 const initCookie = (reqHeaders) => {
   let cookies;
   if (process.env.BROWSER) {                                            ❷
     cookies = new Cookie();
   } else if (reqHeaders.cookie) {
     cookies = new Cookie(reqHeaders.cookie);                            ❸
   }
   return cookies;
 };
 export const get = (name, reqHeaders = {}) => {
   const cookies = initCookie(reqHeaders);                               ❹
   if (cookies) {
     return cookies.get(name);                                           ❺
   }
 };
 export const set = (name, value, opts, reqHeaders = {}) => {
   const cookies = initCookie(reqHeaders);                               ❹
   if (cookies) {
     return cookies.set(name, value, opts);                              ❻
   }
 };
 export default { get, set };

❶ Import the universal-cookie library, which handles the differences between accessing browser and server cookies for you.

❷ Check the environment to determine whether reqHeaders are needed.

❸ If the headers have cookies, pass them into the cookie constructor.

❹ In the getter and setter functions, initialize the cookie object, passing in reqHeaders to ensure that it works on the server.

❺ Return the result of the cookie lookup.

❻ Return the result of setting the cookie. In addition to a name and value, you can also pass in all standard cookie options. In most cases you'll call set from the browser.
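On the server there's no `document.cookie`; the cookies arrive in the request's `Cookie` header as a single string. Under the hood, a library like universal-cookie parses that header roughly like this simplified sketch (real cookie parsing also handles quoting and other edge cases, and the header value here is invented):

```javascript
// Simplified sketch of server-side cookie parsing: split the Cookie
// header into name/value pairs. Real libraries handle more edge cases.
function parseCookieHeader(header = '') {
  return header.split(';').reduce((acc, pair) => {
    const index = pair.indexOf('=');
    if (index === -1) return acc; // skip malformed fragments
    const name = pair.slice(0, index).trim();
    const value = decodeURIComponent(pair.slice(index + 1).trim());
    acc[name] = value;
    return acc;
  }, {});
}

const parsed = parseCookieHeader('userId=abc123; theme=dark');
```

This is why Listing 3 passes `reqHeaders.cookie` into the constructor on the server: the raw header string is the only source of cookie data in that environment.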

Now that you’ve added a way to get and set cookies in both environments, you need to be able to store that information on the app state to access it in a consistent way in your application.

Accessing cookies universally

By fetching cookies with an action, you can standardize how the app interacts with cookies. Listing 4 shows how to add a storeUserId action to fetch and store the user id. Add this code to the app-action-creators file.

Listing 4: accessing cookies on the server – src/shared/app-action-creators.es6

 import UAParser from 'ua-parser-js';
 import cookies from './cookies.es6';                                    ❶
 export const STORE_USER_ID = 'STORE_USER_ID';                           ❷
 export function parseUserAgent(requestHeaders) {}
 export function storeUserId(requestHeaders) {                           ❸
   const userId = cookies.get('userId', requestHeaders);                 ❹
   return {
     userId,                                                             ❺
     type: STORE_USER_ID
   };
 }
 export default { parseUserAgent, storeUserId };

❶ Import the cookie module.

❷ Add a type for the new action.

❸ Add the action, which takes in requestHeaders to ensure it works on the server.

❹ Pass the cookie name and requestHeaders to the cookie module.

❺ Put the userId value on the action.

Now you have access to the userId in your application! It’ll be fetched on the server and can be updated later in the browser as needed. You can apply this concept to any and all user session information. However, managing user sessions as a whole is outside of the scope of this article.
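To complete the loop, the reducer side is a plain switch on the action type. A minimal sketch, assuming a simple app-state shape with a `userId` field (the reducer name and initial state are assumptions, not the book's exact code):

```javascript
const STORE_USER_ID = 'STORE_USER_ID';

// Minimal reducer sketch: copy the userId from the action onto app state.
function appReducer(state = { userId: null }, action = {}) {
  switch (action.type) {
    case STORE_USER_ID:
      return { ...state, userId: action.userId };
    default:
      return state;
  }
}

const next = appReducer(undefined, { type: STORE_USER_ID, userId: 'abc123' });
```

Because the action creator works in both environments, the same reducer populates `userId` during the server render and on later client-side updates.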

Edge Caching + Users

When I first started building isomorphic applications, user management seemed simple. You use cookies to track user sessions in the browser like in a single-page application. Adding in the server complicates this, but you can read the cookies on the server. As you add in caching strategies this becomes less straightforward.

Both the in-memory and persisted storage caching strategies work better with user sessions, because each request still reaches the server, so the user's information can be gathered. You can include the user's identifying information in your cache key.
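For example, a composite cache key might combine the URL with the user and locale, so cached pages are never shared across users or languages. A small sketch (the key layout and field names are assumptions; include whatever fields actually vary your rendered output):

```javascript
// Sketch: build a cache key that varies by URL, user, and locale.
function buildCacheKey({ url, userId, locale }) {
  return [url, userId || 'anonymous', locale || 'en'].join('|');
}

const userKey = buildCacheKey({ url: '/account', userId: 'abc123', locale: 'de' });
const anonKey = buildCacheKey({ url: '/home' });
```

The tradeoff is cache efficiency: every extra field in the key multiplies the number of distinct entries, which is exactly the scaling problem noted earlier for per-user pages.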

Unfortunately, edge caching doesn’t work as well. This is because for each unique user, you must keep a unique copy of each page that has user-specific data on it. If you don’t, you could end up showing user 1’s information to user 2. This would be bad! Figure 4 illustrates this concept.

Figure 4 When the edge has to cache pages per user, the benefit of overlapping requests is lost.

If you need to use edge caching and you have user data, you can employ one or more of the following strategies (depending on your content type and your traffic patterns):

  • Create pages that either have user content, or general consumption content (public). Then only cache the pages that are public on your edge servers.
  • Save a cookie that tells the edge server if the user is in an active user session. Use this information to determine whether to serve a cached page or to send the request to the server (pass through).
  • Serve pages with placeholder content (solid shapes that show where content will load), then decide what content to load in the browser.
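The second strategy above amounts to a simple decision based on a session cookie. Whether that check runs in CDN configuration or on your server varies by vendor; here is the logic as a plain JavaScript sketch (the `session` cookie name and the `/account` path prefix are assumptions for illustration):

```javascript
// Sketch: decide whether the edge may serve a cached page or must pass
// the request through to the origin server.
function shouldServeFromCache({ cookieHeader = '', path }) {
  // An active session means the page may contain user-specific data.
  const hasSession = /(?:^|;\s*)session=/.test(cookieHeader);
  const isPublicPage = !path.startsWith('/account');
  // Only serve cached pages to anonymous users on public pages.
  return !hasSession && isPublicPage;
}

const anonymous = shouldServeFromCache({ cookieHeader: '', path: '/article/42' });
const loggedIn = shouldServeFromCache({ cookieHeader: 'session=xyz', path: '/article/42' });
```

Logged-in requests pass through to the origin, so they still get personalized renders, while the bulk of anonymous traffic benefits from the edge cache.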

Now you understand a good bit about caching!

If you want to learn more about the book, check it out on liveBook here and see this slide deck.