SSR Request Cache for React / Next.JS with Cacheable-Response
27 August 2019

This page describes how to build an in-memory request cache into a React or Next.JS application using cacheable-response. The example uses an Express server, but the approach can easily be applied to other React and Next.JS setups.

Problem

If you use ReactJS, many think first of developing a modern, "cool" application with a tech stack that is fun to work with. Few think that early about the live operation of the application. But at the latest when SEO and indexability come up, Next.JS enters the picture.

With Next.JS, React applications can be rendered isomorphically and delivered server-side (SSR). The user receives a fully rendered page before continuing with the usual API-based browsing.

But if you take a look at the performance of Next.JS you can see:

Next.JS is slow! (Especially for complex pages)

As a result, the SEO-relevant time to first byte (TTFB), i.e. the time that elapses before the web server transmits the first byte of the page, increases rapidly. In my projects this time was > 1.7 seconds, despite a powerful production server. Far too long.

In such a case, the problem would usually be solved at the infrastructure level by installing a proxy cache (e.g. nginx). In the following I would like to show the "React" way instead. This has the following advantages:

  • Easy testing of caching during development
  • Individual cache configuration based on Express or NextJS routes possible
  • No additional complexity – you can use the Express server as usual without having to deal with extra operational issues.
  • If required, the cache configuration can be easily managed via environment variables.
  • Deployment in containers remains simple and scalable
  • Fewer hops for incoming requests, since no additional reverse proxy is needed
  • Blueprint for additional caches (e.g. API request caches)

Solution

To include a request cache in React, proceed as follows:

  1. Set up a cache store / cache manager
  2. Exclude routes that must not be cached (e.g. everything under /_next/*)
  3. Route requests to be cached through the cache manager
  4. Set up URL-based cache purging
  5. Set up a clear-complete-cache function

Simple Solution

First you have to install the package:

npm install --save cacheable-response

Create a simple cache manager with cacheable-response. The function in the get block is executed whenever data is to be cached for the first time or cached again (i.e. on a miss or after expiry).

The send block describes the function for cache access. More information about cacheable-response can be found here: https://www.npmjs.com/package/cacheable-response
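The division of labour between ttl, get, and send can be illustrated with a small dependency-free stand-in. `makeCacheable` below is a hypothetical helper for illustration only, not the real cacheable-response API; it assumes a simple Map-based store keyed by URL:

```javascript
// Minimal sketch of the ttl/get/send contract, assuming a Map-based store.
// `makeCacheable` is a hypothetical helper, not the real cacheable-response API.
function makeCacheable({ ttl, get, send, store = new Map() }) {
    return async (opts) => {
        const key = opts.req.url;
        const hit = store.get(key);
        if (hit && Date.now() - hit.createdAt < ttl) {
            // Fresh entry: serve from the store without re-rendering
            opts.res.setHeader('X-Cache-Status', 'HIT');
            return send({ data: hit.data, res: opts.res });
        }
        // Miss or expired: run the expensive `get` (e.g. renderToHTML) and remember the result
        const { data } = await get(opts);
        store.set(key, { data, createdAt: Date.now() });
        opts.res.setHeader('X-Cache-Status', 'MISS');
        return send({ data, res: opts.res });
    };
}
```

The real library adds compression, pluggable stores, and proper HTTP cache headers on top of this idea, but the control flow is the same: `get` is only invoked when the store cannot answer.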


const express = require('express');
const next = require('next');
const cacheableResponse = require('cacheable-response')

const isDevEnvironment = process.env.NODE_ENV !== 'production'
const nextApp = next({dev: isDevEnvironment, dir: './src'});

const defaultRequestHandler = nextApp.getRequestHandler();

const cacheManager = cacheableResponse({
    ttl: 1000 * 60 * 60, // 1hour
    get: async ({req, res, pagePath, queryParams}) => {
        try {
            return {data: await nextApp.renderToHTML(req, res, pagePath, queryParams)}
        } catch (e) {
            return {data: "error: " + e}
        }
    },
    send: ({data, res}) => {
        res.send(data);
    }
});

nextApp.prepare()
[...]

Then you have to replace the render or handler command with the Cache Manager:


//server.get('*', (req, res) => app.render(req, res, '/index'));
//server.get('*', (req, res) => app.render(req, res, req.url, req.query));
//server.get('*', (req, res) => handle(req, res));

// Serving next data directly without the cache
server.get('/_next/*', (req, res) => {
    defaultRequestHandler(req, res);
});

server.get('*', (req, res) => {
    if (isDevEnvironment || req.query.noCache) {
        res.setHeader('X-Cache-Status', 'DISABLED');
        defaultRequestHandler(req, res);
    } else {
        cacheManager({req, res, pagePath: req.path});
    }
});

Important! pagePath refers to the respective Next.JS page (in this case 'pages/index.js').

The complete server.js will look like this:

const express = require('express');
const next = require('next');
const cacheableResponse = require('cacheable-response')

const isDevEnvironment = process.env.NODE_ENV !== 'production'
const nextApp = next({dev: isDevEnvironment, dir: './src'});

const defaultRequestHandler = nextApp.getRequestHandler();

const cacheManager = cacheableResponse({
    ttl: 1000 * 60 * 60, // 1hour
    get: async ({req, res, pagePath, queryParams}) => {
        try {
            return {data: await nextApp.renderToHTML(req, res, pagePath, queryParams)}
        } catch (e) {
            return {data: "error: " + e}
        }
    },
    send: ({data, res}) => {
        res.send(data);
    }
});

nextApp.prepare()
    .then(() => {
        const server = express();

        // Serving next data directly without the cache
        server.get('/_next/*', (req, res) => {
            defaultRequestHandler(req, res);
        });

        server.get('*', (req, res) => {
            if (isDevEnvironment || req.query.noCache) {
                res.setHeader('X-Cache-Status', 'DISABLED');
                defaultRequestHandler(req, res);
            } else {
                cacheManager({req, res, pagePath: req.path, queryParams: req.query});
            }
        });

        server.listen(3000, (err) => {
            if (err) throw err
            console.log('> Ready on http://localhost:3000')
        })
    })
    .catch((ex) => {
        console.error(ex.stack)
        process.exit(1)
    });

Complex Solution (with Clear Cache / Cache Purging)

Now the above example is extended by a few useful functions:

  • Custom key generator (the same algorithm is then used for purging) – this allows you to remove dedicated URLs from the cache
  • A manually managed cache store – so the cache can be controlled directly
  • Purging of single URLs
  • Deleting the entire cache
  • Compression of the cache to save memory

First, another dependency is installed:

npm install --save iltorb cacheable-response

In the following, cache entries are deleted with the HTTP method PURGE. An alternative (or addition) would be to delete the cache via a request parameter. However, the PURGE variant is the cleaner way.
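Assuming the server from the listing below is running on localhost:3000, the purge routes can then be exercised with curl (the paths here are examples):

```shell
# Purge a single URL from the cache
# (the server derives the key with the same generator used on write)
curl -X PURGE http://localhost:3000/some/page

# Clear the entire cache via the clearCache query parameter
curl -X PURGE "http://localhost:3000/?clearCache=true"
```

Both calls answer with a small JSON body confirming what was purged, as implemented in clearCacheForRequestUrl and clearCompleteCache below.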


const express = require('express');
const next = require('next');
const Keyv = require('keyv');
const {resolve: urlResolve} = require('url');
const normalizeUrl = require('normalize-url');
const cacheableResponse = require('cacheable-response');


const isDevEnvironment = process.env.NODE_ENV !== 'production';

const nextApp = next({dev: isDevEnvironment, dir: './src'});

const defaultRequestHandler = nextApp.getRequestHandler();

const cacheStore = new Keyv({namespace: 'ssr-cache'});

const _getSSRCacheKey = req => {
    const url = urlResolve('http://localhost', req.url);
    const {origin} = new URL(url);
    const baseKey = normalizeUrl(url, {
        removeQueryParameters: [
            'embed',
            'filter',
            'force',
            'proxy',
            'ref',
            /^utm_\w+/i
        ]
    });
    return baseKey.replace(origin, '').replace('/?', '')
};

const cacheManager = cacheableResponse({
    ttl: 1000 * 60 * 60, // 1hour
    get: async ({req, res, pagePath, queryParams}) => {
        try {
            return {data: await nextApp.renderToHTML(req, res, pagePath, queryParams)}
        } catch (e) {
            return {data: "error: " + e}
        }
    },
    send: ({data, res}) => {
        res.send(data);
    },
    cache: cacheStore,
    getKey: _getSSRCacheKey,
    compress: true
});

function clearCompleteCache(res, req) {
    cacheStore.clear();
    res.status(200);
    res.send({
        path: req.hostname + req.baseUrl,
        purged: true,
        clearedCompleteCache: true
    });
    res.end();
}

function clearCacheForRequestUrl(req, res) {
    let key = _getSSRCacheKey(req);
    console.log(key);
    cacheStore.delete(key);
    res.status(200);
    res.send({
        path: req.hostname + req.baseUrl + req.path,
        key: key,
        purged: true,
        clearedCompleteCache: false
    });
    res.end();
}

nextApp.prepare()
    .then(() => {
        const server = express();

        // Do not use caching for _next files
        server.get('/_next/*', (req, res) => {
            defaultRequestHandler(req, res);
        });

        server.get('*', (req, res) => {
            if (isDevEnvironment || req.query.noCache) {
                res.setHeader('X-Cache-Status', 'DISABLED');
                defaultRequestHandler(req, res);
            } else {
                cacheManager({req, res, pagePath: req.path});
            }
        });

        server.purge('*', (req, res) => {
            if (req.query.clearCache) {
                clearCompleteCache(res, req);
            } else {
                clearCacheForRequestUrl(req, res);
            }
        });

        server.listen(3000, (err) => {
            if (err) throw err;
            console.log('> Ready on http://localhost:3000')
        })
    })
    .catch((ex) => {
        console.error(ex.stack);
        process.exit(1)
    });

To test whether the cache works, you can look at the response headers. There should be an entry "X-Cache-Status: MISS" on the first call, "X-Cache-Status: HIT" on the second, and "X-Cache-Status: DISABLED" in the DEV environment or if the parameter "?noCache=true" was specified.
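A quick check from the command line, assuming the server is running locally on port 3000:

```shell
# First request: expect X-Cache-Status: MISS
curl -sI http://localhost:3000/ | grep -i x-cache-status

# Second request for the same URL: expect X-Cache-Status: HIT
curl -sI http://localhost:3000/ | grep -i x-cache-status

# Bypass via query parameter: expect X-Cache-Status: DISABLED
curl -sI "http://localhost:3000/?noCache=true" | grep -i x-cache-status
```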

Advantages and Disadvantages

With cacheable-response you have the advantage that a lot of boilerplate code is taken off your hands.

The disadvantage is that you have no control over the size of the cache. Especially for large pages with a complex sitemap, the memory consumption can increase considerably.
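To illustrate what bounding the cache size would look like, here is a minimal LRU store sketch. It relies on the insertion order of a JavaScript Map and is meant as an illustration of the idea, not a production implementation:

```javascript
// Minimal LRU store sketch: evicts the least recently used entry
// once `maxEntries` is reached. Map iteration order is insertion order,
// so re-inserting a key on access keeps recently used keys at the end.
class LruStore {
    constructor(maxEntries = 100) {
        this.maxEntries = maxEntries;
        this.map = new Map();
    }
    get(key) {
        if (!this.map.has(key)) return undefined;
        const value = this.map.get(key);
        this.map.delete(key);       // move key to the "most recent" end
        this.map.set(key, value);
        return value;
    }
    set(key, value) {
        if (this.map.has(key)) this.map.delete(key);
        this.map.set(key, value);
        if (this.map.size > this.maxEntries) {
            // evict the least recently used entry (first key in insertion order)
            this.map.delete(this.map.keys().next().value);
        }
    }
}
```

With such a store, the number of cached pages (and thus the memory footprint) has a hard upper bound, regardless of how large the sitemap is.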

In another article I will describe how to solve this problem with a more complex approach using an LRU cache. A simple variant can also be found here: Speeding up next.js application (server side in-memory caching via LRUcache).

Bottom Line

Even though I usually prefer the simpler solution, I think an implementation with an LRU cache makes more sense here. In particular, the ability to configure the size of the cache is in my eyes a strength, as it reduces the risk of memory leaks through caching.

If memory doesn’t play a role (e.g. because there is enough of it, or the application isn’t big enough to run into trouble here), a request cache with cacheable-response is my first choice because of its simple and comfortable usability.

I hope I was able to help someone 😁
