I recently came across a problem while integrating a third-party API into my Nuxt application. The service used a MongoDB database to store its data, and the API was very slow to respond to read operations because of the high latency between the server and the database, and the large amount of data that needed to be fetched and aggregated.
When facing this kind of issue, we have a few options:
- We can try to optimize the database queries to make them faster, which in general is achieved by using indexes and precomputing the aggregations (spending more time on write operations to save on read operations). But sometimes the backend isn't under our control, and we can't do much about it.
- We can distribute the database so it's closer to the server, which, again, is not always possible or easy to do.
- Or we can cache the results of our API calls to avoid hitting the database every time a user requests the same data. That is what we will cover in this article.
In this article, we will create a simple blog application, with a blog post listing page and a blog post details page, with latency on read operations to simulate the problem. We will also create an API to publish new articles and edit existing ones, so we can test our cache invalidation.
You will find a working demo on StackBlitz at the end of the article.
What are Nuxt and Nitro?
First of all, let's start by defining what Nuxt and Nitro are.
Nuxt is a framework for building universal applications with Vue on the frontend, but it also provides a lot of features on the backend, like server middleware, server routing, server cache, etc. The framework is built from a lot of small pieces that, when combined, let us build our applications the way we want, with a lot of flexibility. We will use some of them, like h3, nitro and unstorage, to build our application.
Those pieces are independent projects that can be used separately, but they are also designed to work together. They are part of the UnJS (Unified JavaScript) ecosystem, maintained by the Nuxt core team, which aims to provide a universal JavaScript ecosystem that can run on any platform (e.g. Node.js, Deno, Cloudflare Workers, etc.). As an example, H3 is a universal wrapper to handle HTTP requests and responses, Unstorage is a universal wrapper to handle key/value storage, and Nitro is a universal server framework that glues everything together.
Nitro is like Express, but not only for Node.js, it's for any platform, with super powers!
I recommend you check the UnJS GitHub organization and the UnJS blog to learn more about the different projects that are part of the UnJS ecosystem.
You can also watch the Pooya Parsa (pi0) talk on the vision for the future of Nitro at Nuxt Nation 2023 to learn more.
Power of Unstorage
So, now we know that Unstorage is a universal wrapper to handle key/value storage, but what does it mean?
It provides a unified way to handle key/value storage, no matter where the storage is located. It can be in memory, on the file system, in a Redis database, using Cloudflare Workers KV, etc.
The API is very simple: we can easily get, set and remove items from storage, no matter where it is located. We can even use the same API to handle multiple storages at the same time: it can be configured with multiple storage drivers, without having to change the code that uses it.
This is a basic example adapted from the readme (we will cover more advanced use cases later in this article):
import { createStorage } from "unstorage";

const storage = createStorage(/* opts */);

await storage.setItem("foo:bar", "baz");
await storage.getItem("foo:bar"); // "baz"
await storage.removeItem("foo:bar");
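We can also mount additional drivers on a key prefix. Here is a minimal sketch, assuming a data: prefix backed by the file system (the layout is just an illustration):
import { createStorage } from "unstorage";
import fsDriver from "unstorage/drivers/fs";

const storage = createStorage(); // in-memory by default

// keys under "data:" are persisted to ./data on disk,
// everything else stays in memory
storage.mount("data", fsDriver({ base: "./data" }));

await storage.setItem("data:articles:1", { title: "Hello" });
await storage.getItem("foo"); // still served from memory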
You can find all available storage drivers on the Unstorage documentation.
What is Nitro Cache API?
You may be asking yourself, why are we talking about Unstorage? Well, Nitro Cache API is built on top of Unstorage, so it's important to understand how it works before we can dive into Nitro's caching features.
The API is very simple: we can wrap any function call with cachedFunction, or we can replace any event handler with defineCachedEventHandler. The cache is stored in Unstorage, so we can use any storage driver we want!
We can also use Route Rules configuration in order to wrap our existing event handlers, so they can cache their results as well. It uses the same API under the hood.
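For example, a minimal sketch of such a rule in nuxt.config.ts (the route and the max age are placeholders):
export default defineNuxtConfig({
  routeRules: {
    // cache responses of this route for one hour
    '/api/articles': { cache: { maxAge: 60 * 60 } },
  },
});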
By default, the cache is stored in memory in production, and in .nuxt/cache during development. However, we can configure it to use any storage driver we want. We will see later in this article how and why we would want to do that.
The problem with cache
Cache is a great way to improve performance by avoiding network and computation costs, but it can also be a source of problems if not used correctly.
We need to understand what cache is and how it works in order to avoid common pitfalls. There are basically two types of cache:
- client-side cache, stored in the browser
  - used to avoid fetching the same data again over the network
  - works via the response headers sent by the server
  - useful to improve performance for the same user, but it doesn't help other users
- server-side cache, stored on the server
  - can be done in multiple ways depending on the needs:
    - in-application cache
    - via a proxy (e.g. HAProxy, Nginx, Cloudflare, etc.)
    - etc.
  - reduces the load on the server
  - useful to improve performance for all users
  - we should be careful not to cache sensitive data, like user data
We won't cover client-side cache in this article, as it could be a topic for another one (let me know in the comments if you are interested).
So what problems can we face when using server side cache?
Well, the data might be cached forever, and we might never see the most recently added content. A simple solution is to set an expiration time on the cache. If we know the data won't change for a long time, we can set a long expiration time, but if the data changes just after the cache has been refreshed, we will have to wait for the next expiration to get the latest content.
We can also have the opposite problem: the cache may be invalidated too often, and we will have to fetch the data again and again, which increases the load on the server and reduces performance for users. Our cache becomes basically useless.
What solution do we have to solve these problems?
That's where cache invalidation comes into play: it's the process of removing cached entries when the data changes, so we can serve the latest content as soon as possible.
Set up the case
Here we are: we know what cache is, how it works, and what problems we can face when using it, so let's set up our case.
First, we need to set up a new Nuxt project, which we can do with the following command:
npx nuxi@latest init <project-name>
Create our CRUD operations
For this example we will simulate an API that takes a bit of time to respond to read operations.
In order to do this, we will use the unstorage package and create a simple set of CRUD operations. We won't cover delete operations in this example, as they are similar to edit operations.
I won't go into details about this part, as it's not the focus of this article, but here is the code to access our data:
export interface Article {
id: string;
publishedAt: string;
editedAt?: string;
title: string;
content: string;
}
export const getArticle = async (id: string) => {
// fake long read operation
await new Promise((resolve) => setTimeout(resolve, 2000));
const storage = useStorage('data:articles');
return await storage.getItem<Article>(id);
};
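The other helpers follow the same pattern. Here is a minimal sketch of getArticles and publishArticle as they are used later in this article (the exact implementation is an assumption; only the signatures matter):
export const getArticles = async () => {
  // fake long read operation
  await new Promise((resolve) => setTimeout(resolve, 2000));
  const storage = useStorage('data:articles');
  const ids = await storage.getKeys();
  return Promise.all(ids.map((id) => storage.getItem<Article>(id)));
};

export const publishArticle = async (article: Article) => {
  const storage = useStorage('data:articles');
  await storage.setItem(article.id, article);
  return article;
};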
Defining these in the ~/server/utils directory will allow us to use them anywhere in our server code, thanks to auto import!
Exposing our API endpoints
Now let's create our API in order to reproduce the problem we are trying to solve.
Read operations
These are the read operations that take time to respond. We will cache them to improve performance.
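// e.g. ~/server/api/articles/index.ts (the file path is an assumption)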
export default defineEventHandler(() => getArticles());
Write operations
These are the write operations that should invalidate the cache when they are called.
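// e.g. ~/server/api/publish.ts (the file path is an assumption)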
export default defineEventHandler<{
body: {
title?: string;
content?: string;
};
}>(async (event) => {
assertMethod(event, 'POST');
const body = await readBody(event);
// validate body
// publish new article to backend
const article = await publishArticle({
id: (Math.random() + 1).toString(32).substring(2, 9),
publishedAt: new Date().toISOString(),
title: body.title,
content: body.content,
});
// tip: always return something, otherwise it will return a not found error
return article;
});
Note that defineEventHandler, assertMethod, readBody and getRouterParam are part of the h3 package from the UnJS ecosystem.
At this point we are able to perform HTTP requests to our API endpoints. You can test this by running the following command in your terminal:
curl http://localhost:3000/api/articles
We can post new articles with the following command:
curl \
-X POST \
-H "Content-Type: application/json" \
-d '{"title":"My new article","content":"This is my new article"}' \
http://localhost:3000/api/publish
We get the article id that we can then use to get the article details:
curl http://localhost:3000/api/articles/ut1hp4c
Each time we try to read articles, it will take 2 seconds to respond.
Adding basic cache
Let's use Nitro Cache API to improve our read operations.
We can replace defineEventHandler with defineCachedEventHandler. We will use it on our read-all-articles API endpoint.
export default defineCachedEventHandler(() => getArticles(), {
maxAge: 60 * 60, // subsequent requests will be cached for 1 hour
});
We will also use cachedFunction to cache our getArticle function, so we cover both of them:
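// e.g. ~/server/api/articles/[id].ts (the file path is an assumption)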
const cachedGetArticle = cachedFunction(getArticle, {
maxAge: 60 * 60, // subsequent calls will be cached for 1 hour
})
export default defineEventHandler(async (event) => {
const id = getRouterParam(event, 'id') as string;
return cachedGetArticle(id);
});
Note that we could also use defineCachedEventHandler here, but this shows how to deal with cachedFunction as well.
If we try to read articles, it will take 2 seconds to respond, but if we try again, it will respond instantly, because the result is cached for 1 hour. However, if we try to publish a new article and hit refresh on the blog post listing page and ... Shoot, nothing happens! The new article doesn't appear in the list because the cache is still valid for the next hour.
How to invalidate your cache
Since we are in development mode, the cache is stored in the .nuxt/cache directory; let's see what's inside:
.nuxt/cache/
├── nitro/
│ ├── functions/
│ │ └── _/
│ │ └── zWL6TlhLDy.json
│ ├── handlers/
│ │ └── _/
│ │ └── apiarticles.vyRqF74NFr.json
We can see that there are two directories: one is functions (for cachedFunction) and one is handlers (for defineCachedEventHandler), pretty straightforward. Each of them contains a file with a (not so) random name. The file contains the result of the function or the handler. The entire path is the cache key. This is how Unstorage's file system driver works.
If we remove the files, the cache will be invalidated, and the next request will take 2 seconds to respond with the latest content. You can notice that the files are recreated with the same name. This is because by default, the cache key is generated from the function or the handler name, namespace and arguments. Since we are using the same function and handler name, the cache key is the same.
Simple approach
We know that the Nitro cache is stored in the cache:nitro storage, so we can use useStorage to get it and remove all the keys.
export default defineEventHandler(async (event) => {
  // ...
  // get the nitro cache storage
  const cacheStorage = useStorage('cache:nitro');
  const cachedKeys = await cacheStorage.getKeys();
  // naively remove all cached content
  await Promise.all(cachedKeys.map((key) => cacheStorage.removeItem(key)));
});
But wait, this is not a good idea!
We don't want to invalidate the entire cache, but only the cache for the blog post listing page.
Predictable approach
Let's use the Cache API options in order to generate a predictable cache key. In the documentation, we can see that we have group, name and a getKey function that takes the same arguments as the wrapped function. We will use them to generate a predictable cache key.
By default, group is nitro/handlers for defineCachedEventHandler and nitro/functions for cachedFunction.
The final key is composed as follows: cache:<group>:<name>:<key>.json, where <key> is generated from the getKey function.
Generate a predictable cache key
export default defineCachedEventHandler(() => getArticles(), {
maxAge: 60 * 60, // subsequent requests will be cached for 1 hour
// cache:blog:articles:all.json
group: 'blog',
name: 'articles',
getKey: () => 'all',
});
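The cached getArticle function gets the same treatment. Here is a sketch that would produce the articles-id entries shown in the file tree below (the name is an assumption derived from that tree):
const cachedGetArticle = cachedFunction(getArticle, {
  maxAge: 60 * 60,
  // cache:blog:articles-id:<id>.json
  group: 'blog',
  name: 'articles-id',
  getKey: (id: string) => id,
});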
Remember that the getKey function has the same arguments as the wrapped function when using cachedFunction, and it receives the event when using defineCachedEventHandler, which we can use to include query parameters in the cache key, for example.
We can see how the cache is stored now in the file system:
.nuxt/cache/
├── blog/
│ ├── articles-id/
│ │ └── abcd.json
│ └── articles/
│ └── all.json
Invalidate the cache
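// e.g. ~/server/api/publish.ts (same handler as before, now also invalidating the cache)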
export default defineEventHandler<{
body: {
title?: string;
content?: string;
};
}>(async (event) => {
assertMethod(event, 'POST');
const body = await readBody(event);
// validate body
// publish new article to backend
const article = await publishArticle({
id: (Math.random() + 1).toString(32).substring(2, 9),
publishedAt: new Date().toISOString(),
title: body.title,
content: body.content,
});
// invalidate the cache
const cacheStorage = useStorage('cache:blog');
await cacheStorage.removeItem('articles:all.json');
return article;
});
Much better! Now we invalidate only the cache that we want to invalidate.
We can now safely increase our cache expiration time, as we know that the cache will be invalidated when we publish or edit an article.
If you need to generate keys with multiple arguments, check the ohash package from the UnJS ecosystem!
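For example, here is a sketch that hashes the query parameters into the key, assuming our listing endpoint accepted filters via the query string:
import { hash } from 'ohash';

export default defineCachedEventHandler(() => getArticles(), {
  maxAge: 60 * 60,
  group: 'blog',
  name: 'articles',
  // one cache entry per unique combination of query parameters
  getKey: (event) => hash(getQuery(event)),
});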
Going further
Nitro stores the cache in memory in production by default. That means the cache is lost when the server restarts, and if we have a lot to cache, we can run out of memory. We can save the cache on the file system, but if you want to scale your application, you will have to share the cache between all the instances, which is not easy to do.
If you don't plan to scale, I recommend taking a look at the LRU cache driver.
Let's set up the cache:blog storage in our Nuxt config:
export default defineNuxtConfig({
  nitro: {
    storage: {
      // Nitro resolves storage drivers by name at build time,
      // so we reference the builtin lru-cache driver instead of
      // instantiating it ourselves
      'cache:blog': {
        driver: 'lruCache',
        max: 1000, // keep at most 1000 entries in memory
      },
    },
  },
});
If you plan to scale, or to go worldwide, you will need to distribute your cache on the edge, so it's closer to your users. This can be done using Cloudflare Workers KV driver!
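For example, with the Cloudflare KV binding driver, the mount could look like this (the BLOG_CACHE binding name is a placeholder):
export default defineNuxtConfig({
  nitro: {
    storage: {
      'cache:blog': {
        driver: 'cloudflareKVBinding',
        // name of the KV namespace binding configured for the worker
        binding: 'BLOG_CACHE',
      },
    },
  },
});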
That's it! We have seen how to use Nitro Cache API to cache our read operations and how to invalidate the cache when we perform write operations. We have also seen how to use Unstorage to store our cache on the file system, in memory or in Cloudflare Workers KV.
I hope you enjoyed this article, and that it will help you improve the performance of your applications! Be aware that cache is not a silver bullet, and it can be a source of problems if not used correctly.
Let me know if you have any questions or feedback in the comments or on Discord.
Demo
View on StackBlitz