Node.js Caching Masterclass: From In-Memory to Redis & Memcached

Jeff Taakey
21+ Year CTO & Multi-Cloud Architect

If you are building high-traffic Node.js applications, you already know the golden rule: the fastest database query is the one you never make.

As we settle into 2026, the demand for sub-millisecond latency hasn’t just increased; it has become the baseline. The performance standards established throughout 2025 proved that users have zero patience for sluggish APIs. Whether you are building a real-time fintech dashboard or a high-concurrency e-commerce backend, caching is not an “optional optimization”—it is a fundamental architectural requirement.

In this guide, we are going deep into the three pillars of Node.js caching: In-Memory (Process-level), Redis, and Memcached. We won’t just look at syntax; we will look at strategy, implementation, and the specific trade-offs of each approach in a production environment.

Prerequisites & Environment Setup

Before we write code, let’s ensure your environment is ready. We are assuming you are running a modern Node.js version (v20 LTS or v22 Current).

You will need:

  1. Node.js: v20+ recommended.
  2. Docker: To spin up Redis and Memcached instances quickly without messy local installations.
  3. HTTP Client: Postman, curl, or VS Code Thunder Client for testing endpoints.

Project Initialization

Let’s set up a standard Express project structure. Open your terminal:

mkdir node-caching-strategies
cd node-caching-strategies
npm init -y
npm install express node-cache ioredis memjs
npm install -D nodemon

We will simulate a “slow” database operation in our examples to demonstrate the dramatic impact of caching.
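
Optionally, add a few nodemon scripts to the generated package.json so each example restarts on save. These scripts are our own convenience addition; the file names match the ones we create later in this guide:

"scripts": {
  "dev:memory": "nodemon in-memory.js",
  "dev:redis": "nodemon redis-cache.js",
  "dev:memcached": "nodemon memcached-cache.js"
}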


1. The Low-Hanging Fruit: In-Memory Caching

In-memory caching stores data directly in the Node.js process’s heap. It is the fastest possible cache because it involves no network overhead—no TCP handshake, no serialization over the wire. It’s just a variable lookup.

When to use it?

  • Configuration data that rarely changes.
  • Small datasets (lookup tables).
  • Single-instance applications (monoliths).

Implementation with node-cache

While you could use a native JavaScript Map, node-cache handles Time-To-Live (TTL) and key eviction automatically, which keeps stale entries from accumulating and leaking memory.

Create a file named in-memory.js:

const express = require('express');
const NodeCache = require('node-cache');

const app = express();
const port = 3000;

// Initialize cache with a standard TTL of 10 seconds
const myCache = new NodeCache({ stdTTL: 10 });

// Simulate a slow database call (e.g., complex aggregation)
const simulateHeavyQuery = async () => {
  return new Promise((resolve) => {
    setTimeout(() => {
      resolve({ data: "Expensive Data", timestamp: Date.now() });
    }, 2000); // 2 second delay
  });
};

app.get('/data', async (req, res) => {
  const key = 'heavy_query_result';

  // 1. Check Cache
  const cachedData = myCache.get(key);
  
  if (cachedData) {
    console.log('⚡ Cache Hit');
    return res.json({ source: 'cache', ...cachedData });
  }

  // 2. Cache Miss - Query DB
  console.log('🐌 Cache Miss - Fetching from DB...');
  const result = await simulateHeavyQuery();

  // 3. Set Cache
  myCache.set(key, result);

  return res.json({ source: 'database', ...result });
});

app.listen(port, () => {
  console.log(`In-memory cache server running on port ${port}`);
});
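
Run node in-memory.js (or npm run dev:memory if you added the scripts above) and hit the endpoint twice in a row. The first request pays the full 2-second penalty; the second returns instantly until the 10-second TTL expires:

curl http://localhost:3000/data   # ~2s, "source": "database"
curl http://localhost:3000/data   # instant, "source": "cache"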

The Trap: In-memory caching is stateful. If you deploy this API across a cluster of 4 instances (using PM2 or Kubernetes), each instance has its own empty cache. User A might hit Instance 1 (cache miss), and User B might hit Instance 2 (cache miss again). This leads to cache inconsistency and wasted memory.


2. The Industry Standard: Redis

Redis (Remote Dictionary Server) is the de facto standard for distributed caching in the Node.js ecosystem. Unlike in-memory caching, Redis runs as a separate service. All your Node.js instances talk to this single source of truth.

Why Redis?

  • Data Structures: Supports Strings, Hashes, Lists, Sets, Sorted Sets, and more.
  • Persistence: Can save data to disk (AOF/RDB) so a reboot doesn’t wipe the cache.
  • Pub/Sub: Excellent for real-time features.

Setting up Redis with Docker

Create a docker-compose.yml file in your root directory:

version: '3.8'
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

Run docker-compose up -d to start it.
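
To confirm the container is accepting connections before wiring up Node, ping it with the bundled CLI (redis here is the service name from the Compose file above):

docker-compose exec redis redis-cli ping
# PONG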

Implementation with ioredis

We prefer ioredis over the standard redis client for its automatic reconnection handling, built-in Cluster and Sentinel support, and first-class Promise API.

Create redis-cache.js:

const express = require('express');
const Redis = require('ioredis');

const app = express();
const port = 3001;

// Connect to local Redis instance
const redis = new Redis({
  host: 'localhost',
  port: 6379,
  // Retry strategy usually goes here for production
});

const simulateHeavyQuery = async () => {
  return new Promise((resolve) => setTimeout(() => {
    resolve({ id: 101, status: "active", revenue: 9999 });
  }, 2000));
};

app.get('/stats', async (req, res) => {
  const cacheKey = 'user:stats:101';

  try {
    // 1. Try to get data from Redis
    const cachedResult = await redis.get(cacheKey);

    if (cachedResult) {
      console.log('⚡ Redis Hit');
      // Redis stores strings, so we must parse JSON
      return res.json({ source: 'redis', data: JSON.parse(cachedResult) });
    }

    console.log('🐌 Redis Miss');
    const result = await simulateHeavyQuery();

    // 2. Save to Redis with Expiry (SETEX equivalent)
    // 'EX' stands for seconds. We cache for 60 seconds.
    await redis.set(cacheKey, JSON.stringify(result), 'EX', 60);

    return res.json({ source: 'database', data: result });

  } catch (error) {
    console.error('Redis Error:', error);
    // Fallback to DB if Redis fails to ensure availability
    const result = await simulateHeavyQuery();
    return res.json({ source: 'database_fallback', data: result });
  }
});

app.listen(port, () => {
  console.log(`Redis cache server running on port ${port}`);
});
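
Once a request to /stats has populated the cache, you can inspect the entry and its remaining lifetime straight from the Redis CLI (the key matches cacheKey in the code above):

docker-compose exec redis redis-cli GET user:stats:101
docker-compose exec redis redis-cli TTL user:stats:101   # seconds remaining, 60 or less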

3. The Lightweight Contender: Memcached

Before Redis took over the world, Memcached was king. It is still heavily used by giants like Meta and Twitter for specific use cases. It is a pure, multithreaded key-value store.

Redis vs. Memcached (The Nuance)

Memcached is multithreaded, whereas Redis is (mostly) single-threaded. Because Memcached can use every core on a single box, it can sometimes outperform a single Redis instance on raw throughput for small, static values. However, it lacks advanced data structures and persistence.

Setting up Memcached

Add this to your existing docker-compose.yml:

  memcached:
    image: memcached:alpine
    ports:
      - "11211:11211"

Restart your containers: docker-compose up -d.
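
Memcached ships no CLI of its own, but if netcat is installed on your host you can speak its plain-text protocol directly to verify the port is live:

printf 'version\r\n' | nc -w 1 localhost 11211
# VERSION 1.6.x (exact version depends on the image)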

Implementation with memjs

Create memcached-cache.js:

const express = require('express');
const memjs = require('memjs');

const app = express();
const port = 3002;

// Connect to Memcached
const mc = memjs.Client.create('localhost:11211');

const simulateHeavyQuery = async () => {
  return new Promise((resolve) => setTimeout(() => {
    resolve({ page: "home", content: "<html>...</html>" });
  }, 2000));
};

app.get('/page', async (req, res) => {
  const key = 'page_home_content';

  // 1. Get from Memcached
  // With no callback, memjs returns a Promise that resolves to { value, flags },
  // so we can destructure the value directly.
  const { value } = await mc.get(key);

  if (value) {
    console.log('⚡ Memcached Hit');
    // Value returns as a Buffer, need to convert to String then JSON
    return res.json({ source: 'memcached', data: JSON.parse(value.toString()) });
  }

  console.log('🐌 Memcached Miss');
  const result = await simulateHeavyQuery();

  // 2. Set to Memcached
  // options: { expires: seconds }
  await mc.set(key, JSON.stringify(result), { expires: 60 });

  return res.json({ source: 'database', data: result });
});

app.listen(port, () => {
  console.log(`Memcached server running on port ${port}`);
});

Visualizing the Strategy

Regardless of the tool (Redis or Memcached), the architectural pattern remains the “Cache-Aside” (or Lazy Loading) strategy. Here is how the data flows in a typical production environment.

flowchart TD
    Client([User / Client])
    API[Node.js API]
    Cache[(Cache Layer\nRedis/Memcached)]
    DB[(Primary DB\nPostgres/Mongo)]

    Client -- Request Data --> API
    API -- 1. Check Key --> Cache
    Cache -- "2a. Hit (Data Exists)" --> API
    API -- 3a. Return Cached Data --> Client
    Cache -- "2b. Miss (No Data)" --> API
    API -- 3b. Query Database --> DB
    DB -- 4. Return Payload --> API
    API -- "5. Write to Cache (+TTL)" --> Cache
    API -- 6. Return Fresh Data --> Client

    style Client fill:#f9f,stroke:#333,stroke-width:2px
    style API fill:#bbf,stroke:#333,stroke-width:2px
    style Cache fill:#d4edda,stroke:#28a745,stroke-width:2px
    style DB fill:#fff3cd,stroke:#ffc107,stroke-width:2px

Comparison: Choosing the Right Tool

Choosing between these three isn’t about which is “best,” but which fits your architecture.

| Feature | In-Memory (node-cache) | Redis | Memcached |
| --- | --- | --- | --- |
| Speed | 🚀 Ultra fast (nanoseconds) | ⚡ Very fast (network latency) | ⚡ Very fast (network latency) |
| Scalability | Low (bound to a single process) | High (Cluster, Sentinel) | High (easy horizontal scaling) |
| Data Types | Any JS object | Strings, Hashes, Lists, Sets | Strings/binary only |
| Persistence | None (lost on restart) | Yes (RDB / AOF) | None |
| Complexity | Zero | Medium | Low |
| Best For | Local configs, tiny apps | Session stores, queues, complex caching | HTML fragments, simple object caching |

Advanced Patterns & Pitfalls

At the senior level, plain get and set calls aren't enough. You need to be aware of these production killers.

1. Cache Stampede (The Thundering Herd)

Imagine a popular cache key expires exactly when 1,000 users request it simultaneously.

  • Result: All 1,000 requests miss the cache. All 1,000 hit your database at the exact same millisecond.
  • Solution: Use Probabilistic Early Expiration (refresh the cache slightly before it actually expires) or implement a locking mechanism (mutex) so only one process updates the cache, as sketched below.
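
Here is a minimal sketch of that mutex using ioredis and the atomic SET ... NX EX command. The key names, the 10-second lock TTL, and the single 200ms retry are illustrative choices, not production-tuned values:

const Redis = require('ioredis');
const redis = new Redis();

async function getWithLock(cacheKey, fetchFromDb) {
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  // NX: only set if the lock key doesn't exist; EX 10: auto-release after 10s
  const lockKey = `lock:${cacheKey}`;
  const gotLock = await redis.set(lockKey, '1', 'EX', 10, 'NX');

  if (gotLock) {
    try {
      const fresh = await fetchFromDb();      // only the lock holder hits the DB
      await redis.set(cacheKey, JSON.stringify(fresh), 'EX', 60);
      return fresh;
    } finally {
      await redis.del(lockKey);               // release as soon as we're done
    }
  }

  // Lost the race: wait briefly, then re-read the now-warm cache
  await new Promise((resolve) => setTimeout(resolve, 200));
  const retried = await redis.get(cacheKey);
  return retried ? JSON.parse(retried) : fetchFromDb(); // last-resort fallback
}

In production you would loop the retry with backoff and delete the lock only if you still own it; the Next Steps section below suggests exactly that exercise.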

2. Cache Penetration

This happens when malicious users request keys that don’t exist in your database (e.g., id: -1). The cache misses, the DB checks and finds nothing, and the cycle repeats, hammering your DB.

  • Solution: Cache the “null” result for a short time (e.g., 30 seconds), as sketched below, or use a Bloom Filter to check if an ID exists before hitting the cache/DB.
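
A minimal sketch of that negative caching with ioredis. The __NULL__ sentinel, the 30-second window, and db.findUserById are illustrative stand-ins for your own values and data layer:

// Assumes `redis` is an ioredis client and `db.findUserById` is your own DB helper
async function findUser(redis, db, id) {
  const key = `user:${id}`;
  const cached = await redis.get(key);

  if (cached === '__NULL__') return null;          // known-missing: stop here
  if (cached) return JSON.parse(cached);

  const row = await db.findUserById(id);
  if (!row) {
    await redis.set(key, '__NULL__', 'EX', 30);    // short TTL for "not found"
    return null;
  }

  await redis.set(key, JSON.stringify(row), 'EX', 60);
  return row;
}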

3. Serialization Overhead

In Node.js, JSON.stringify and JSON.parse are synchronous and CPU-intensive. If you are caching massive objects (1MB+), parsing them blocks the Event Loop.

  • Optimization: For large datasets, consider using optimized serialization libraries (like msgpack) or storing data as Hashes in Redis so you only retrieve the specific fields you need (see the sketch below).
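
For example, storing a record as a Redis Hash lets you read back one field without deserializing the whole object. A quick sketch with ioredis (the key and field names are illustrative):

const Redis = require('ioredis');
const redis = new Redis();

async function demo() {
  // Write each field separately instead of one big JSON string
  await redis.hset('user:101', { name: 'Ada', plan: 'pro', revenue: '9999' });
  await redis.expire('user:101', 60);

  // Fetch only the field you need: no JSON.parse of a megabyte blob
  const plan = await redis.hget('user:101', 'plan');
  console.log(plan); // "pro"
}

demo().finally(() => redis.quit());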

Conclusion

In the landscape of 2026, caching is your first line of defense against latency.

  • Start with Redis if you need a robust, general-purpose distributed cache. It is the safest bet for 95% of Node.js applications.
  • Use In-Memory caching sparingly, strictly for static configuration or within serverless functions where external connections add too much latency.
  • Consider Memcached only if you are managing a massive scale, read-heavy system where multi-threaded simplicity outweighs feature richness.

Next Steps: Take the SET NX lock sketch from the Cache Stampede section and harden it: add retry with backoff, and make the unlock safe by deleting the lock key only if you still own it. It’s a great exercise to sharpen your system design skills.

Happy Coding!