A high-performance, strongly-typed caching library for Node.js. Supports in-memory LRU and TTL caches, metadata, and persistent backends (SQLite, Redis). Designed for reliability, flexibility, and modern TypeScript/ESM workflows.
- ⚡️ Fast in-memory LRU and TTL caches
- 🗄️ Persistent cache with SQLite and Redis backends
- 🏷️ Metadata support for all entries
- 📏 Size and entry count limits
- 🧑‍💻 100% TypeScript, ESM & CJS compatible
- 🧪 Simple, robust API for all Node.js projects
```bash
npm i @stephen-shopopop/cache
```

For Redis support, you also need to install iovalkey:

```bash
npm i iovalkey
```

This library requires no special configuration for basic usage.
- Node.js >= 20.17.0
- Compatible with both ESM (`import`) and CommonJS (`require`)
- TypeScript types included
- SQLiteCacheStore available on Node.js > 20.x
- RedisCacheStore requires the `iovalkey` package to be installed separately
```ts
// ESM
import { LRUCache } from '@stephen-shopopop/cache';
```

```js
// CommonJS
const { LRUCache } = require('@stephen-shopopop/cache');
```

Full API documentation is available here: 📚 Generated Docs
A fast in-memory Least Recently Used (LRU) cache. Removes the least recently used item when the maximum size is reached.
- Constructor: `new LRUCache<K, V>({ maxSize?: number })`
- Methods:
  - `set(key, value)`: Add or update a value
  - `get(key)`: Retrieve a value
  - `delete(key)`: Remove a key
  - `clear()`: Clear the cache
  - `has(key)`: Check if a key exists
  - `size`: Number of items
LRU cache with automatic expiration (TTL) for entries. Combines LRU eviction and time-based expiration.
- Constructor: `new LRUCacheWithTTL<K, V>({ maxSize?: number, ttl?: number, stayAlive?: boolean, cleanupInterval?: number })`
- Methods:
  - `set(key, value, ttl?)`: Add a value with optional TTL
  - `get(key)`: Retrieve a value (or undefined if expired)
  - `delete(key)`: Remove a key
  - `clear()`: Clear the cache
  - `has(key)`: Check if a key exists
  - `size`: Number of items
In-memory cache with LRU policy, supports max size, max entry size, max number of entries, and associated metadata.
- Constructor: `new MemoryCacheStore<K, Metadata>({ maxCount?: number, maxEntrySize?: number, maxSize?: number })`
- Methods:
  - `set(key, value, metadata?)`: Add a value (string or Buffer) with metadata
  - `get(key)`: Retrieve `{ value, metadata, size }` or undefined
  - `delete(key)`: Remove a key
  - `clear()`: Clear the cache
  - `has(key)`: Check if a key exists
  - `size`: Number of items
  - `byteSize`: Total size in bytes
Persistent cache using SQLite as backend, supports metadata, TTL, entry size and count limits.
- Constructor: `new SQLiteCacheStore<Metadata>({ filename?: string, maxEntrySize?: number, maxCount?: number, timeout?: number })`
- Methods:
  - `set(key, value, metadata?, ttl?)`: Add a value (string or Buffer) with metadata and optional TTL
  - `get(key)`: Retrieve `{ value, metadata }` or undefined
  - `delete(key)`: Remove a key
  - `size`: Number of items
  - `close()`: Close the database connection
Note: SQLiteCacheStore methods may throw errors related to SQLite (connection, query, file access, etc.). It is the responsibility of the user to handle these errors (e.g., with try/catch) according to their application's needs. The library does not catch or wrap SQLite errors by design.
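Because SQLite errors propagate unwrapped, a typical call site guards cache operations with try/catch. In this sketch, `store` is a hypothetical stand-in that throws the way a locked database might; it is not the library's API:

```typescript
// Hypothetical stand-in that throws like SQLiteCacheStore would
// when an operation fails (e.g. the database file is locked).
const store = {
  set(_key: string, _value: string): void {
    throw new Error('SQLITE_BUSY: database is locked');
  }
};

let fallbackUsed = false;
try {
  store.set('a', 'value');
} catch {
  // The library does not wrap errors: inspect and decide here,
  // e.g. skip the cache and hit the origin instead.
  fallbackUsed = true;
}
```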
Distributed cache based on Redis, supports persistence, TTL, metadata, and entry size/count limits.
Prerequisites: install the `iovalkey` package:

```bash
npm i iovalkey
```
- Constructor: `new RedisCacheStore<Metadata>({ url?: string, maxEntrySize?: number, maxCount?: number, ttl?: number, namespace?: string, redisOptions?: object })`
- Methods:
  - `set(key, value, metadata?, ttl?)`: Add a value (string or Buffer) with optional metadata and TTL
  - `get(key)`: Retrieve `{ value, metadata }` or undefined
  - `delete(key)`: Remove a key
  - `close()`: Close the Redis connection
Note: RedisCacheStore requires an accessible Redis server and the `iovalkey` package. Connection or operation errors are thrown as-is.
- `maxSize`: max number of items (LRUCache, LRUCacheWithTTL); max total size in bytes (MemoryCacheStore)
- `maxCount`: max number of entries (MemoryCacheStore)
- `maxEntrySize`: max size of a single entry (MemoryCacheStore)
- `ttl`: time to live in ms (LRUCacheWithTTL)
- `cleanupInterval`: automatic cleanup interval (LRUCacheWithTTL)
- `stayAlive`: keep the timer active (LRUCacheWithTTL)
- `filename`: SQLite database file name (SQLiteCacheStore)
- `timeout`: SQLite operation timeout in ms (SQLiteCacheStore)
- `url`: Redis server URL (RedisCacheStore)
- `namespace`: key namespace (RedisCacheStore)
- `redisOptions`: additional options for the Redis client (RedisCacheStore)
```ts
import {
  LRUCache,
  LRUCacheWithTTL,
  MemoryCacheStore,
  SQLiteCacheStore,
  RedisCacheStore
} from '@stephen-shopopop/cache';

const lru = new LRUCache({ maxSize: 100 });
lru.set('a', 1);

const lruTtl = new LRUCacheWithTTL({ maxSize: 100, ttl: 60000 });
lruTtl.set('a', 1);

const mem = new MemoryCacheStore({ maxCount: 10, maxEntrySize: 1024 });
mem.set('a', 'value', { meta: 123 });

const sqlite = new SQLiteCacheStore({ filename: 'cache.db', maxEntrySize: 1024 });
sqlite.set('a', 'value', { meta: 123 }, 60000);
const result = sqlite.get('a');

const redis = new RedisCacheStore({ url: 'redis://localhost:6379', namespace: 'mycache:' });
await redis.set('a', 'value', { meta: 123 }, 60000);
const redisResult = await redis.get('a');
```

```
[Most Recent]    [ ... ]    [Least Recent]
head <-> node <-> ... <-> tail
 |                         |
 +---> {key,value}         +---> {key,value}

Eviction: when maxSize is reached, 'tail' is removed (least recently used)
Access: accessed node is moved to 'head' (most recently used)
```

```
+-----------------------------+
|       MemoryCacheStore      |
+-----------------------------+
| #data: LRUCache<K, Value>   |
| #maxCount                   |
| #maxEntrySize               |
| #maxSize                    |
| #size                       |
+-----------------------------+
   |
   +---> [maxCount, maxEntrySize, maxSize] constraints
   |
   +---> LRUCache (internal):
           head <-> node <-> ... <-> tail
           (evicts least recently used)

Each entry:
{
  key: K,
  value: string | Buffer,
  metadata: object,
  size: number (bytes)
}
```

Eviction: when maxCount or maxSize is reached, oldest/oversized entries are removed.
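The LRU mechanics in the diagrams above (move to head on access, evict the tail on overflow) can be sketched with a plain `Map`, which iterates in insertion order. This `TinyLRU` is illustrative only, not the library's internal implementation:

```typescript
// Minimal LRU sketch: a Map's insertion order stands in for the
// doubly linked list shown in the diagram.
class TinyLRU<K, V> {
  private data = new Map<K, V>();
  constructor(private maxSize: number) {}

  get(key: K): V | undefined {
    if (!this.data.has(key)) return undefined;
    const value = this.data.get(key) as V;
    // "Move to head": re-insert so the entry becomes most recently used
    this.data.delete(key);
    this.data.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.data.has(key)) this.data.delete(key);
    this.data.set(key, value);
    if (this.data.size > this.maxSize) {
      // "Evict tail": the oldest Map entry is the least recently used
      const lruKey = this.data.keys().next().value as K;
      this.data.delete(lruKey);
    }
  }

  get size(): number {
    return this.data.size;
  }
}
```

This mirrors the documented behavior: reading a key refreshes it, and inserting beyond `maxSize` drops the least recently used entry.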
```
+-----------------------------+
|      SQLiteCacheStore       |
+-----------------------------+
| #db: SQLite database        |
| #maxCount                   |
| #maxEntrySize               |
| #timeout                    |
+-----------------------------+
   |
   +---> [SQLite file: cache.db]
           |
           +---> Table: cache_entries
                 +-------------------------------+
                 | key | value | metadata | ttl  |
                 +-------------------------------+

Each entry:
{
  key: string,
  value: string | Buffer,
  metadata: object,
  ttl: number (ms, optional)
}
```

Eviction: when maxCount or maxEntrySize is reached, or the TTL expires, entries are deleted from the table.
Persistence: all data is stored on disk in the SQLite file.
```
+-----------------------------+
|       LRUCacheWithTTL       |
+-----------------------------+
| #data: LRUCache<K, Entry>   |
| #ttl                        |
| #cleanupInterval            |
| #timer                      |
+-----------------------------+
   |
   +---> LRUCache (internal):
           head <-> node <-> ... <-> tail
           (evicts least recently used)

Each entry:
{
  key: K,
  value: V,
  expiresAt: number (timestamp, ms)
}
```

Expiration: entries are removed when their TTL expires (checked on access or by the cleanup timer).
Eviction: the LRU policy applies when maxSize is reached.
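The check-on-access expiration described above can be sketched in a few lines (hypothetical helper names, not the library API):

```typescript
// Lazy TTL expiration sketch: entries carry an absolute expiry timestamp
// and are discarded when read after that time.
type Entry<V> = { value: V; expiresAt: number };

const store = new Map<string, Entry<number>>();

function setWithTTL(key: string, value: number, ttlMs: number): void {
  store.set(key, { value, expiresAt: Date.now() + ttlMs });
}

function getIfFresh(key: string): number | undefined {
  const entry = store.get(key);
  if (entry === undefined) return undefined;
  if (Date.now() >= entry.expiresAt) {
    // Lazy expiration: remove the stale entry on access
    store.delete(key);
    return undefined;
  }
  return entry.value;
}
```

A background cleanup timer (as `cleanupInterval` configures) would simply sweep the map periodically applying the same `expiresAt` check.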
```
+-----------------------------+
|       RedisCacheStore       |
+-----------------------------+
| #client: Redis client       |
| #maxSize                    |
| #maxCount                   |
| #maxEntrySize               |
| #ttl                        |
+-----------------------------+
   |
   +---> [Redis server]
           |
           +---> Key: {keyPrefix}{key}
                 Value: JSON.stringify({ value, metadata })
                 TTL: Redis expire (ms)

Each entry:
{
  key: string,
  value: string | Buffer,
  metadata: object,
  ttl: number (ms, optional)
}
```

Expiration: handled by Redis via TTL.
Eviction: handled by Redis according to its memory policy.
Persistence: depends on Redis configuration (AOF, RDB, etc.).
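The `JSON.stringify({ value, metadata })` storage layout above can be sketched as an encode/decode pair (hypothetical names, not the library's API):

```typescript
// Round-trip sketch of the Redis value layout: value and metadata
// are packed into a single JSON string per key.
interface Stored<M> {
  value: string;
  metadata: M;
}

function encode<M>(value: string, metadata: M): string {
  return JSON.stringify({ value, metadata });
}

function decode<M>(raw: string): Stored<M> {
  return JSON.parse(raw) as Stored<M>;
}

const raw = encode('hello', { tag: 'greeting' });
const back = decode<{ tag: string }>(raw);
```

Note that Buffers would need an extra encoding step (e.g. base64) before JSON serialization; this sketch only covers string values.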
- API response caching: Reduce latency and external API calls by caching HTTP responses in memory or on disk.
- Session storage: Store user sessions or tokens with TTL for automatic expiration.
- File or image cache: Cache processed files, images, or buffers with size limits.
- Metadata tagging: Attach custom metadata (timestamps, user info, tags) to each cache entry for advanced logic.
- Persistent job queue: Use SQLiteCacheStore to persist jobs or tasks between server restarts.
- Rate limiting: Track and limit user actions over time using TTL-based caches.
- Temporary feature flags: Store and expire feature flags or toggles dynamically.
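As an illustration of the rate-limiting use case above, here is a minimal fixed-window limiter built on the same TTL idea (a hypothetical sketch, not part of the library):

```typescript
// Fixed-window rate limiter sketch: each user gets a counter that
// "expires" at the end of its window, like a TTL cache entry.
const hits = new Map<string, { count: number; resetAt: number }>();

function allow(userId: string, limit: number, windowMs: number): boolean {
  const now = Date.now();
  const entry = hits.get(userId);
  if (entry === undefined || now >= entry.resetAt) {
    // New window: first hit, counter resets at now + windowMs
    hits.set(userId, { count: 1, resetAt: now + windowMs });
    return true;
  }
  entry.count += 1;
  return entry.count <= limit;
}
```

With the library, the map would be replaced by an `LRUCacheWithTTL` whose `ttl` plays the role of the window.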
Note: Results below are indicative and may vary depending on your hardware and Node.js version. Run `npm run bench` for up-to-date results on your machine.
| Store | set (ops/s) | get (ops/s) | delete (ops/s) | complex workflow (ops/s) |
|---|---|---|---|---|
| LRUCache | 1,220,000 | 2,030,000 | 1,190,000 | 675,000 |
| LRUCacheWithTTL | 1,060,000 | 1,830,000 | 1,030,000 | 615,000 |
| MemoryCacheStore | 1,120,000 | 1,910,000 | 182,000 | 305,000 |
| RedisCacheStore | 28,000 | 39,000 | 33,000 | 16,500 |
| SQLiteCacheStore (mem) | 121,000 | 442,000 | 141,000 | 52,500 |
| SQLiteCacheStore (file) | 51,000 | 49,000 | 137,000 | 46,500 |
Bench run on Apple M1, Node.js 24.7.0, `npm run bench`. Complex workflow = set, get, update, delete, hit/miss, TTL, metadata.
How are ops/s calculated?
For each operation, the benchmark reports the average time per operation (e.g. 1.87 µs/iter).
To get the number of operations per second (ops/s), we use:
ops/s = 1 / (average time per operation in seconds)
Example: if the bench reports 856.45 ns/iter, then:
- 856.45 ns = 0.00000085645 seconds
- ops/s = 1 / 0.00000085645 ā 1,168,000
All values in the table are calculated this way and rounded for readability.
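The conversion described above fits in one function; assuming a benchmark that reports nanoseconds per iteration:

```typescript
// ops/s = 1 / (average time per operation in seconds)
function nsPerIterToOpsPerSec(nsPerIter: number): number {
  return 1 / (nsPerIter * 1e-9);
}

// 856.45 ns/iter works out to roughly 1.17 million ops/s
const example = nsPerIterToOpsPerSec(856.45);
```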
Each backend has different performance characteristics and is suited for different use cases:
| Backend | Typical use case | Max ops/s (indicative) | Latency (typical) | Notes |
|---|---|---|---|---|
| LRUCache | Hot-path, ultra-fast in-memory | >1,200,000 | <2µs | No persistence, no TTL |
| LRUCacheWithTTL | In-memory with expiration | >1,000,000 | <2µs | TTL adds slight overhead |
| MemoryCacheStore | In-memory, metadata, size limit | ~1,100,000 | <2µs | Metadata, size/count limits |
| SQLiteCacheStore (mem) | Fast, ephemeral persistence | ~120,000 | ~10µs | Data lost on restart |
| SQLiteCacheStore (file) | Durable persistence | ~50,000 | ~20–50µs | Disk I/O, best for cold data |
| RedisCacheStore | Distributed, persistent cache | ~27,000 | ~40–100µs | Network I/O, Redis server, async API |
Guidance:
- Use LRUCache/LRUCacheWithTTL for ultra-low-latency, high-throughput scenarios (API cache, session, etc.).
- Use MemoryCacheStore if you need metadata or strict size limits.
- Use SQLiteCacheStore (memory) for fast, non-persistent cache across processes.
- Use SQLiteCacheStore (file) for persistent cache, but expect higher latency due to disk I/O.
- Use RedisCacheStore for distributed caching, multi-process sharing, and when Redis features or persistence are needed.
Numbers are indicative, measured on Apple M1, Node.js 24.x. Always benchmark on your own hardware for production sizing.
SQLite is a disk-based database. Even with optimizations (WAL, memory temp store), disk I/O and serialization add latency compared to pure in-memory caches. For ultra-low-latency needs, use LRUCache or MemoryCacheStore.
You can instrument the library using `diagnostics_channel` (Node.js). Future versions may provide built-in hooks. For now, you can wrap cache methods or use `diagnostics_channel` in your own code to publish events on cache operations.
This warning is from Node.js itself (v20+). SQLite support is stable for most use cases, but the API may change in future Node.js versions. Follow Node.js release notes for updates.
All errors from SQLite (connection, query, file access) are thrown as-is. You should use try/catch around your cache operations and handle errors according to your application's needs.
Yes, but persistent caches (SQLiteCacheStore with file) may not be suitable for ephemeral file systems. Use in-memory caches for stateless/serverless workloads.
Want to contribute to this library? Thank you! Here's what you need to know to get started:
- Node.js >= 20.17.0
- pnpm or npm (package manager)
- TypeScript (strictly typed everywhere)
```bash
git clone https://github.com/stephen-shopopop/node-cache.git
cd node-cache
pnpm install # or npm install
```

Available scripts:

- `npm run build`: build TypeScript (ESM + CJS via tsup)
- `npm run test`: run all tests (node:test)
- `npm run lint`: check lint (biome)
- `npm run format`: format code
- `npm run check`: type check
- `npm run bench`: run benchmarks
- `npm run docs`: generate documentation (TypeDoc)
- `src/library/`: main source code (all cache classes)
- `src/index.ts`: entry point
- `test/`: all unit tests (node:test)
- `bench/`: benchmarks (mitata)
- `docs/`: generated documentation
- Follow the style: semicolons, single quotes, arrow functions for callbacks
- Avoid nested ternary operators
- Always add tests for any new feature or bugfix (see example below)
- Use clear, conventional commit messages (see Conventional Commits)
- PRs and code reviews are welcome in French or English
```ts
import test from 'node:test';
import type { TestContext } from 'node:test';
import { LRUCache } from '../src/library/LRUCache.js';

test('LRUCache basic set/get', (t: TestContext) => {
  // Arrange
  const cache = new LRUCache({ maxSize: 2 });

  // Act
  cache.set('a', 1);

  // Assert
  t.assert.strictEqual(cache.get('a'), 1);
});
```

Before submitting a PR:

- Make sure all tests pass (`npm run test`)
- Check lint and formatting (`npm run lint && npm run format`)
- Check coverage (`npm run coverage`)
- Add/complete documentation if needed
- Clearly describe your contribution in the PR
- Use clear, conventional commit messages
- If your change impacts users, update the README and/or documentation
- Releases are tagged and published manually by the maintainer. If you want to help with releases, open an issue or PR.
- Open an issue or contact the maintainer via GitHub.
- See pull requests for ongoing work.