
Microservice Caching Guide

The MicroserviceClientService supports Redis-based caching for microservice responses. This feature reduces network calls, improves response times, and decreases load on downstream services.

  • Automatic cache key generation based on command + payload hash
  • Configurable TTL (Time To Live) per request
  • Cache hit/miss logging for monitoring
  • Graceful degradation — cache errors don’t break requests
  • Payload normalization — ensures consistent cache keys
  • Custom key prefixes for better organization
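
The examples below pass a cache options object as the last argument to `sendWithContext`. Its shape can be sketched as an interface; the name `MicroserviceCacheOptions` and the field comments are inferred from the examples in this guide, not taken from the library's actual definition.

```typescript
// Hypothetical shape of the cache options accepted by sendWithContext,
// inferred from the examples in this guide (not the library's real type).
interface MicroserviceCacheOptions {
  enabled: boolean;   // opt in to caching per request
  ttl?: number;       // time to live in seconds; the examples default to 300 (5 minutes)
  keyPrefix?: string; // e.g. 'resource:fetch'; defaults to a command-derived prefix
}

// Usage mirrors the snippets below:
const cacheOptions: MicroserviceCacheOptions = {
  enabled: true,
  ttl: 600,
  keyPrefix: 'user:profile',
};
```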

// No caching - always calls the microservice
const resource = await this.microserviceClient.sendWithContext<Resource>(
  this.logger,
  this.dataOwnerClient,
  { cmd: AppMicroservice.DataOwner.cmd.GetResourceById },
  { id: resourceId },
  null,
);

// Cache for 5 minutes (default TTL)
const resource = await this.microserviceClient.sendWithContext<Resource>(
  this.logger,
  this.dataOwnerClient,
  { cmd: AppMicroservice.DataOwner.cmd.GetResourceById },
  { id: resourceId },
  null,
  { enabled: true }, // ✅ Enable caching
);

// Cache for 10 minutes (600 seconds)
const resource = await this.microserviceClient.sendWithContext<Resource>(
  this.logger,
  this.dataOwnerClient,
  { cmd: AppMicroservice.DataOwner.cmd.GetResourceById },
  { id: resourceId },
  null,
  {
    enabled: true,
    ttl: 600, // ✅ 10 minutes
  },
);

// Use custom prefix for better cache organization
const resource = await this.microserviceClient.sendWithContext<Resource>(
  this.logger,
  this.dataOwnerClient,
  { cmd: AppMicroservice.DataOwner.cmd.GetResourceById },
  { id: resourceId },
  null,
  {
    enabled: true,
    ttl: 300,
    keyPrefix: 'resource:fetch', // ✅ Custom prefix
  },
);
// Cache key: microservice:resource:fetch:<payload_hash>

Cache keys are automatically generated using this pattern:

microservice:<prefix>:<payload_hash>
// Command: 'data-owner.resource.getById'
// Payload: { id: '123' }
// Key: microservice:data-owner:resource:getById:5f8d0e4e6c1b4e2a3f4d5c6b
// With custom prefix: 'resource:data'
// Key: microservice:resource:data:5f8d0e4e6c1b4e2a3f4d5c6b
  1. Normalization: Payload keys are sorted alphabetically, undefined values removed
  2. Hashing: SHA-256 hash of normalized JSON
  3. Truncation: First 24 characters used for brevity

Same payload = Same hash (deterministic)

// These generate the SAME cache key:
{ id: '123', name: 'John' }
{ name: 'John', id: '123' } // ✅ Order doesn't matter
// These generate DIFFERENT cache keys:
{ id: '123' }
{ id: '456' } // ❌ Different values
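
The three steps above (normalize, hash, truncate) can be sketched with Node's built-in `crypto` module. The function names `normalize` and `buildCacheKey` are illustrative, not the library's actual implementation, and this sketch sorts top-level keys only.

```typescript
import { createHash } from 'node:crypto';

// Illustrative sketch of the key-generation steps described above.
// Sorts top-level keys alphabetically and drops undefined values.
function normalize(payload: Record<string, unknown>): Record<string, unknown> {
  return Object.keys(payload)
    .sort()
    .filter((k) => payload[k] !== undefined)
    .reduce<Record<string, unknown>>((acc, k) => {
      acc[k] = payload[k];
      return acc;
    }, {});
}

function buildCacheKey(prefix: string, payload: Record<string, unknown>): string {
  const hash = createHash('sha256')
    .update(JSON.stringify(normalize(payload)))
    .digest('hex')
    .slice(0, 24); // first 24 characters for brevity
  return `microservice:${prefix}:${hash}`;
}

// Key order does not affect the resulting cache key:
const a = buildCacheKey('resource:fetch', { id: '123', name: 'John' });
const b = buildCacheKey('resource:fetch', { name: 'John', id: '123' });
console.log(a === b); // true
```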

Example 1: Fetching Resource Data (Frequently Accessed)

@Injectable({ scope: Scope.REQUEST })
export class ResourcesService {
  constructor(
    private readonly logger: LogsService,
    private readonly microserviceClient: MicroserviceClientService,
    // Assumed repository dependency (used in findOne but not shown in the original snippet)
    private readonly resourceRepository: ResourceRepository,
    @Inject(AppMicroservice.SystemAdmin.name)
    private readonly usersService: ClientProxy,
  ) {}

  async findOne(id: string): Promise<Resource> {
    const resource = await this.resourceRepository.findOne(id);

    // ✅ Cache user data for 10 minutes (frequently accessed, rarely changes)
    if (resource.created_by) {
      resource.created_by = await this.microserviceClient.sendWithContext<Partial<IUser>>(
        this.logger,
        this.usersService,
        { cmd: AppMicroservice.SystemAdmin.UserResources.cmd.FindCreatedBy },
        { id: resource.created_by?.toString() || '' },
        null,
        {
          enabled: true,
          ttl: 600, // 10 minutes
          keyPrefix: 'user:profile',
        },
      );
    }
    return resource;
  }
}
@Injectable({ scope: Scope.REQUEST })
export class EngagementsService {
  // Constructor injections (logger, microserviceClient, masterDataClient) omitted for brevity

  async getCoverageScheme(schemeCode: string) {
    // ✅ Cache master data for 1 hour (rarely changes)
    const scheme = await this.microserviceClient.sendWithContext<CoverageScheme>(
      this.logger,
      this.masterDataClient,
      { cmd: AppMicroservice.MasterData.cmd.GetCoverageSchemeByCode },
      { scheme_code: schemeCode },
      null,
      {
        enabled: true,
        ttl: 3600, // 1 hour
        keyPrefix: 'masterdata:coverage',
      },
    );
    return scheme;
  }
}
@Injectable({ scope: Scope.REQUEST })
export class ResourcesService {
  // Constructor injections (logger, microserviceClient, storagesService) omitted for brevity

  async getResourceAttachments(attachmentIds: string[]) {
    // ✅ Cache file URLs for 5 minutes (short-lived due to security)
    const attachments = await this.microserviceClient.sendWithContext<AttachmentPayload[]>(
      this.logger,
      this.storagesService,
      { cmd: AppMicroservice.Storage.cmd.GetPath },
      { images: attachmentIds },
      [],
      {
        enabled: true,
        ttl: 300, // 5 minutes
        keyPrefix: 'storage:urls',
      },
    );
    return attachments;
  }
}
@Injectable({ scope: Scope.REQUEST })
export class OrdersService {
  // Constructor injections (logger, microserviceClient, queueService) omitted for brevity

  async getCurrentQueue() {
    // ❌ Don't cache real-time data
    const queue = await this.microserviceClient.sendWithContext<QueueData>(
      this.logger,
      this.queueService,
      { cmd: 'get_current_queue' },
      { department_id: 'OPERATIONS' },
      null,
      // No cache options - always fetch fresh data
    );
    return queue;
  }
}

| Data Type | TTL Suggestion | Reason |
| --- | --- | --- |
| User profiles | 10-30 minutes | Rarely change during session |
| Master data (titles, regions, coverage schemes) | 1-24 hours | Very rarely change |
| Configuration data | 1 hour | Static during operation |
| Lookup tables | 30 minutes - 1 hour | Infrequently updated |
| File/Attachment URLs | 5-10 minutes | Valid for short period |
| Computed aggregations (if idempotent) | 5-15 minutes | Expensive to calculate |
| Data Type | Why Not Cache |
| --- | --- |
| Real-time data (queue status, live counts) | Data changes constantly |
| Transactional data (orders, payments) | Must always be current |
| User session data | Already in Redis via JWT |
| Sensitive data (passwords, tokens) | Security risk |
| Large datasets (paginated lists) | Memory overhead |
| POST/PUT/DELETE operations | State-changing operations |

The caching system provides detailed structured logs:

// Cache Hit (data retrieved from cache)
{
  "level": "info",
  "message": "[Cache Hit] Retrieved cached data for command 'get_resource_by_id'",
  "context": {
    "action": "CACHE_HIT_GET_RESOURCE_BY_ID",
    "correlation_id": "abc-123",
    "cache_key": "microservice:get_resource:5f8d0e4e6c1b"
  }
}

// Cache Miss (not in cache, calling microservice)
{
  "level": "debug",
  "message": "[Cache Miss] No cached data found for command 'get_resource_by_id'",
  "context": {
    "action": "CACHE_MISS_GET_RESOURCE_BY_ID",
    "correlation_id": "abc-123",
    "cache_key": "microservice:get_resource:5f8d0e4e6c1b"
  }
}

// Cache Set (storing response in cache)
{
  "level": "debug",
  "message": "[Cache Set] Cached response for command 'get_resource_by_id'",
  "context": {
    "action": "CACHE_SET_GET_RESOURCE_BY_ID",
    "correlation_id": "abc-123",
    "cache_key": "microservice:get_resource:5f8d0e4e6c1b",
    "ttl_seconds": 300
  }
}

// Cache Error (Redis error, but request continues)
{
  "level": "warn",
  "message": "[Cache Error] Failed to retrieve cache for command 'get_resource_by_id'",
  "context": {
    "action": "CACHE_ERROR_GET_RESOURCE_BY_ID",
    "correlation_id": "abc-123",
    "error": "Connection timeout"
  }
}
  1. Cache Hit Rate: (Cache Hits) / (Cache Hits + Cache Misses) * 100%
  2. Average Response Time: Compare cached vs uncached requests
  3. Cache Size: Monitor Redis memory usage
  4. TTL Effectiveness: Check if data expires before reuse
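
The hit-rate formula from metric 1 is straightforward to compute from counters you might collect, for example by tallying `CACHE_HIT_*` and `CACHE_MISS_*` actions in the logs above. The helper below is illustrative:

```typescript
// Cache hit rate as a percentage: hits / (hits + misses) * 100.
// Guard against division by zero when no requests have been seen yet.
function cacheHitRate(hits: number, misses: number): number {
  const total = hits + misses;
  return total === 0 ? 0 : (hits / total) * 100;
}

console.log(cacheHitRate(850, 150)); // 85
```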

// ❌ BAD: TTL too long for frequently changing data
{ enabled: true, ttl: 86400 } // 24 hours for user status
// ✅ GOOD: Reasonable TTL based on data volatility
{ enabled: true, ttl: 300 } // 5 minutes for user status
// ❌ BAD: Generic prefix
{ enabled: true, keyPrefix: 'data' }
// ✅ GOOD: Specific prefix for easy debugging
{ enabled: true, keyPrefix: 'resource:coverage:schemes' }

The system automatically skips caching if:

  • Result is null
  • Result is undefined
// If microservice returns null, it won't be cached
const resource = await sendWithContext(..., null, { enabled: true });
// null result → not cached → next request will call microservice again ✅

Cache errors never break your requests. The system:

  • Logs the error
  • Continues with microservice call
  • Returns the microservice result
// Even if Redis is down, this works:
const resource = await sendWithContext(..., { enabled: true });
// Redis down → logs warning → calls microservice → returns result ✅
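
The fallback behavior can be sketched as a small helper. The function name, the minimal client type, and the log format are illustrative, not the real implementation; the point is only the try/catch shape: a cache failure logs a warning and falls through to the microservice call.

```typescript
// Minimal structural type so the sketch isn't tied to a specific Redis client.
type CacheReader = { get(key: string): Promise<string | null> };

// Illustrative graceful-degradation pattern: cache errors never fail the request.
async function getWithCacheFallback<T>(
  cache: CacheReader,
  key: string,
  callMicroservice: () => Promise<T>,
): Promise<T> {
  try {
    const cached = await cache.get(key);
    if (cached !== null) {
      return JSON.parse(cached) as T; // cache hit
    }
  } catch (err) {
    // Cache error: log a warning and fall through to the real call.
    console.warn(`[Cache Error] ${(err as Error).message}`);
  }
  return callMicroservice(); // cache miss or cache error
}
```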
// ✅ GOOD: Cache GET operations
{ cmd: 'get_resource_by_id' } // Cache this
// ❌ BAD: Don't cache state-changing operations
{ cmd: 'update_resource' } // Don't cache this
{ cmd: 'create_engagement' } // Don't cache this

If you need to manually clear cache:

import { Inject, Injectable } from '@nestjs/common';
import Redis from 'ioredis';
import { AppMicroservice } from '@lib/common/enum/app-microservice.enum';

@Injectable()
export class SomeService {
  constructor(
    @Inject(AppMicroservice.Redis.name)
    private readonly redisClient: Redis,
  ) {}

  async invalidateResourceCache(resourceId: string) {
    // Delete keys matching a pattern.
    // Note: KEYS blocks Redis while it scans the keyspace; it's fine for
    // small keyspaces, but prefer SCAN in production.
    const pattern = 'microservice:get_resource:*';
    const keys = await this.redisClient.keys(pattern);
    if (keys.length > 0) {
      await this.redisClient.del(...keys);
    }
  }
}
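
Because `KEYS` blocks Redis while it walks the entire keyspace, an incremental `SCAN` sweep is safer on large datasets. A sketch, using a structural client type so it is not tied to a specific library (ioredis exposes `scan`/`del` with compatible signatures); the helper name is illustrative:

```typescript
// Minimal structural client type matching the SCAN/DEL commands we need.
type ScanClient = {
  scan(cursor: string, match: 'MATCH', pattern: string, count: 'COUNT', n: number): Promise<[string, string[]]>;
  del(...keys: string[]): Promise<number>;
};

// Incrementally delete keys matching a pattern, batch by batch.
async function deleteByPattern(redis: ScanClient, pattern: string): Promise<number> {
  let cursor = '0';
  let deleted = 0;
  do {
    const [nextCursor, keys] = await redis.scan(cursor, 'MATCH', pattern, 'COUNT', 100);
    if (keys.length > 0) {
      deleted += await redis.del(...keys);
    }
    cursor = nextCursor;
  } while (cursor !== '0'); // cursor '0' means the scan is complete
  return deleted;
}
```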

All cached data automatically expires after TTL. No manual cleanup needed.


If caching doesn't seem to be working, check:

  1. ✅ Redis is running: redis-cli ping → should return PONG
  2. ✅ cacheOptions.enabled is true
  3. ✅ Result is not null or undefined
  4. ✅ Check logs for cache errors

If Redis memory usage is high, solutions:

  1. Reduce TTL values
  2. Use more specific cache keys (avoid caching large datasets)
  3. Monitor Redis memory: redis-cli info memory
  4. Consider cache size limits

If you're seeing stale data, solutions:

  1. Reduce TTL for volatile data
  2. Implement cache invalidation on updates
  3. Use cache only for truly immutable/slow-changing data

Example: Fetching Resource Profile

| Scenario | Response Time | Savings |
| --- | --- | --- |
| No Cache (Microservice call) | 45ms | - |
| Cache Hit (Redis) | 3ms | 93% faster |

Benefit: For 1000 requests/minute, caching saves ~42 seconds of cumulative response time.
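
The savings estimate follows directly from the table above:

```typescript
// Back-of-the-envelope check of the savings figure.
const requestsPerMinute = 1000;
const savedMsPerRequest = 45 - 3; // uncached minus cached response time
const savedSecondsPerMinute = (requestsPerMinute * savedMsPerRequest) / 1000;
console.log(savedSecondsPerMinute); // 42
```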


When to use caching:

  • ✅ Read-only operations
  • ✅ Data changes infrequently
  • ✅ High read volume
  • ✅ Acceptable slight staleness

When NOT to use caching:

  • ❌ Real-time data
  • ❌ Transactional operations
  • ❌ Personalized/user-specific data (unless scoped properly)
  • ❌ Large datasets

Remember: Caching is a performance optimization, not a replacement for proper service design.