# Microservice Caching Guide

## Overview

The MicroserviceClientService supports Redis-based caching for microservice responses. This feature reduces network calls, improves response times, and decreases load on downstream services.
## Features

- ✅ Automatic cache key generation based on command + payload hash
- ✅ Configurable TTL (Time To Live) per request
- ✅ Cache hit/miss logging for monitoring
- ✅ Graceful degradation — cache errors don’t break requests
- ✅ Payload normalization — ensures consistent cache keys
- ✅ Custom key prefixes for better organization
## Basic Usage

### Without Caching (Default Behavior)

```typescript
// No caching - always calls the microservice
const resource = await this.microserviceClient.sendWithContext<Resource>(
  this.logger,
  this.dataOwnerClient,
  { cmd: AppMicroservice.DataOwner.cmd.GetResourceById },
  { id: resourceId },
  null,
);
```

### With Caching Enabled

```typescript
// Cache for 5 minutes (default TTL)
const resource = await this.microserviceClient.sendWithContext<Resource>(
  this.logger,
  this.dataOwnerClient,
  { cmd: AppMicroservice.DataOwner.cmd.GetResourceById },
  { id: resourceId },
  null,
  { enabled: true }, // ✅ Enable caching
);
```

### Custom TTL

```typescript
// Cache for 10 minutes (600 seconds)
const resource = await this.microserviceClient.sendWithContext<Resource>(
  this.logger,
  this.dataOwnerClient,
  { cmd: AppMicroservice.DataOwner.cmd.GetResourceById },
  { id: resourceId },
  null,
  {
    enabled: true,
    ttl: 600, // ✅ 10 minutes
  },
);
```

### Custom Key Prefix

```typescript
// Use a custom prefix for better cache organization
const resource = await this.microserviceClient.sendWithContext<Resource>(
  this.logger,
  this.dataOwnerClient,
  { cmd: AppMicroservice.DataOwner.cmd.GetResourceById },
  { id: resourceId },
  null,
  {
    enabled: true,
    ttl: 300,
    keyPrefix: 'resource:fetch', // ✅ Custom prefix
  },
);
// Cache key: microservice:resource:fetch:<payload_hash>
```
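Putting the calls above together, the cache options argument has roughly the following shape. The interface name and exact typings below are our inference from the usage examples, not the library's declared types:

```typescript
// Inferred from the usage above; the actual interface name and types may differ.
interface MicroserviceCacheOptions {
  enabled: boolean;    // opt in to caching per request
  ttl?: number;        // seconds; the examples above treat 300 (5 minutes) as the default
  keyPrefix?: string;  // optional prefix for the generated cache key
}

const options: MicroserviceCacheOptions = {
  enabled: true,
  ttl: 600,
  keyPrefix: 'resource:fetch',
};
```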
## Cache Key Structure

Cache keys are automatically generated using this pattern:

```
microservice:<prefix>:<payload_hash>
```

### Examples

```typescript
// Command: 'data-owner.resource.getById'
// Payload: { id: '123' }
// Key: microservice:data-owner:resource:getById:5f8d0e4e6c1b4e2a3f4d5c6b

// With custom prefix: 'resource:data'
// Key: microservice:resource:data:5f8d0e4e6c1b4e2a3f4d5c6b
```

### How Payload Hash Works
Section titled “How Payload Hash Works”- Normalization: Payload keys are sorted alphabetically, undefined values removed
- Hashing: SHA-256 hash of normalized JSON
- Truncation: First 24 characters used for brevity
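Under those rules, key generation can be sketched in a few lines. This is an illustrative reconstruction, not the service's actual code; `normalizePayload` and `buildCacheKey` are hypothetical names:

```typescript
import { createHash } from 'node:crypto';

// Hypothetical sketch of the three steps above (not the real internals).
function normalizePayload(payload: Record<string, unknown>): Record<string, unknown> {
  return Object.keys(payload)
    .filter((key) => payload[key] !== undefined) // 1a. drop undefined values
    .sort()                                      // 1b. sort keys alphabetically
    .reduce<Record<string, unknown>>((acc, key) => {
      acc[key] = payload[key];
      return acc;
    }, {});
}

function buildCacheKey(prefix: string, payload: Record<string, unknown>): string {
  const json = JSON.stringify(normalizePayload(payload));                    // step 2 input
  const hash = createHash('sha256').update(json).digest('hex').slice(0, 24); // steps 2 + 3
  return `microservice:${prefix}:${hash}`;
}

// Key order does not matter after normalization:
const a = buildCacheKey('resource:fetch', { id: '123', name: 'John' });
const b = buildCacheKey('resource:fetch', { name: 'John', id: '123' });
console.log(a === b); // true
```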
Same payload = same hash (deterministic):

```typescript
// These generate the SAME cache key:
{ id: '123', name: 'John' }
{ name: 'John', id: '123' } // ✅ Order doesn't matter

// These generate DIFFERENT cache keys:
{ id: '123' }
{ id: '456' } // ❌ Different values
```

## Real-World Examples
### Example 1: Fetching Resource Data (Frequently Accessed)

```typescript
@Injectable({ scope: Scope.REQUEST })
export class ResourcesService {
  constructor(
    private readonly logger: LogsService,
    private readonly microserviceClient: MicroserviceClientService,
    @Inject(AppMicroservice.SystemAdmin.name)
    private readonly usersService: ClientProxy,
  ) {}

  async findOne(id: string): Promise<Resource> {
    const resource = await this.resourceRepository.findOne(id);

    // ✅ Cache user data for 10 minutes (frequently accessed, rarely changes)
    if (resource.created_by) {
      resource.created_by = await this.microserviceClient.sendWithContext<Partial<IUser>>(
        this.logger,
        this.usersService,
        { cmd: AppMicroservice.SystemAdmin.UserResources.cmd.FindCreatedBy },
        { id: resource.created_by?.toString() || '' },
        null,
        {
          enabled: true,
          ttl: 600, // 10 minutes
          keyPrefix: 'user:profile',
        },
      );
    }

    return resource;
  }
}
```

### Example 2: Master Data (Rarely Changes)
```typescript
@Injectable({ scope: Scope.REQUEST })
export class EngagementsService {
  async getCoverageScheme(schemeCode: string) {
    // ✅ Cache master data for 1 hour (rarely changes)
    const scheme = await this.microserviceClient.sendWithContext<CoverageScheme>(
      this.logger,
      this.masterDataClient,
      { cmd: AppMicroservice.MasterData.cmd.GetCoverageSchemeByCode },
      { scheme_code: schemeCode },
      null,
      {
        enabled: true,
        ttl: 3600, // 1 hour
        keyPrefix: 'masterdata:coverage',
      },
    );

    return scheme;
  }
}
```

### Example 3: Storage URLs (Short-lived)
```typescript
@Injectable({ scope: Scope.REQUEST })
export class ResourcesService {
  async getResourceAttachments(attachmentIds: string[]) {
    // ✅ Cache file URLs for 5 minutes (short-lived due to security)
    const attachments = await this.microserviceClient.sendWithContext<AttachmentPayload[]>(
      this.logger,
      this.storagesService,
      { cmd: AppMicroservice.Storage.cmd.GetPath },
      { images: attachmentIds },
      [],
      {
        enabled: true,
        ttl: 300, // 5 minutes
        keyPrefix: 'storage:urls',
      },
    );

    return attachments;
  }
}
```

### Example 4: Dynamic Data (No Caching)
```typescript
@Injectable({ scope: Scope.REQUEST })
export class OrdersService {
  async getCurrentQueue() {
    // ❌ Don't cache real-time data
    const queue = await this.microserviceClient.sendWithContext<QueueData>(
      this.logger,
      this.queueService,
      { cmd: 'get_current_queue' },
      { department_id: 'OPERATIONS' },
      null,
      // No cache options - always fetch fresh data
    );

    return queue;
  }
}
```

## Cache Strategy Guidelines
### ✅ GOOD Candidates for Caching

| Data Type | TTL Suggestion | Reason |
|---|---|---|
| User profiles | 10-30 minutes | Rarely change during session |
| Master data (titles, regions, coverage schemes) | 1-24 hours | Very rarely change |
| Configuration data | 1 hour | Static during operation |
| Lookup tables | 30 minutes - 1 hour | Infrequently updated |
| File/Attachment URLs | 5-10 minutes | Valid for short period |
| Computed aggregations (if idempotent) | 5-15 minutes | Expensive to calculate |
### ❌ BAD Candidates for Caching

| Data Type | Why Not Cache |
|---|---|
| Real-time data (queue status, live counts) | Data changes constantly |
| Transactional data (orders, payments) | Must always be current |
| User session data | Already in Redis via JWT |
| Sensitive data (passwords, tokens) | Security risk |
| Large datasets (paginated lists) | Memory overhead |
| POST/PUT/DELETE operations | State-changing operations |
## Monitoring Cache Performance

### Log Messages

The caching system emits detailed structured logs:
```jsonc
// Cache Hit (data retrieved from cache)
{
  "level": "info",
  "message": "[Cache Hit] Retrieved cached data for command 'get_resource_by_id'",
  "context": {
    "action": "CACHE_HIT_GET_RESOURCE_BY_ID",
    "correlation_id": "abc-123",
    "cache_key": "microservice:get_resource:5f8d0e4e6c1b"
  }
}

// Cache Miss (not in cache, calling microservice)
{
  "level": "debug",
  "message": "[Cache Miss] No cached data found for command 'get_resource_by_id'",
  "context": {
    "action": "CACHE_MISS_GET_RESOURCE_BY_ID",
    "correlation_id": "abc-123",
    "cache_key": "microservice:get_resource:5f8d0e4e6c1b"
  }
}

// Cache Set (storing response in cache)
{
  "level": "debug",
  "message": "[Cache Set] Cached response for command 'get_resource_by_id'",
  "context": {
    "action": "CACHE_SET_GET_RESOURCE_BY_ID",
    "correlation_id": "abc-123",
    "cache_key": "microservice:get_resource:5f8d0e4e6c1b",
    "ttl_seconds": 300
  }
}

// Cache Error (Redis error, but request continues)
{
  "level": "warn",
  "message": "[Cache Error] Failed to retrieve cache for command 'get_resource_by_id'",
  "context": {
    "action": "CACHE_ERROR_GET_RESOURCE_BY_ID",
    "correlation_id": "abc-123",
    "error": "Connection timeout"
  }
}
```

### Metrics to Monitor
Section titled “Metrics to Monitor”- Cache Hit Rate:
(Cache Hits) / (Cache Hits + Cache Misses) * 100% - Average Response Time: Compare cached vs uncached requests
- Cache Size: Monitor Redis memory usage
- TTL Effectiveness: Check if data expires before reuse
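As a quick worked example of the hit-rate formula (the helper name is ours, not part of the service):

```typescript
// Cache hit rate as a percentage, per the formula above.
function cacheHitRate(hits: number, misses: number): number {
  const total = hits + misses;
  return total === 0 ? 0 : (hits / total) * 100;
}

console.log(cacheHitRate(850, 150)); // 85 — i.e. an 85% hit rate
```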
## Best Practices

### 1. Choose Appropriate TTL

```typescript
// ❌ BAD: TTL too long for frequently changing data
{ enabled: true, ttl: 86400 } // 24 hours for user status

// ✅ GOOD: Reasonable TTL based on data volatility
{ enabled: true, ttl: 300 } // 5 minutes for user status
```

### 2. Use Descriptive Key Prefixes
```typescript
// ❌ BAD: Generic prefix
{ enabled: true, keyPrefix: 'data' }

// ✅ GOOD: Specific prefix for easy debugging
{ enabled: true, keyPrefix: 'resource:coverage:schemes' }
```

### 3. Don’t Cache Null/Empty Results
The system automatically skips caching if:

- The result is `null`
- The result is `undefined`

```typescript
// If the microservice returns null, it won't be cached
const resource = await sendWithContext(..., null, { enabled: true });
// null result → not cached → next request will call the microservice again ✅
```

### 4. Handle Cache Errors Gracefully
Cache errors never break your requests. The system:
- Logs the error
- Continues with microservice call
- Returns the microservice result
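That flow amounts to cache-aside with all cache errors swallowed. A minimal sketch, assuming a generic `CacheLike` client; the names here are illustrative, not the service's actual API:

```typescript
interface CacheLike {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

// Cache-aside with graceful degradation: cache failures are logged and
// swallowed; the microservice call always proceeds and its result is returned.
async function sendWithCacheFallback<T>(
  cache: CacheLike,
  key: string,
  ttlSeconds: number,
  callMicroservice: () => Promise<T>,
): Promise<T> {
  try {
    const cached = await cache.get(key);
    if (cached !== null) return JSON.parse(cached) as T; // cache hit
  } catch (err) {
    console.warn('[Cache Error] read failed, continuing without cache', err);
  }

  const result = await callMicroservice(); // cache miss or cache error

  if (result !== null && result !== undefined) {
    try {
      await cache.set(key, JSON.stringify(result), ttlSeconds);
    } catch (err) {
      console.warn('[Cache Error] write failed, returning result anyway', err);
    }
  }
  return result;
}
```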
```typescript
// Even if Redis is down, this works:
const resource = await sendWithContext(..., { enabled: true });
// Redis down → logs warning → calls microservice → returns result ✅
```

### 5. Cache Read-Only Operations

```typescript
// ✅ GOOD: Cache GET operations
{ cmd: 'get_resource_by_id' } // Cache this

// ❌ BAD: Don't cache state-changing operations
{ cmd: 'update_resource' }    // Don't cache this
{ cmd: 'create_engagement' }  // Don't cache this
```

## Cache Invalidation
### Manual Invalidation (if needed)

If you need to clear the cache manually:

```typescript
import { Inject, Injectable } from '@nestjs/common';
import Redis from 'ioredis';
import { AppMicroservice } from '@lib/common/enum/app-microservice.enum';

@Injectable()
export class SomeService {
  constructor(
    @Inject(AppMicroservice.Redis.name)
    private readonly redisClient: Redis,
  ) {}

  async invalidateResourceCache(resourceId: string) {
    // Delete keys matching a specific cache key pattern.
    // Note: KEYS blocks Redis while it scans; prefer SCAN for large keyspaces.
    const pattern = 'microservice:get_resource:*';
    const keys = await this.redisClient.keys(pattern);

    if (keys.length > 0) {
      await this.redisClient.del(...keys);
    }
  }
}
```

### Automatic Expiration
All cached data automatically expires after its TTL; no manual cleanup is needed.
## Troubleshooting

### Cache Not Working?

Check:
- ✅ Redis is running: `redis-cli ping` should return `PONG`
- ✅ `cacheOptions.enabled` is `true`
- ✅ The result is not `null` or `undefined`
- ✅ Check logs for cache errors
### High Memory Usage?

Solutions:
- Reduce TTL values
- Use more specific cache keys (avoid caching large datasets)
- Monitor Redis memory: `redis-cli info memory`
- Consider cache size limits
### Stale Data Issues?

Solutions:
- Reduce TTL for volatile data
- Implement cache invalidation on updates
- Use cache only for truly immutable/slow-changing data
## Performance Benchmarks

Example: Fetching Resource Profile
| Scenario | Response Time | Savings |
|---|---|---|
| No Cache (Microservice call) | 45ms | - |
| Cache Hit (Redis) | 3ms | 93% faster |
Benefit: at 1,000 requests/minute, caching saves ~42 seconds of cumulative response time every minute (42 ms saved per request × 1,000 requests).
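The arithmetic behind that figure, using the benchmark numbers above:

```typescript
// Back-of-envelope check of the cumulative savings claim.
const requestsPerMinute = 1000;
const uncachedMs = 45;
const cachedMs = 3;

const savedSecondsPerMinute = (requestsPerMinute * (uncachedMs - cachedMs)) / 1000;
console.log(savedSecondsPerMinute); // 42
```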
## Summary

When to use caching:
- ✅ Read-only operations
- ✅ Data changes infrequently
- ✅ High read volume
- ✅ Acceptable slight staleness
When NOT to use caching:
- ❌ Real-time data
- ❌ Transactional operations
- ❌ Personalized/user-specific data (unless scoped properly)
- ❌ Large datasets
Remember: Caching is a performance optimization, not a replacement for proper service design.