Copyright (c) 2026 MindMesh Academy. All rights reserved. This content is proprietary and may not be reproduced or distributed without permission.

3.1.2. Application-Level Caching: ElastiCache and CloudFront

šŸ’” First Principle: A cache intercepts expensive operations and serves the result from fast memory instead. The economics are compelling: a database query that takes 50ms and consumes database CPU can become a cache hit that takes 0.5ms and adds virtually no load to the database tier. Caching is frequently the highest-ROI reliability improvement available.

Without caching, every user request hits the database. Under heavy load, the database becomes the bottleneck — it queues requests, query latency grows, and eventually the whole application slows or fails. Caching breaks this dependency for frequently accessed, slowly changing data.

Amazon ElastiCache is a managed in-memory caching service supporting two engines:

| Feature | Redis | Memcached |
| --- | --- | --- |
| Data structures | Rich (lists, sets, hashes, sorted sets, streams) | Simple strings only |
| Persistence | āœ… Yes (RDB snapshots, AOF logging) | āŒ No |
| Replication | āœ… Yes (read replicas) | āŒ No |
| Multi-AZ failover | āœ… Yes (automatic) | āŒ No |
| Pub/Sub messaging | āœ… Yes | āŒ No |
| Horizontal scaling | āœ… Cluster mode with sharding | āœ… Multi-node |
| Use when | Sessions, leaderboards, pub/sub, complex data | Simple caching, multi-threaded scaling |
Caching Strategies:

| Strategy | How It Works | Best For |
| --- | --- | --- |
| Cache-Aside (Lazy Loading) | Application checks cache first; on a miss, reads the DB and populates the cache | Read-heavy workloads; only caches what's requested |
| Write-Through | Application writes to cache AND DB on every write | Data that must always be fresh in cache; higher write cost |
| Write-Behind (Write-Back) | Write to cache immediately; async flush to DB | High write throughput; risk of data loss if cache fails |
| Read-Through | Cache library fetches from DB automatically on a miss | Simplified application code |
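The cache-aside flow can be sketched in a few lines. This is a minimal illustration, not ElastiCache client code: a plain dict stands in for the Redis cache, and `fetch_from_db` is a hypothetical stand-in for a real database query.

```python
# Cache-aside (lazy loading) sketch: check cache first, fall back to
# the database on a miss, then populate the cache for future reads.
db = {"user:1": {"name": "Ada"}}   # stand-in for the database
cache = {}                          # stand-in for ElastiCache/Redis

def fetch_from_db(key):
    # Simulated expensive database read (hypothetical helper).
    return db.get(key)

def get(key):
    # 1. Check the cache first.
    if key in cache:
        return cache[key]           # cache hit: fast path
    # 2. On a miss, read from the database...
    value = fetch_from_db(key)
    # 3. ...and populate the cache so subsequent reads are hits.
    if value is not None:
        cache[key] = value
    return value
```

Note the key property of cache-aside: only data that is actually requested ever enters the cache, which is why it suits read-heavy workloads.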

TTL (Time to Live): Every cached item should have a TTL — the maximum time it stays in cache before expiring. Without a TTL, stale data accumulates. If the TTL is too short, your cache hit rate drops and you lose the performance benefit. Setting a TTL requires understanding how often your data changes and how much staleness is acceptable.
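The TTL mechanics can be sketched as follows. In real Redis this is handled natively by commands such as SETEX/EXPIRE; here a dict entry carries its own expiry timestamp as an illustration.

```python
import time

# TTL sketch: each entry stores an expiry deadline; a read past the
# deadline evicts the entry and counts as a cache miss.
cache = {}

def set_with_ttl(key, value, ttl_seconds):
    # Store the value alongside the moment it stops being valid.
    cache[key] = (value, time.monotonic() + ttl_seconds)

def get(key):
    entry = cache.get(key)
    if entry is None:
        return None                 # never cached
    value, expires_at = entry
    if time.monotonic() > expires_at:
        del cache[key]              # expired: evict, treat as a miss
        return None
    return value
```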

CloudFront for Static and Dynamic Content:

CloudFront caches content at AWS edge locations globally. For caching purposes, CloudFront is the right choice when:

  • Content is served to geographically distributed users
  • Objects are static or change infrequently (images, CSS, JS, API responses with caching headers)
  • You want to absorb traffic spikes at the edge rather than hitting your origin

ElastiCache is application-layer caching (your app code controls what's cached). CloudFront is network-layer caching (HTTP responses cached at edge).
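Because CloudFront caching is driven by HTTP headers, the origin controls edge behavior simply by setting Cache-Control on its responses. The handler shape below is hypothetical; the headers themselves are standard HTTP.

```python
# Sketch of an origin handler emitting caching headers that CloudFront
# (or any HTTP cache) honors. Static assets get a long edge TTL;
# dynamic responses get a short TTL that still absorbs traffic spikes.
def handle_request(path):
    if path.endswith((".css", ".js", ".png", ".jpg")):
        # Static assets: safe to cache at the edge for a day.
        headers = {"Cache-Control": "public, max-age=86400"}
    else:
        # Dynamic content: a short TTL still collapses bursts of
        # identical requests into a single origin fetch.
        headers = {"Cache-Control": "public, max-age=30"}
    return {"status": 200, "headers": headers}
```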

āš ļø Exam Trap: ElastiCache Redis is the exam's default answer for session storage because it supports persistence and replication. Memcached doesn't persist data — if a Memcached node fails, all cached data is lost and every subsequent request hits the database (cache stampede risk). For production session stores, Redis is the correct choice.

Reflection Question: An e-commerce site uses ElastiCache Memcached for session storage. During a node failure, 10,000 users are suddenly logged out and all hit the database simultaneously, causing an outage. What two architectural changes address this problem?

Written by Alvin Varughese, Founder • 15 professional certifications