4.2.2. Event-Driven Automation: Lambda and S3 Event Notifications
💡 First Principle: Polling is expensive and slow. Instead of checking "has anything changed?" every 60 seconds, event-driven architecture inverts this: resources announce when they change, and automation responds immediately. This makes systems faster, cheaper, and more responsive than any polling approach.
AWS services emit events when things happen: an S3 object is uploaded, a DynamoDB stream record is written, an EC2 instance changes state. By connecting these events to Lambda functions, you create automation that responds in near real-time without any infrastructure to manage.
S3 Event Notifications:
S3 can publish events when objects are created, deleted, tagged, restored from Glacier, or replicated. Destinations:
| Destination | Use Case |
|---|---|
| SNS | Fan-out notification to multiple subscribers |
| SQS | Queued processing (handles backpressure for high-volume events) |
| Lambda | Direct invocation for immediate processing |
| EventBridge | Advanced routing with filtering and cross-account delivery |
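The destinations above map directly to the `NotificationConfiguration` payload accepted by boto3's `put_bucket_notification_configuration`. A minimal sketch that builds the payload for direct Lambda invocation; the function ARN and key filters here are placeholders, not values from this text:

```python
def s3_lambda_notification(function_arn, prefix, suffix):
    """Build a NotificationConfiguration dict wiring S3 object-created
    events to a Lambda function, filtered by key prefix and suffix.
    Pass the result to boto3: s3.put_bucket_notification_configuration(
        Bucket=..., NotificationConfiguration=...)."""
    return {
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": function_arn,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": prefix},
                            {"Name": "suffix", "Value": suffix},
                        ]
                    }
                },
            }
        ]
    }


# Hypothetical ARN for illustration only
cfg = s3_lambda_notification(
    "arn:aws:lambda:us-east-1:123456789012:function:make-thumbnail",
    "uploads/", ".jpg",
)
```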
Example pipeline: image uploaded to S3 → S3 event → Lambda → generate thumbnail → store in a different S3 prefix.
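A minimal sketch of the Lambda side of that pipeline, assuming the standard S3 event record shape. The actual image resizing is elided; the handler only computes the source and destination keys, and the `thumbnails/` prefix is a made-up convention for this example:

```python
import os
import urllib.parse


def lambda_handler(event, context):
    """Sketch: for each S3 record, return (bucket, source_key, thumb_key).
    Real thumbnail generation (download, resize, upload) would go here."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 URL-encodes object keys in event payloads (spaces arrive as '+')
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        thumb_key = f"thumbnails/{os.path.basename(key)}"
        results.append((bucket, key, thumb_key))
    return results
```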
EventBridge as the Central Event Bus:
EventBridge (covered in Phase 2) is the preferred integration point for operational automation because it supports:
- Pattern matching on event content
- Multiple simultaneous targets
- Cross-account and cross-region routing
- Scheduled rules (cron-based automation)
- Dead-letter queues for failed deliveries
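Pattern matching is declarative: a rule fires only when an event matches every field in its pattern. As a sketch, a pattern matching the EC2 state-change events used later in this section (the state list is illustrative):

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["stopped", "terminated"]
  }
}
```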
AWS Step Functions orchestrates multi-step workflows where the overall process must be tracked, steps may require human approval, or individual steps can fail and need retry logic.
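A minimal Amazon States Language sketch of the retry behavior just described; the single state and the function ARN are placeholders, not a complete workflow:

```json
{
  "Comment": "Sketch: one Lambda task with automatic retry and backoff",
  "StartAt": "ProcessChunk",
  "States": {
    "ProcessChunk": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-chunk",
      "Retry": [
        {
          "ErrorEquals": ["States.TaskFailed"],
          "IntervalSeconds": 5,
          "MaxAttempts": 3,
          "BackoffRate": 2.0
        }
      ],
      "End": true
    }
  }
}
```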
Lambda Limitations for Automation:
| Constraint | Value | Implication |
|---|---|---|
| Max execution timeout | 15 minutes | Long operations need Step Functions or SSM Automation |
| Max memory | 10,240 MB | Memory-intensive operations may need EC2 or Fargate |
| Max package size | 250 MB (unzipped) | Large dependencies need Lambda layers or container images |
| Concurrent executions | 1,000 (default, adjustable) | High-volume event processing needs throttling consideration |
Common Event-Driven Automation Patterns:
| Trigger | Event | Lambda Action |
|---|---|---|
| S3 upload | s3:ObjectCreated | Parse, transform, validate, route |
| EC2 state change | EC2 Instance State-change | Update DNS, notify, deregister from monitoring |
| Config rule violation | Config Rules Compliance Change | Auto-remediate, notify, create OpsItem |
| CloudTrail API call | AWS API Call via CloudTrail | Detect suspicious activity, block access, alert |
| DynamoDB stream | Record insert/modify/delete | Replicate to Elasticsearch, audit, trigger downstream |
⚠️ Exam Trap: S3 Event Notifications and EventBridge are two separate delivery paths for S3 events. S3 Event Notifications deliver directly to SNS/SQS/Lambda and support only simple prefix/suffix key filtering, and overlapping notification configurations on the same bucket are rejected. EventBridge receives all S3 events (once enabled on the bucket) and supports rich filtering by prefix, suffix, event type, and metadata, multiple rules matching the same event, and cross-account delivery. For complex filtering or cross-account routing, enable the S3 → EventBridge integration and use EventBridge rules.
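As an illustration of that rich filtering, a sketch of an EventBridge rule pattern that routes only `.csv` uploads to one target; a second rule with `"suffix": ".json"` would route the rest. The bucket name is a placeholder:

```json
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {"name": ["my-upload-bucket"]},
    "object": {"key": [{"suffix": ".csv"}]}
  }
}
```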
Reflection Question: Every time a new object is uploaded to an S3 bucket, you need to: (1) immediately trigger a Lambda to validate the file format, (2) send a notification to an SQS queue for async processing, and (3) route events matching *.csv to a different Lambda than events matching *.json. Which approach, S3 Event Notifications or EventBridge, best handles requirement 3, and why?