2.1.1.6. Comparative Table: Compute Service Selection Criteria
💡 First Principle: Informed architectural decisions stem from a clear understanding of each service's inherent strengths, limitations, and operational model, enabling optimal alignment with workload requirements.
Scenario: A startup needs to deploy a new application that uses Docker containers. They are a small team with limited operational expertise and want to minimize infrastructure management while ensuring high scalability for their containerized workloads.
Selecting the appropriate compute service is a critical architectural decision. This table summarizes key criteria to guide your choice, emphasizing the trade-offs involved across various dimensions.
| Feature | Amazon EC2 | Amazon ECS/EKS (EC2 launch type) | AWS Fargate (ECS/EKS) | AWS Lambda | AWS Batch |
|---|---|---|---|---|---|
| Control/Mgmt. | High (OS, software) | Medium (container orchestration) | Low (containers only) | Very Low (code only) | Low (batch jobs) |
| Scalability | Auto Scaling Groups (elastic) | ECS/EKS Auto Scaling (elastic) | Automatic (highly elastic) | Automatic (extremely elastic) | Automatic (highly elastic) |
| Cost Model | Per instance-hour (static) | Per instance-hour + container overhead | Per vCPU/GB-second (serverless) | Per invocation + GB-second (serverless) | Per vCPU/GB-second (jobs) |
| Use Case | Legacy apps, custom OS, long-running | Microservices, containerized apps, stateful | Stateless containers, reduced ops overhead | Event-driven, APIs, data processing | HPC, large-scale parallel processing |
| Run Duration | Long-running | Long-running | Long-running | Up to 15 min per invocation | Hours to days |
| Packaging | AMI, custom install | Docker container image | Docker container image | Code/ZIP file (runtime-specific) | Docker container image |
| Cost Optimization | RIs, Savings Plans, Spot Instances, right-sizing | RIs, Savings Plans, Spot Instances, right-sizing | Pay-per-use; optimize vCPU/GB | Pay-per-use; optimize memory/duration | Pay-per-use, Spot Instances |
| Operational Effort | High (OS patching, scaling) | Medium (cluster management, scaling) | Low (no server management) | Very Low (no server management) | Low (job management) |
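The selection criteria in the table can be distilled into a toy decision helper. This is a deliberate simplification for illustration (the rules, function name, and parameters are assumptions, not an official AWS selection algorithm), but it captures the main branching logic the table implies:

```python
# Toy decision helper distilled from the comparison table above.
# The rules are a simplification for illustration, not an official
# AWS selection algorithm.

def pick_compute_service(containerized: bool, event_driven: bool,
                         run_minutes: float, wants_low_ops: bool) -> str:
    if event_driven and run_minutes <= 15 and not containerized:
        return "AWS Lambda"             # short, event-driven code
    if containerized and wants_low_ops:
        return "AWS Fargate"            # containers, no server management
    if containerized:
        return "Amazon ECS/EKS on EC2"  # containers with cluster control
    return "Amazon EC2"                 # custom OS, legacy, long-running

# The startup in the scenario: Docker containers, small team, low ops.
print(pick_compute_service(containerized=True, event_driven=False,
                           run_minutes=60, wants_low_ops=True))
# → AWS Fargate
```

Note how the scenario's inputs (containerized workload, minimal operational expertise) route directly to Fargate, foreshadowing the reflection question below.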
Visual: Compute Service Comparison Matrix (Simplified)
⚠️ Common Pitfall: Analyzing cost based only on the per-hour price. The total cost of ownership (TCO) must include the operational effort (staff time) required to manage the solution. A "cheaper" EC2 instance may be more expensive overall than a managed Fargate or Lambda solution due to management overhead.
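A minimal sketch makes this pitfall concrete. All dollar figures and hours below are hypothetical placeholders, not real AWS prices or staffing data:

```python
# Hypothetical TCO comparison: raw infrastructure price vs. managed service.
# All figures are illustrative assumptions, not real AWS prices.

def monthly_tco(infra_cost, ops_hours, hourly_staff_rate=75.0):
    """TCO = infrastructure spend + staff time spent on operations."""
    return infra_cost + ops_hours * hourly_staff_rate

# EC2 looks cheaper on paper, but needs patching/scaling work every month.
ec2_tco = monthly_tco(infra_cost=200.0, ops_hours=10)      # 200 + 750 = 950
# Fargate costs more per unit of compute but needs almost no ops time.
fargate_tco = monthly_tco(infra_cost=350.0, ops_hours=1)   # 350 + 75 = 425

assert fargate_tco < ec2_tco  # the "cheaper" instance costs more overall
```

With these placeholder numbers, the instance that is $150/month cheaper on the bill ends up roughly twice as expensive once staff time is counted.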
Key Trade-Offs:
- Cost Model vs. Workload Pattern: A per-hour model (EC2) is cost-effective for steady-state workloads. A pay-per-use model (Lambda/Fargate) is cost-effective for spiky or intermittent workloads.
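The trade-off above can be sketched numerically. The Lambda-style rates below are placeholders in the ballpark of published pay-per-use pricing, and the EC2 hourly rate is purely illustrative; treat this as a sketch of the comparison, not a pricing calculator:

```python
# Illustrative comparison of a per-hour model (EC2-style) and a
# pay-per-use model (Lambda-style). All rates are hypothetical.

HOURS_PER_MONTH = 730

def per_hour_monthly_cost(hourly_rate=0.10):
    # Per instance-hour: you pay whether or not requests arrive.
    return hourly_rate * HOURS_PER_MONTH

def pay_per_use_monthly_cost(invocations,
                             gb_seconds_per_invocation=0.5,
                             price_per_gb_second=0.0000167,
                             price_per_invocation=0.0000002):
    # Pay-per-use: cost scales with actual traffic.
    return invocations * (gb_seconds_per_invocation * price_per_gb_second
                          + price_per_invocation)

steady = per_hour_monthly_cost()            # fixed cost, traffic-independent
spiky = pay_per_use_monthly_cost(1_000_000) # scales with 1M invocations
```

With these assumed rates, a million invocations a month costs a few dollars on the pay-per-use model versus a fixed ~$73 for an always-on instance; as sustained traffic grows, the per-hour model eventually wins, which is exactly the steady-state vs. spiky distinction in the bullet above.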
Reflection Question: Based on the table, why would AWS Fargate (with either ECS or EKS) be the optimal choice for this startup's needs compared to running containers on EC2 instances, considering their limited operational expertise and need for high scalability?