Copyright (c) 2025 MindMesh Academy. All rights reserved. This content is proprietary and may not be reproduced or distributed without permission.

4.3.4. Sample Questions - Domain 4: Cost Control

Question 1:

A company operates a critical batch processing workload that runs daily for a few hours. The workload is fault-tolerant and can tolerate interruptions. The architect needs to design a solution that minimizes compute costs while ensuring the workload completes successfully. Which "EC2" purchasing option is MOST suitable for this scenario?

A) "On-Demand Instances" B) "Reserved Instances (RIs)" C) "Spot Instances" D) "Dedicated Hosts"

Correct Answer: C
Explanation:
  • A) "On-Demand Instances": "On-Demand" instances are billed by the second (or hour) with no long-term commitment. While flexible, they are the most expensive purchasing option and do not offer significant cost savings for a fault-tolerant workload.
  • B) "Reserved Instances (RIs)": "RIs" offer significant discounts for committing to a consistent amount of compute capacity for a 1- or 3-year term. They are best for steady-state, predictable workloads, not for workloads that are designed to tolerate interruptions and can fluctuate.
  • C) "Spot Instances": "Spot Instances" allow you to bid for unused "EC2" capacity, often at significant discounts (up to 90%) compared to "On-Demand" prices. They are ideal for fault-tolerant, flexible, and interruption-tolerant workloads, such as batch processing, as instances can be interrupted by AWS with a 2-minute warning if Spot capacity is no longer available. This directly addresses the requirement for minimizing compute costs for a fault-tolerant workload. This aligns with the First Principle of Cost-Optimized Compute for interruptible workloads.
  • D) "Dedicated Hosts": "Dedicated Hosts" provide physical "EC2" servers dedicated for your use, offering licensing flexibility for existing server-bound software. They are the most expensive option and offer no cost savings for a general fault-tolerant batch workload.

Question 2:

A company uses "Amazon S3" to store large amounts of log data. Most of the data is accessed frequently for the first 30 days, then accessed infrequently for the next 60 days, and finally, rarely accessed but needs to be retained for compliance for 7 years. The architect needs to design a solution to automatically manage the data lifecycle and optimize storage costs. Which "S3" feature should be configured?

A) "S3 Intelligent-Tiering" B) "S3 Lifecycle Policies" C) "S3 Object Lock" D) "S3 Glacier Deep Archive"

Correct Answer: B
Explanation:
  • A) "S3 Intelligent-Tiering": "S3 Intelligent-Tiering" automatically moves objects between frequent and infrequent access tiers based on changing access patterns. While it automates tiering, it doesn't provide the ability to automatically move data to "Glacier"/"Glacier Deep Archive" after a specific period or expire it after 7 years according to a fixed, predetermined schedule as defined in the requirements. It's for unpredictable access.
  • B) "S3 Lifecycle Policies": "S3 Lifecycle Policies" allow you to define rules to automatically transition objects to different "S3 storage classes" (e.g., "S3 Standard" to "S3 Standard-IA", then to "Glacier", then to "Glacier Deep Archive") or expire them after a specified period. This perfectly matches the requirement for a predetermined, automated lifecycle management across multiple access patterns and a fixed retention period for compliance. This aligns with the First Principle of Automated Data Lifecycle Management and Cost-Efficient Storage.
  • C) "S3 Object Lock": "S3 Object Lock" provides Write Once Read Many ("WORM") capability, preventing objects from being deleted or overwritten for a fixed amount of time or indefinitely. It's for immutability and compliance, not for automated tiering or general lifecycle management.
  • D) "S3 Glacier Deep Archive": "S3 Glacier Deep Archive" is a storage class within "S3", used for the lowest-cost archive storage. It's a destination for lifecycle policies, not the mechanism for managing the entire lifecycle automatically across multiple tiers.

Question 3:

An application running on "EC2 instances" in a private subnet frequently needs to download large datasets from "Amazon S3". The current architecture routes this traffic through a "NAT Gateway", incurring significant data transfer costs. The architect needs to reduce these costs and keep the traffic private, without traversing the public internet. Which AWS service should be implemented?

A) "VPC Peering" B) "AWS Direct Connect" C) "S3 Gateway Endpoint" D) "AWS PrivateLink"

Correct Answer: C
Explanation:
  • A) "VPC Peering": "VPC Peering" connects two "VPCs", allowing instances in one "VPC" to communicate directly with instances in another. It does not provide private access to AWS services like "S3" or reduce "NAT Gateway" costs.
  • B) "AWS Direct Connect": "Direct Connect" provides a dedicated network connection from on-premises to AWS. It is for hybrid cloud connectivity, not for private access to AWS services from within a "VPC".
  • C) "S3 Gateway Endpoint": A "Gateway Endpoint for S3" (and "DynamoDB") allows instances in your "VPC" to access "Amazon S3" using private IP addresses, without requiring an "Internet Gateway", "NAT Gateway", or "VPN" connection. Traffic between your "VPC" and "S3" is routed privately over the Amazon network and is free of charge. This directly addresses the requirements for reducing "NAT Gateway" costs and keeping traffic private for "S3" access. This aligns with the First Principle of Private Service Access and Cost-Efficient Data Transfer.
  • D) "AWS PrivateLink": "AWS PrivateLink" allows you to privately access AWS services (like "Systems Manager", "Kinesis", other non-"S3"/"DynamoDB" services) as well as services hosted by other AWS accounts or "SaaS" partners within your "VPC" privately. While it provides private connectivity, an "S3 Gateway Endpoint" is the specific, free, and more direct solution for "S3" traffic within a "VPC". "PrivateLink" for "S3" would be an "Interface Endpoint", which does incur cost (though less than "NAT Gateway").

Question 4:

A startup is launching a new mobile backend that is expected to have highly unpredictable and spiky traffic patterns. They want a compute solution that scales automatically from zero to meet demand and only incurs costs when the code is actually running, minimizing expenses during idle periods. Which AWS service is the MOST cost-efficient for this type of workload?

A) "Amazon EC2 with Auto Scaling" B) "Amazon ECS on EC2" C) "AWS Lambda" D) "AWS Fargate"

Correct Answer: C
Explanation:
  • A) "Amazon EC2 with Auto Scaling": While "EC2 Auto Scaling" can scale to meet demand, "EC2 instances" still incur costs even when idle (if they are running), and scaling from zero takes more time and management than serverless functions.
  • B) "Amazon ECS on EC2": "ECS" can orchestrate containers on "EC2 instances". Similar to "EC2", you still manage and pay for the underlying "EC2 instances", which will incur costs even if containers are not running if the instances are idle.
  • C) "AWS Lambda": "AWS Lambda" is an event-driven, serverless compute service that runs code in response to events. It automatically scales from zero to meet demand and you are billed only for the compute time consumed (per invocation and duration/"GB-second"), meaning there is no cost when your code is not running. This perfectly matches the requirements for automatic scaling from zero, pay-per-use, and cost efficiency for spiky, unpredictable workloads. This aligns with the First Principle of Serverless Cost Optimization and Dynamic Scalability.
  • D) "AWS Fargate": "AWS Fargate" is a serverless compute engine for containers. While it removes the need to manage "EC2 instances", you are still billed for the "vCPU" and memory allocated to your running containers, even if they are not actively processing requests. While more efficient than "EC2", "Lambda" is generally more cost-efficient for workloads that scale truly to zero and have very short, spiky execution patterns.

Question 5:

A company needs to analyze its historical AWS spending patterns, identify specific cost drivers across different services and linked accounts, and forecast future costs. They also want to identify opportunities for cost optimization, such as underutilized resources or potential "Savings Plans" recommendations. Which AWS service provides these capabilities?

A) "AWS Budgets" B) "AWS Cost Explorer" C) "AWS Cost and Usage Report (CUR)" D) "AWS Trusted Advisor"

Correct Answer: B
Explanation:
  • A) "AWS Budgets": "AWS Budgets" allows you to set custom budgets and receive alerts. It's for cost control and alerts, not for detailed historical analysis, identification of cost drivers, or forecasting.
  • B) "AWS Cost Explorer": "AWS Cost Explorer" is a free service that allows you to visualize, understand, and manage your AWS costs and usage over time. It provides customizable reports and forecasts, helps identify trends, forecasts future costs, and offers optimization recommendations (e.g., for "RIs"/"Savings Plans", right-sizing). This directly addresses all the requirements for analyzing historical spend, identifying cost drivers, and forecasting. This aligns with the First Principle of Proactive Cost Analysis and Data-Driven Optimization.
  • C) "AWS Cost and Usage Report (CUR)": The "CUR" provides the most comprehensive dataset about your AWS costs and usage. While it contains the raw data for such analysis, it's a raw data feed that requires external tools (like "Athena", "Redshift", or a third-party "BI" tool) to query and visualize effectively. "Cost Explorer" is the AWS-managed service that provides the immediate visualization and analysis capabilities.
  • D) "AWS Trusted Advisor": "AWS Trusted Advisor" provides recommendations to follow AWS best practices across cost optimization, performance, security, fault tolerance, and service limits. While it offers cost optimization recommendations, it doesn't provide the detailed historical analysis, forecasting, or granular breakdown capabilities of "Cost Explorer".