4.2.7. Generative AI Application Security
First Principle: Generative AI applications introduce a fundamentally new attack surface — the model itself becomes a target through prompt injection, data poisoning, and output manipulation. Traditional application security controls don't address these AI-specific threats.
This is entirely new content for the SCS-C03 — no C02 equivalent exists.
Amazon Bedrock provides secure access to foundation models:
- Guardrails: Define content filters, denied topics, and word filters that apply to model inputs and outputs
- Model access control: IAM policies control which models each role can invoke
- Data privacy: Customer data is not used to train models; data doesn't leave the Region
- VPC connectivity: Use VPC endpoints to access Bedrock without internet exposure
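The model-access and VPC-connectivity points above can be combined in a single identity policy. A minimal sketch, assuming a hypothetical model ARN and VPC endpoint ID: the role may invoke only one approved foundation model, and invocations are denied unless they arrive through the VPC endpoint.

```python
import json

# Illustrative IAM policy (model ARN and VPC endpoint ID are hypothetical)
# that enforces model access control and private-path access for Bedrock.
bedrock_invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowApprovedModelOnly",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            # Only the vetted model may be invoked by this role.
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
        {
            "Sid": "DenyNonVpcEndpointPath",
            "Effect": "Deny",
            "Action": "bedrock:InvokeModel",
            "Resource": "*",
            # Hypothetical endpoint ID: block any call that bypasses the VPC endpoint.
            "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0abc123def456789"}},
        },
    ],
}

print(json.dumps(bedrock_invoke_policy, indent=2))
```

Attach this to the application role rather than to users, so the chatbot's permissions stay scoped to exactly one model.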
OWASP Top 10 for LLM Applications (exam-relevant selection):
| Risk | Description | AWS Mitigation |
|---|---|---|
| Prompt injection | Attacker manipulates model behavior through crafted inputs | Bedrock Guardrails, input validation |
| Insecure output handling | Application trusts model output without validation | Output sanitization, Guardrails content filters |
| Training data poisoning | Malicious data influences model behavior | SageMaker ML Lineage Tracking, dataset integrity validation |
| Denial of service | Resource exhaustion through expensive model invocations | API Gateway throttling, Bedrock service quotas |
| Sensitive data disclosure | Model reveals training data or PII in responses | Bedrock Guardrails, PII filtering |
| Excessive agency | Model given overly broad tool/function access | Least-privilege tool definitions, IAM scoping |
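The "insecure output handling" row deserves a concrete illustration: the application should never render model output verbatim. A minimal application-side sanitizer sketch (the regex patterns are illustrative, not exhaustive PII detection, and complement rather than replace Guardrails PII filters):

```python
import re

# Defense in depth: redact PII-looking tokens in model output before the
# application displays or logs it. Patterns here are deliberately simple.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def sanitize_model_output(text: str) -> str:
    """Redact email- and SSN-shaped strings from untrusted model output."""
    text = EMAIL_RE.sub("[REDACTED-EMAIL]", text)
    return SSN_RE.sub("[REDACTED-SSN]", text)

print(sanitize_model_output("Contact jane@example.com, SSN 123-45-6789."))
```

The same principle applies to HTML-escaping model output before rendering it in a browser, which closes the door on model-generated XSS.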
Securing AI/ML Workloads on AWS:
- SageMaker AI: Network isolation (VPC), encryption at rest and in transit, IAM roles per notebook/training job/endpoint
- Bedrock: Guardrails for content safety, model access governance, VPC endpoints for private access
- Data protection: Encrypt training data in S3, use KMS for model artifacts, enable CloudTrail for audit
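The data-protection bullet can be enforced rather than merely recommended. A sketch of a bucket policy (bucket name is hypothetical) that rejects any training-data upload not encrypted with SSE-KMS:

```python
import json

# Illustrative S3 bucket policy: deny PutObject unless the request specifies
# SSE-KMS, so training data is always encrypted at rest with a KMS key.
BUCKET = "training-data-bucket"  # hypothetical bucket name
enforce_kms_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedTrainingData",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        }
    ],
}

print(json.dumps(enforce_kms_policy, indent=2))
```

With this policy in place, misconfigured training jobs or scripts fail loudly at upload time instead of silently writing plaintext objects.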
⚠️ Exam Trap: Bedrock Guardrails is the primary control for prompt injection and content safety. If a question describes preventing a model from generating harmful content or blocking prompt injection attacks, Guardrails is the answer — not WAF or network controls.
Scenario: A company deploys a customer-facing chatbot using Bedrock. During testing, a researcher demonstrates that prompt injection can make the chatbot reveal the system prompt and generate inappropriate content. You configure Bedrock Guardrails with denied topics, content filters (including the prompt-attack filter), and word filters to block these attacks.
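The scenario's fix can be sketched as the parameters you would pass to the Bedrock `CreateGuardrail` API. All names, topic definitions, and messages below are illustrative; the actual call (`boto3.client("bedrock").create_guardrail(**guardrail_params)`) is omitted since it requires AWS credentials.

```python
# Illustrative Guardrails configuration for the chatbot scenario: a denied
# topic for system-prompt disclosure plus content filters, including the
# prompt-attack filter (which applies to inputs only, hence outputStrength NONE).
guardrail_params = {
    "name": "chatbot-guardrail",  # hypothetical guardrail name
    "topicPolicyConfig": {
        "topicsConfig": [
            {
                "name": "SystemPromptDisclosure",
                "definition": (
                    "Requests to reveal, repeat, or summarize the system "
                    "prompt or internal instructions."
                ),
                "type": "DENY",
            }
        ]
    },
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    "blockedInputMessaging": "Sorry, I can't help with that request.",
    "blockedOutputMessaging": "Sorry, I can't provide that response.",
}
```

Because the guardrail intercepts both inputs and outputs, the same configuration blocks the injection attempt on the way in and the inappropriate content on the way out.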
Reflection Question: How do AI-specific security controls (Guardrails, content filters) differ from traditional application security controls (WAF, input validation), and why are both needed?