5.3. AI Governance and Responsible AI
💡 First Principle: AI governance operationalizes the organization's AI policies — it creates the audit trail, accountability structures, and monitoring mechanisms that allow an organization to demonstrate to regulators, customers, and internal stakeholders that its AI systems are operating as intended and within defined boundaries.
Governance isn't a feature you add to a system; it's an architectural property that must be designed in from the start.
| Governance Pillar | What It Requires | AWS Implementation |
|---|---|---|
| Audit trail | Every FM invocation logged with prompt + response | Model Invocation Logs → S3/CloudWatch |
| Access control | Who can invoke which models and prompts | IAM policies, Bedrock resource policies |
| Content guardrails | Input/output filtering enforced consistently | Bedrock Guardrails at org level |
| Human oversight | High-stakes decisions reviewed before action | Lambda human-in-the-loop gate |
| Bias monitoring | Regular evaluation for discriminatory outputs | Bedrock Model Evaluations, SageMaker Clarify |
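The audit-trail pillar above is enabled account-wide through Bedrock's model invocation logging API. As a minimal sketch, the helper below builds the `loggingConfig` payload that API accepts; the bucket, prefix, log-group, and role ARN are placeholder values, not real resources:

```python
# Illustrative sketch: the loggingConfig shape accepted by Bedrock's
# put_model_invocation_logging_configuration API. Bucket, prefix,
# log-group, and role ARN below are placeholders.

def build_invocation_logging_config(bucket: str, prefix: str, log_group: str) -> dict:
    """Build a config that captures every prompt and response to S3 and CloudWatch."""
    return {
        "textDataDeliveryEnabled": True,        # log prompts + completions for text models
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
        "s3Config": {"bucketName": bucket, "keyPrefix": prefix},
        "cloudWatchConfig": {
            "logGroupName": log_group,
            "roleArn": "arn:aws:iam::123456789012:role/BedrockLoggingRole",  # placeholder
        },
    }

config = build_invocation_logging_config("my-audit-bucket", "bedrock/", "/bedrock/invocations")
# In a real account this payload would be applied with:
#   boto3.client("bedrock").put_model_invocation_logging_configuration(loggingConfig=config)
```

Once applied, every `InvokeModel` call in the account is logged to the configured destinations with no application-code changes, which is what makes the audit trail an account-level property rather than a per-app one.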
An FM application without governance produces outputs that can't be audited, decisions that can't be explained, and behavior that can't be proven compliant — regardless of how technically sophisticated the application is.
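The human-oversight pillar usually comes down to a routing decision inside the gating Lambda. The sketch below shows only that core decision; the action categories, confidence threshold, and queue destination are illustrative assumptions, not a Bedrock feature:

```python
# Minimal sketch of a human-in-the-loop gate's routing logic: send
# high-stakes or low-confidence FM-driven actions to a reviewer queue
# instead of acting automatically. Categories and threshold are
# assumed policy values for illustration.

HIGH_STAKES_ACTIONS = {"loan_denial", "account_closure", "medical_advice"}
CONFIDENCE_THRESHOLD = 0.85  # assumed org policy value

def route_fm_decision(action: str, confidence: float) -> str:
    """Return 'auto_approve' or 'human_review' for a proposed FM-driven action."""
    if action in HIGH_STAKES_ACTIONS or confidence < CONFIDENCE_THRESHOLD:
        return "human_review"   # e.g., publish to an SQS review queue
    return "auto_approve"

print(route_fm_decision("loan_denial", 0.99))   # high-stakes: always reviewed
print(route_fm_decision("faq_answer", 0.92))    # low-stakes, confident: auto
```

The key design choice is that high-stakes actions are gated unconditionally, so no confidence score can bypass review for them.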
⚠️ Common Misconception: That a model card documents only the foundation model and therefore only needs to be created by the model provider (Anthropic, Amazon, Meta). In reality, when deploying fine-tuned or customized models, the deploying organization must maintain its own model cards documenting intended use, limitations, evaluation results, and responsible AI considerations for that specific deployment.
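As a sketch of what such a deployment-specific model card might capture, the structure below is illustrative (the field names, model ID, and example values are assumptions; SageMaker Model Cards offers a managed equivalent):

```python
# Illustrative structure for a deployment-specific model card,
# covering the fields discussed above. Field names and example
# values are assumptions for this sketch.
from dataclasses import dataclass, field

@dataclass
class DeploymentModelCard:
    model_id: str                  # base FM identifier
    customization: str             # e.g. fine-tuning, RAG, prompt-only
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    limitations: list = field(default_factory=list)
    evaluation_results: dict = field(default_factory=dict)   # metric -> score
    responsible_ai_notes: list = field(default_factory=list)

card = DeploymentModelCard(
    model_id="anthropic.claude-3-sonnet",   # illustrative ID, not a full versioned one
    customization="fine-tuned on internal support tickets",
    intended_use="Draft tier-1 support replies, reviewed by an agent before sending",
    out_of_scope_uses=["legal or medical advice"],
    limitations=["English only", "inherits base model's knowledge cutoff"],
    evaluation_results={"toxicity_rate": 0.002, "answer_accuracy": 0.91},
    responsible_ai_notes=["Bias evaluated quarterly with SageMaker Clarify"],
)
```

Keeping the card as structured data rather than a free-form document makes it easy to validate in CI that every deployed model actually has one, with all required fields populated.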