4.2. Enterprise Integration Architectures
💡 First Principle: GenAI components are not standalone applications — they must integrate into existing enterprise ecosystems: legacy systems, event buses, deployment pipelines, and API management layers. The integration architecture determines whether GenAI capabilities are discoverable, governable, and operable at enterprise scale.
Domain 2 specifically tests Skill 2.3 (enterprise integration architecture) and Skill 2.5 (CI/CD and GenAI gateway). These are practical DevOps and architecture skills that sit alongside the AI-specific knowledge — and candidates with strong AWS architecture backgrounds often score better here than on the pure GenAI sections.
| Integration Pattern | How It Works | Best For | Key AWS Service |
|---|---|---|---|
| Synchronous API | Client waits for FM response | Real-time chat, Q&A | API Gateway → Lambda → Bedrock |
| Async queue | Request queued, result polled or pushed | Long-context generation, batch processing | SQS → Lambda → Bedrock → SNS |
| Event-driven | FM invoked on trigger | Document processing, RAG ingestion | S3 Event → Lambda → Bedrock |
| GenAI Gateway | Centralized proxy for all FM calls | Enterprise multi-team governance | API Gateway + Lambda routing layer |
| Streaming | Tokens returned to the client as they are generated | Improved perceived latency | API Gateway WebSocket or Lambda response streaming |
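The synchronous pattern in the first row can be sketched as a Lambda handler sitting behind API Gateway. This is an illustrative sketch, not a reference implementation: the model ID and the proxy-integration event shape are assumptions, and the Bedrock client is injectable so the handler can be exercised without AWS credentials.

```python
import json


def handler(event, context, client=None):
    """Synchronous pattern: API Gateway -> Lambda -> Bedrock.

    The caller blocks until the model responds, so reserve this path
    for short, interactive requests (chat, Q&A).
    """
    if client is None:
        # Imported lazily so the handler can be unit-tested with a stub client.
        import boto3
        client = boto3.client("bedrock-runtime")

    body = json.loads(event["body"])  # API Gateway proxy integration shape

    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
        messages=[{"role": "user", "content": [{"text": body["prompt"]}]}],
        inferenceConfig={"maxTokens": 512},
    )
    answer = response["output"]["message"]["content"][0]["text"]
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```

The same handler body slots into the async and event-driven rows unchanged; only the trigger (SQS message, S3 event) and the response path (SNS, callback) differ.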
⚠️ Common Misconception: "A GenAI gateway is just an API Gateway with Bedrock behind it." In reality, a production GenAI gateway is a full governance layer: centralized authentication, per-team usage quotas, cost-attribution tags, model routing logic, prompt logging, guardrails enforcement, and observability — a single centralized service that every application team routes through.
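Several of those governance responsibilities are ordinary code in the gateway's routing layer. A minimal sketch of three of them — per-team quotas, model routing, and cost-attribution logging — with all class names, tier names, model IDs, and limits hypothetical; the actual Bedrock call is injected so the governance logic stays independent of the SDK:

```python
import json
import time


class QuotaExceeded(Exception):
    pass


class GenAIGateway:
    """Sketch of a gateway routing layer. Tiers and budgets are illustrative."""

    # Logical tiers decouple application teams from concrete model IDs,
    # so routing can change centrally without client redeploys.
    ROUTES = {
        "fast": "anthropic.claude-3-haiku-20240307-v1:0",
        "quality": "anthropic.claude-3-5-sonnet-20240620-v1:0",
    }

    def __init__(self, monthly_token_quotas):
        self.quotas = monthly_token_quotas  # team -> monthly token budget
        self.usage = {}                     # team -> tokens consumed so far
        self.audit_log = []                 # prompt/usage records

    def invoke(self, team, tier, prompt, estimated_tokens, bedrock_call):
        # Quota enforcement: reject before spending model capacity.
        used = self.usage.get(team, 0)
        if used + estimated_tokens > self.quotas.get(team, 0):
            raise QuotaExceeded(f"{team} is over its monthly token budget")

        # Model routing: translate the logical tier to a concrete model.
        model_id = self.ROUTES[tier]
        result = bedrock_call(model_id, prompt)  # injected Bedrock invocation

        # Cost attribution: record who used which model, and how much.
        self.usage[team] = used + estimated_tokens
        self.audit_log.append(json.dumps({
            "ts": time.time(), "team": team, "model": model_id,
            "tokens": estimated_tokens,
        }))
        return result
```

Every application team calls `invoke` instead of hitting Bedrock directly — that single choke point is what makes quotas, routing changes, and audit logging enforceable across the enterprise.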