4.4.3. Emerging Architecture Risks: Containers, Microservices, Serverless
💡 First Principle: Traditional security boundaries assume that applications run as isolated processes on dedicated operating systems with clearly defined network perimeters. Containers, microservices, and serverless functions shatter every one of those assumptions — they share kernels, communicate over ephemeral internal networks, scale to thousands of instances in seconds, and disappear before traditional monitoring can observe them. Security for these architectures requires shifting controls from the infrastructure layer (where you no longer have visibility) to the workload identity and build pipeline layers (where you still do).
Containers vs. virtual machines — the isolation difference that matters:
| Property | Virtual Machine | Container |
|---|---|---|
| Isolation boundary | Hypervisor + separate kernel per VM | Shared host kernel; namespaces + cgroups |
| Escape impact | Hypervisor exploit required (rare, severe) | Kernel exploit affects ALL containers on host |
| Startup time | Minutes | Milliseconds to seconds |
| Attack surface | Hypervisor emulation layer | Host kernel syscall interface |
| Image size | Gigabytes (full OS) | Megabytes (application + dependencies only) |
The shared kernel is the critical security difference. In a VM environment, compromising one VM's guest kernel gives you control of that VM alone — the hypervisor enforces hardware-level isolation. In a container environment, all containers on a host share the same Linux kernel. A kernel vulnerability (or a misconfigured --privileged flag) gives an attacker access to every container on that host and the host itself.
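A minimal way to see the shared kernel for yourself: the same call run inside any container on a host reports the host's kernel release, because containers do not virtualize the kernel (a VM would report its own guest kernel instead).

```python
import platform

# Inside a container, this reports the HOST kernel version, not a
# container-specific one: containers share the host kernel rather than
# virtualizing it. Every container on the same host sees the same value.
def kernel_release() -> str:
    return platform.release()

print(kernel_release())  # e.g. "6.1.0-18-amd64" on one host, identical in all its containers
```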
Container-specific attack vectors:
- Privileged containers: Running a container with `--privileged` or `hostNetwork: true` grants it full access to the host's devices, network stack, and kernel capabilities — effectively eliminating the container isolation boundary entirely. A compromised privileged container IS the host.
- Image supply chain: Container images are pulled from registries (Docker Hub, ECR, GCR). A malicious or outdated base image introduces vulnerabilities at build time that propagate to every deployment. Images tagged `:latest` can change contents between pulls without notice.
- Secrets in images: Hardcoded credentials, API keys, or certificates baked into container image layers persist in the image layer history — even if "deleted" in a subsequent layer. Image layer inspection recovers them trivially.
- Container escape: Mounting the Docker socket (`/var/run/docker.sock`) inside a container gives the container control over all other containers on the host. Kernel exploits (e.g., CVE-2019-5736 in runc) enable direct container-to-host escape.
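The attack vectors above can be caught before deployment with a simple policy check. The sketch below runs such checks over a simplified container spec (a plain dict, not a real Kubernetes or Docker API object — the field names are illustrative assumptions):

```python
# Hypothetical spec checker: flags the container attack vectors described
# above. The spec format is a simplified stand-in, not a real API schema.
def risky_settings(spec: dict) -> list[str]:
    findings = []
    if spec.get("privileged"):
        findings.append("privileged container: isolation boundary removed")
    if spec.get("hostNetwork"):
        findings.append("hostNetwork: shares the host network stack")
    for mount in spec.get("volumeMounts", []):
        if mount.get("hostPath") == "/var/run/docker.sock":
            findings.append("docker.sock mounted: control over all containers on host")
    image = spec.get("image", "")
    if image.endswith(":latest") or (":" not in image and "@" not in image):
        findings.append("mutable image reference: contents can change between pulls")
    return findings

spec = {"image": "myapp:latest", "privileged": True,
        "volumeMounts": [{"hostPath": "/var/run/docker.sock"}]}
for finding in risky_settings(spec):
    print(finding)  # three findings for this spec
```

In practice this role is filled by admission controllers and policy engines (e.g., Kubernetes Pod Security Standards, covered in the table below), but the logic reduces to checks like these.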
Container security controls:
| Control | What It Prevents |
|---|---|
| Minimal base images (Alpine, distroless) | Reduces CVE attack surface — fewer packages = fewer vulnerabilities |
| Non-root execution | Limits blast radius of container compromise; prevents trivial privilege escalation |
| Image scanning in CI/CD | Catches known CVEs before deployment; blocks images with critical vulnerabilities |
| Immutable image references (digest hashes, not `:latest`) | Prevents tag hijacking where a registry compromise replaces image contents |
| Read-only root filesystem | Prevents malware from writing persistent files inside the container |
| Pod Security Standards (Kubernetes) | Enforces security policies (no privileged containers, no host network) across the cluster |
| Runtime security monitoring | Detects anomalous syscalls, file access, or network behavior inside running containers |
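The "image scanning in CI/CD" control amounts to a gate in the pipeline: fail the build if the scanner reports anything at or above a severity threshold. A minimal sketch, assuming a hypothetical scanner report format (a list of findings with `id` and `severity` fields):

```python
# CI gate over a scanner report: block deployment if any finding meets or
# exceeds the threshold severity. Report format is an assumed example, not
# the output of any specific scanner.
SEVERITY_ORDER = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def should_block(report: list[dict], threshold: str = "CRITICAL") -> bool:
    limit = SEVERITY_ORDER.index(threshold)
    return any(SEVERITY_ORDER.index(f["severity"]) >= limit for f in report)

report = [
    {"id": "CVE-2023-0001", "severity": "MEDIUM"},
    {"id": "CVE-2023-0002", "severity": "CRITICAL"},
]
print(should_block(report))  # True — the pipeline fails and the image is not deployed
```

Real pipelines wire this decision to the exit code of a scanner step so that a critical CVE stops the deployment automatically.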
Microservices security challenges:
Microservices decompose monolithic applications into dozens or hundreds of independently deployable services communicating over the network. This creates security challenges that monoliths do not have:
- East-west traffic explosion: In a monolith, internal function calls are in-process and invisible to the network. In microservices, every service-to-service call is a network request — creating hundreds of network paths that each need authentication, authorization, and encryption. Traditional perimeter firewalls see none of this internal traffic.
- Service identity and authentication: Each microservice needs a verifiable identity. Mutual TLS (mTLS) through a service mesh (Istio, Linkerd) provides cryptographic service-to-service authentication and encrypts all inter-service traffic — replacing implicit trust based on network location with explicit identity verification.
- API gateway as the new perimeter: The API gateway is the single entry point that enforces authentication, rate limiting, input validation, and routing before requests reach internal services. Compromising the API gateway is equivalent to compromising the perimeter firewall in traditional architecture.
- Distributed authorization: Authorization decisions must be made at each service, not just at the gateway. A user authorized to read their own orders must not be able to read other users' orders through an internal service that lacks its own authorization check (BOLA/IDOR at the microservice level).
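The distributed authorization point can be made concrete: each internal service enforces its own ownership check, even on requests relayed from the gateway or from "trusted" peers. A minimal sketch with hypothetical in-memory data (the service, store, and field names are illustrative):

```python
# Per-service authorization check: the order service verifies resource
# ownership itself instead of trusting that the gateway already did.
ORDERS = {
    "order-1": {"owner": "alice", "total": 40},
    "order-2": {"owner": "bob", "total": 99},
}

def get_order(order_id: str, caller_user: str) -> dict:
    order = ORDERS.get(order_id)
    if order is None:
        raise KeyError(order_id)
    # BOLA/IDOR defense at THIS service, not only at the API gateway:
    # an internal caller presenting alice's identity cannot read bob's order.
    if order["owner"] != caller_user:
        raise PermissionError(f"{caller_user} may not read {order_id}")
    return order

print(get_order("order-1", "alice"))  # allowed: alice owns order-1
# get_order("order-2", "alice")       # raises PermissionError
```

In a service-mesh deployment, `caller_user` would be derived from a verified credential (e.g., an mTLS identity or a signed token), never from an unauthenticated header.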
Serverless (Function-as-a-Service) security:
Serverless functions (AWS Lambda, Azure Functions, Google Cloud Functions) execute in ephemeral containers managed entirely by the cloud provider. The security model shifts dramatically:
| Traditional Control | Serverless Reality |
|---|---|
| OS patching | Provider responsibility — you cannot patch the runtime |
| Long-running agents (EDR, AV) | Impossible — functions live milliseconds to minutes |
| Network segmentation | Limited — functions execute in provider-managed environments |
| Static IP monitoring | Functions may use different IPs per invocation |
| Persistent filesystem monitoring | No persistent filesystem — functions are stateless |
Serverless-specific risks:
- Overprivileged execution roles: Functions assigned broad IAM permissions (e.g., `AdministratorAccess`) give any vulnerability in the function code access to the entire cloud account. Least privilege is critical — each function should have only the permissions it needs for its specific task.
- Event injection: Serverless functions are triggered by events (HTTP requests, queue messages, S3 uploads, database changes). Malicious event data can inject commands if the function does not validate and sanitize inputs — the same injection principles apply, but the attack surface is the event source, not a web form.
- Execution environment reuse (warm-start data leakage): Function execution environments may be reused across invocations. Sensitive data left in `/tmp` or global variables from a previous invocation can be read by subsequent invocations — potentially from different users or tenants.
- Dependency vulnerabilities: Serverless deployment packages include application dependencies. A vulnerable library in the deployment package is exploitable during every function invocation. SCA scanning of serverless packages is essential.
Architectural risk comparison:
The progression from traditional to serverless shifts security responsibility from infrastructure controls you manage (OS, network, agents) to application-layer controls embedded in the build and deployment pipeline (image scanning, IAM policies, input validation, secrets management). The less infrastructure you control, the more your security depends on code, configuration, and identity — which is exactly why DevSecOps and shift-left practices are essential for cloud-native architectures.
⚠️ Exam Trap: The exam tests whether you understand that containers share the host kernel while VMs have independent kernels. A question might describe a vulnerability in the host Linux kernel and ask which workloads are affected: the answer is ALL containers on that host, while VMs are shielded by the hypervisor and are vulnerable only if their own guest kernels run the flawed version. Container isolation is process-level (namespaces/cgroups); VM isolation is hardware-level (hypervisor). Running a container with --privileged effectively removes the container boundary entirely.
Reflection Question: An organization runs 200 microservices in Kubernetes. A developer proposes that all services should trust each other's requests because they are "inside the cluster." Using zero trust principles and the concept of east-west traffic, explain why this is dangerous and what controls should be implemented instead.