Copyright (c) 2026 MindMesh Academy. All rights reserved. This content is proprietary and may not be reproduced or distributed without permission.

3.1.4.1. Blue-Green, Canary, Rolling, and Slot-Based Deployments

Deployment strategies control how new versions replace old ones in production. Blue-Green maintains two identical environments, one active (blue) and one idle (green): deploy to green, validate, swap traffic. Rollback is instant: swap back. Canary releases the new version to a small percentage of traffic first (typically 5%), monitoring for errors before expanding; if metrics degrade, all traffic routes back to the stable version. Azure App Service deployment slots implement Blue-Green natively: deploy to the staging slot, validate with health checks, swap. Auto-rollback driven by Application Insights monitors error rates post-swap and reverts automatically if thresholds are breached, reducing MTTR to roughly 30 seconds without human intervention. Rolling deployments update instances sequentially and suit stateless services behind load balancers. A/B testing extends Canary by routing specific user segments to the new version for data-driven feature validation.

Blue-Green deployment mechanics in Azure App Service use deployment slots. The production slot serves live traffic while the staging slot receives the new version. After validation (smoke tests, health checks), az webapp deployment slot swap atomically redirects production traffic to the staging slot. The old production version now sits in staging, available for instant rollback.
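The swap mechanics above can be sketched as a small simulation. This is illustrative only: the class and version labels are invented for the sketch, and the real operation is the az CLI swap, not application code.

```python
# Minimal simulation of blue-green slot-swap mechanics (illustrative;
# in Azure the swap is performed by `az webapp deployment slot swap`).

class SlotSwapApp:
    def __init__(self, production_version: str):
        # Two slots: "production" serves live traffic, "staging" holds the candidate.
        self.slots = {"production": production_version, "staging": None}

    def deploy_to_staging(self, version: str) -> None:
        self.slots["staging"] = version

    def swap(self) -> None:
        # Atomic exchange: staging becomes production and vice versa, so the
        # old version stays parked in staging for instant rollback.
        self.slots["production"], self.slots["staging"] = (
            self.slots["staging"], self.slots["production"])

app = SlotSwapApp("v1")
app.deploy_to_staging("v2")
app.swap()   # v2 now live, v1 parked in staging
app.swap()   # instant rollback: v1 live again, v2 back in staging
```

The key property is that a swap never destroys the previous version; rollback is just a second swap.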

Canary deployments use Azure Traffic Manager or Azure Front Door to split traffic by percentage. A 5% canary sends 1-in-20 requests to the new version while monitoring error rates, latency, and business metrics. Progressive rollout: 5% → 10% → 25% → 50% → 100%, with automatic rollback if any metric breaches thresholds.
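The 1-in-20 split can be made concrete with a bucketing sketch. Real percentage routing happens in Traffic Manager or Front Door, not in application code; the hashing scheme here is an assumption for illustration.

```python
# Sketch of a percentage-based canary split. Each request is bucketed
# 0-99 and the lowest buckets go to the canary backend.

def route(request_id: int, canary_percent: int = 5) -> str:
    bucket = request_id % 100          # stand-in for a hash of request/user id
    return "canary" if bucket < canary_percent else "stable"

counts = {"canary": 0, "stable": 0}
for rid in range(1000):
    counts[route(rid)] += 1
# A uniform stream of 1000 requests yields a 5% canary share:
# counts == {"canary": 50, "stable": 950}
```

Bucketing on a user or session id (rather than a per-request counter) keeps each user pinned to one version, which matters for A/B-style analysis.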

Rolling deployments update instances behind a load balancer one at a time (or in batches). Kubernetes rolling updates are the default Deployment strategy: with maxUnavailable: 25% and maxSurge: 25%, at least 75% of desired capacity remains available throughout the rollout. Readiness probes prevent bad pods from receiving traffic.
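The capacity bounds implied by those two settings can be computed directly. This assumes Kubernetes' documented rounding behavior (maxUnavailable rounds down, maxSurge rounds up when given as percentages):

```python
import math

def rolling_bounds(replicas: int, max_unavailable_pct: int, max_surge_pct: int):
    # Kubernetes rounds maxUnavailable down and maxSurge up for percentages.
    max_unavailable = math.floor(replicas * max_unavailable_pct / 100)
    max_surge = math.ceil(replicas * max_surge_pct / 100)
    min_available = replicas - max_unavailable   # pods guaranteed serving
    max_total = replicas + max_surge             # pods allowed to exist at once
    return min_available, max_total

# With 8 replicas and 25%/25%: at least 6 pods serving, at most 10 running.
print(rolling_bounds(8, 25, 25))
```

Tuning these knobs trades rollout speed against spare capacity: maxSurge: 0 avoids extra resource cost, while maxUnavailable: 0 guarantees full capacity but requires surge headroom.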

Deployment rings segment users by risk tolerance. Ring 0 (internal team) gets new features immediately, Ring 1 (early adopters who opted in) gets them next, Ring 2 (general population) gets them last. Unlike percentage-based canary, rings target specific populations, enabling targeted feedback collection from known user segments.
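Ring gating reduces to a simple comparison: a feature released "up to" some ring is visible to that ring and every earlier one. The segment names and ring numbers below are illustrative, not a product API.

```python
# Hypothetical ring gate. Lower ring number = earlier, higher-trust audience.
RING_OF = {"internal": 0, "early_adopter": 1, "general": 2}

def feature_enabled(segment: str, rollout_ring: int) -> bool:
    # Visible to the target ring and all rings before it.
    return RING_OF[segment] <= rollout_ring

feature_enabled("internal", 0)        # Ring 0 release: internal team sees it
feature_enabled("early_adopter", 0)   # everyone else waits for a later ring
```

Because rings map to known populations rather than random traffic slices, feedback from Ring 0 and Ring 1 can be attributed to specific, reachable user groups.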

Immutable deployments — where each release creates entirely new infrastructure — eliminate configuration drift but require robust IaC and longer deployment times. Azure Container Apps and serverless platforms naturally support this pattern since each revision creates new container instances.

Traffic routing patterns for canary deployments use weighted backend pools. Azure Front Door and Application Gateway both support percentage-based routing: 95% to the stable backend, 5% to the canary. Health probes on the canary backend detect failures independently — if the canary becomes unhealthy, traffic automatically routes to the stable backend without manual intervention.
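The health-probe fallback behaves like weighted selection where an unhealthy backend's weight drops to zero. A sketch of that logic (backend names and weights are illustrative, not an Azure API):

```python
import random

def pick_backend(backends, rng=random):
    # backends: list of (name, weight, healthy) tuples.
    # Unhealthy backends are excluded, exactly as a failed health probe
    # removes a backend from the pool.
    live = [(name, weight) for name, weight, healthy in backends if healthy]
    if not live:
        raise RuntimeError("no healthy backends")
    names, weights = zip(*live)
    return rng.choices(names, weights=weights, k=1)[0]

# Canary marked unhealthy by its probe: 100% of traffic falls back to stable.
pick_backend([("stable", 95, True), ("canary", 5, False)])   # always "stable"
```

The important property is that failover is automatic: no weight change or human action is needed, the probe result alone redirects the 5%.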

Progressive delivery automation combines canary routing with automated analysis. Tools like Flagger (for Kubernetes) automatically promote or roll back canaries based on Prometheus metrics. Define success criteria (error rate < 1%, P99 latency < 200ms), and the tool manages traffic shifting through configurable steps with configurable evaluation intervals.
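The analysis loop Flagger runs can be sketched as follows. The step schedule and thresholds come from the text; the metrics_for() callback is a stand-in for a real Prometheus query, and the function names are invented for the sketch.

```python
# Flagger-style canary analysis, sketched. At each traffic step the metrics
# are evaluated against the success criteria; any breach triggers rollback.

STEPS = [5, 10, 25, 50, 100]   # percentage of traffic sent to the canary

def run_canary(metrics_for):
    for weight in STEPS:
        error_rate, p99_ms = metrics_for(weight)
        # Success criteria from the text: error rate < 1%, P99 latency < 200 ms.
        if error_rate >= 0.01 or p99_ms >= 200:
            return ("rolled_back", weight)   # breach: all traffic back to stable
    return ("promoted", 100)

healthy  = lambda w: (0.002, 120)                             # always in bounds
degraded = lambda w: (0.002, 120) if w < 25 else (0.05, 450)  # fails at 25%

run_canary(healthy)    # ("promoted", 100)
run_canary(degraded)   # ("rolled_back", 25)
```

A real controller also waits a configurable evaluation interval at each step so the metrics window reflects the new weight before judging it.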

Progressive exposure is the unifying principle across deployment strategies: limit the blast radius of any change. Blue-Green limits exposure by maintaining a rollback environment. Canary limits exposure by starting with a small traffic percentage. Ring-based deployment limits exposure by targeting internal users first. Azure Traffic Manager and Azure Front Door enable weighted routing across regions, supporting global canary patterns where 5% of traffic routes to the new version globally rather than region-by-region.

Deployment slot warmup ensures the staging slot is fully initialized (JIT compilation, cache warming) before the swap, preventing cold-start performance degradation. Health check endpoints should verify not just the application but also its dependencies — a slot that returns 200 while its database connection is broken will pass swap validation incorrectly.
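A dependency-aware health endpoint is straightforward to sketch. The check names and probe callables here are illustrative stand-ins for real dependency pings (database connection test, cache round-trip, and so on):

```python
# Health check that fails when any dependency fails, not just the app itself.

def health(checks: dict):
    # checks maps dependency name -> zero-argument probe returning True/False.
    failed = [name for name, probe in checks.items() if not probe()]
    # Report healthy only when the app *and* every dependency respond.
    return (200, "healthy") if not failed else (503, f"unhealthy: {failed}")

# A slot whose database connection is broken must fail swap validation:
health({"app": lambda: True, "database": lambda: False})
# -> (503, "unhealthy: ['database']")
```

Wiring this endpoint into slot warmup and swap validation means a broken connection string surfaces before the swap, not after.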

Rollback planning must account for data migrations. A blue-green swap reverting the application doesn't revert database schema changes. Use expand-contract migration patterns: add new columns in v2, stop using old columns in v3, remove old columns in v4. Each schema change is backward-compatible with both the current and previous application version.
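The expand-contract constraint can be expressed as a compatibility table: each schema step is rollback-safe only if both the current and the previous application version can run against it. The version labels below are illustrative.

```python
# Expand-contract sketch: which application versions each schema step supports.
SCHEMA_SUPPORTS = {
    "v2-expand":   {"v1", "v2"},  # new columns added; old app still works
    "v3-migrate":  {"v2", "v3"},  # app stops reading the old columns
    "v4-contract": {"v3", "v4"},  # old columns dropped
}

def safe_rollback(schema: str, current_app: str, previous_app: str) -> bool:
    # A blue-green swap back to previous_app is safe only if the deployed
    # schema supports both application versions.
    supported = SCHEMA_SUPPORTS[schema]
    return current_app in supported and previous_app in supported

safe_rollback("v2-expand", "v2", "v1")    # True: swapping back to v1 is safe
safe_rollback("v4-contract", "v4", "v2")  # False: v2 cannot run on the contracted schema
```

The table makes the rule mechanical: never ship a schema step whose support set excludes the version you would roll back to.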

Traffic Manager DNS-based routing enables global blue-green deployments across Azure regions. By changing traffic weights at the DNS level, teams can shift global traffic from one region set to another. This extends slot-based deployment patterns to multi-region architectures where single-region slots are insufficient.

Written by Alvin Varughese
Founder, 15 professional certifications