7.2.2. Code Review, SAST, DAST, and Breach Simulations
💡 First Principle: Different testing methods reveal different vulnerability classes. No single method provides comprehensive coverage — the most thorough security assurance programs combine SAST (source code), DAST (running application), SCA (third-party components), and manual review, each targeting different risk categories at appropriate SDLC stages.
Software security testing methods:
| Method | Tests | How | Finds | Misses / Limitations |
|---|---|---|---|---|
| SAST | Source code | Static analysis without execution | Injection patterns, hardcoded secrets, insecure code patterns | Runtime issues; logic flaws needing execution context |
| DAST | Running application | Sends inputs from outside | Input validation failures, auth issues, runtime config errors | Code-level root causes; unexercised code paths; requires a running environment |
| IAST | Running application | Agent inside app observes execution | Runtime vulnerabilities with code-level context (SAST precision plus DAST runtime visibility) | Performance overhead; requires an instrumented runtime |
| SCA | Third-party components | Compares component versions to CVE databases | Known vulnerabilities in dependencies (e.g., Log4Shell) | Custom code vulnerabilities; requires a maintained component database |
| Manual code review | Source code | Expert human analysis | Complex logic flaws; design issues; business logic | Time-intensive; cognitive fatigue; requires deep expertise |
| Fuzzing | Input handling | Random/malformed input injection | Memory corruption; crashes; input handling bugs | Logic flaws that don't crash |
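As a concrete illustration of what SAST catches statically and DAST confirms at runtime, here is a minimal sketch (hypothetical `find_user_*` functions, in-memory SQLite) of the injection pattern from the table: string concatenation into SQL, next to the parameterized fix.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # SAST tools flag this pattern statically: untrusted input
    # concatenated directly into a SQL statement.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value as data,
    # closing the injection path.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The classic payload returns every row through the unsafe path...
print(find_user_unsafe(conn, "x' OR '1'='1"))  # [(1,)]
# ...but is treated as one literal (non-matching) string by the safe path.
print(find_user_safe(conn, "x' OR '1'='1"))    # []
```

A DAST scanner would find the same flaw from the outside by sending the `' OR '1'='1` payload against the running application and observing the anomalous response, without ever seeing the source line that caused it.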
Shift-left security — cost to fix by SDLC phase:
| Phase | Relative Fix Cost | Testing Activities |
|---|---|---|
| Requirements | 1× | Threat modeling; security requirements definition |
| Design | 5× | Architecture review; STRIDE analysis |
| Implementation | 10× | SAST; peer code review; SCA |
| Testing | 20× | DAST; IAST; integration security testing |
| Deployment | 50× | Configuration scanning; pentest |
| Operations | 100× | Vulnerability scanning; incident response |
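The cost multipliers above compound quickly across a defect population. A minimal sketch, using the table's relative costs and purely hypothetical defect counts, comparing a test-late program against a shifted-left one:

```python
# Relative fix costs by SDLC phase, taken from the table above.
FIX_COST = {"requirements": 1, "design": 5, "implementation": 10,
            "testing": 20, "deployment": 50, "operations": 100}

def total_cost(found_at):
    """Sum relative fix cost for a {phase: defect_count} distribution."""
    return sum(FIX_COST[phase] * n for phase, n in found_at.items())

# Hypothetical: the same 100 defects, found late vs. found early.
late  = {"testing": 60, "operations": 40}           # security testing only at the end
early = {"requirements": 30, "implementation": 70}  # threat modeling + SAST up front

print(total_cost(late))   # 5200
print(total_cost(early))  # 730
```

With these illustrative numbers, catching the same defects early is roughly 7× cheaper in relative terms; the exact ratio depends on the actual defect distribution, but the direction of the argument holds for any distribution skewed toward early phases.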
The foundational argument for DevSecOps: a security defect found during requirements costs 1× to fix; the same defect found in production costs 100×.
Security metrics categories:
| Category | Example Metrics | Purpose |
|---|---|---|
| Coverage | % assets scanned, % systems with EDR deployed | Program completeness |
| Posture | # critical vulns open beyond SLA, patch compliance % | Current risk state |
| Speed | MTTD (mean time to detect), MTTR (mean time to remediate) | Operational effectiveness |
| Trend | Monthly patch compliance trend, phishing click rate over 12 months | Improvement direction |
| Compliance | % controls passing audit, policy acknowledgment rate | Regulatory readiness |
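The speed metrics in the table are straightforward to compute from incident timestamps. A minimal sketch with hypothetical incident records, taking MTTD as occurrence-to-detection and MTTR as detection-to-remediation (definitions vary by organization; some measure MTTR from occurrence):

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records with occurrence, detection,
# and remediation timestamps.
incidents = [
    {"occurred": "2024-03-01T08:00", "detected": "2024-03-01T20:00",
     "remediated": "2024-03-03T08:00"},
    {"occurred": "2024-03-10T00:00", "detected": "2024-03-10T06:00",
     "remediated": "2024-03-11T00:00"},
]

def hours_between(start, end):
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

# MTTD: mean hours from occurrence to detection.
mttd = mean(hours_between(i["occurred"], i["detected"]) for i in incidents)
# MTTR: mean hours from detection to remediation.
mttr = mean(hours_between(i["detected"], i["remediated"]) for i in incidents)

print(f"MTTD: {mttd:.1f}h, MTTR: {mttr:.1f}h")  # MTTD: 9.0h, MTTR: 27.0h
```

Trend metrics then follow by computing the same figures per month and comparing the direction over time.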
Metrics must be translated into business language for management: "43 critical vulnerabilities beyond SLA" → "These vulnerabilities could expose customer financial data; regulatory fines of up to $X; remediation requires Y resources over Z weeks."
⚠️ Exam Trap: SAST and DAST test complementary things — neither substitutes for the other. SAST finds injection flaws in source code before deployment; DAST finds authentication bypasses in a running application. SCA finds vulnerabilities in third-party libraries that neither SAST nor DAST would catch if the library's source isn't in scope. A program using only one method has systematic blind spots.
Reflection Question: A development team uses automated DAST scans in their CI/CD pipeline and considers this sufficient for application security testing. A security architect identifies critical gaps. What specific vulnerability categories would DAST-only testing miss, what complementary methods should be added, and at which SDLC stages should each be integrated?