Copyright (c) 2026 MindMesh Academy. All rights reserved. This content is proprietary and may not be reproduced or distributed without permission.

7.2.2. Code Review, SAST, DAST, and Breach Simulations

💡 First Principle: Different testing methods reveal different vulnerability classes. No single method provides comprehensive coverage — the most thorough security assurance programs combine SAST (source code), DAST (running application), SCA (third-party components), and manual review, each targeting different risk categories at appropriate SDLC stages.

Software security testing methods:
| Method | Tests | How | Finds | Misses / Limitations |
|---|---|---|---|---|
| SAST | Source code | Static analysis without execution | Injection patterns, hardcoded secrets, insecure code patterns | Runtime issues; logic flaws needing execution context |
| DAST | Running application | Sends inputs from outside | Input validation failures, auth issues, runtime config errors | Requires running environment; less code coverage than SAST |
| IAST | Running application | Agent inside app observes execution | Combines SAST precision with DAST runtime context | Performance overhead; requires instrumented runtime |
| SCA | Third-party components | Compares component versions to CVE databases | Known vulnerabilities in dependencies (e.g., Log4Shell) | Custom code vulnerabilities; requires maintained component database |
| Manual code review | Source code | Expert human analysis | Complex logic flaws; design issues; business logic | Time-intensive; cognitive fatigue; requires deep expertise |
| Fuzzing | Input handling | Random/malformed input injection | Memory corruption; crashes; input handling bugs | Logic flaws that don't crash |
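To make the SAST row concrete: a static analyzer inspects source text without executing it. The sketch below is a toy pattern scanner for hardcoded secrets; the regexes, labels, and sample code are illustrative assumptions (real SAST tools parse ASTs and data flow, not raw lines).

```python
import re

# Toy SAST check: flag hardcoded-secret patterns via static text analysis.
# Patterns are illustrative, not exhaustive.
SECRET_PATTERNS = [
    (re.compile(r'(?i)(password|passwd|pwd)\s*=\s*["\'][^"\']+["\']'), "hardcoded password"),
    (re.compile(r'(?i)(api[_-]?key|secret)\s*=\s*["\'][^"\']+["\']'), "hardcoded API key/secret"),
    (re.compile(r'AKIA[0-9A-Z]{16}'), "AWS access key ID"),
]

def scan_source(source: str, filename: str = "<input>") -> list[tuple[str, int, str]]:
    """Return (filename, line_number, finding) for each static match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, label in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append((filename, lineno, label))
    return findings

sample = 'db_password = "hunter2"\nkey = load_key_from_vault()\n'
for finding in scan_source(sample, "app.py"):
    print(finding)  # ('app.py', 1, 'hardcoded password')
```

Note what this illustrates about the table: the scan never runs `app.py`, so it can flag the literal password (a static pattern) but could never detect a runtime authentication bypass, which is DAST's territory.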
Shift-left security — cost to fix by SDLC phase:
| Phase | Relative Fix Cost | Testing Activities |
|---|---|---|
| Requirements | 1× | Threat modeling; security requirements definition |
| Design |  | Architecture review; STRIDE analysis |
| Implementation | 10× | SAST; peer code review; SCA |
| Testing | 20× | DAST; IAST; integration security testing |
| Deployment | 50× | Configuration scanning; pentest |
| Operations | 100× | Vulnerability scanning; incident response |

The foundational argument for DevSecOps: a security defect found during requirements costs 1× to fix; the same defect found in production costs 100×.
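The shift-left economics reduce to simple multiplication. A minimal sketch, where the multipliers come from the table above and the $500 baseline is a hypothetical figure:

```python
# Relative fix-cost multipliers by SDLC phase (from the shift-left table;
# the Design multiplier is not given in the source and is omitted here).
FIX_COST_MULTIPLIER = {
    "requirements": 1,
    "implementation": 10,
    "testing": 20,
    "deployment": 50,
    "operations": 100,
}

def fix_cost(baseline_cost: float, phase: str) -> float:
    """Cost to fix a defect found in `phase`, given its requirements-phase cost."""
    return baseline_cost * FIX_COST_MULTIPLIER[phase]

# Hypothetical: a defect costing $500 to fix during requirements...
print(fix_cost(500, "implementation"))  # 5000.0  (caught by SAST/code review)
print(fix_cost(500, "operations"))      # 50000.0 (reached production)
```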

Security metrics categories:
| Category | Example Metrics | Purpose |
|---|---|---|
| Coverage | % assets scanned, % systems with EDR deployed | Program completeness |
| Posture | # critical vulns open beyond SLA, patch compliance % | Current risk state |
| Speed | MTTD (mean time to detect), MTTR (mean time to remediate) | Operational effectiveness |
| Trend | Monthly patch compliance trend, phishing click rate over 12 months | Improvement direction |
| Compliance | % controls passing audit, policy acknowledgment rate | Regulatory readiness |
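The Speed metrics above are simple averages over incident timestamps. A minimal sketch, assuming each incident records when it occurred, was detected, and was remediated (the field names and sample data are illustrative, not a standard schema):

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records with occurrence/detection/remediation times.
incidents = [
    {"occurred": datetime(2026, 1, 3, 9, 0),
     "detected": datetime(2026, 1, 3, 13, 0),    # detected 4 h after occurrence
     "remediated": datetime(2026, 1, 5, 13, 0)}, # remediated 48 h after detection
    {"occurred": datetime(2026, 1, 10, 8, 0),
     "detected": datetime(2026, 1, 10, 10, 0),   # detected 2 h after occurrence
     "remediated": datetime(2026, 1, 11, 10, 0)},# remediated 24 h after detection
]

def mttd_hours(incs) -> float:
    """Mean time to detect: average of (detected - occurred), in hours."""
    return mean((i["detected"] - i["occurred"]).total_seconds() / 3600 for i in incs)

def mttr_hours(incs) -> float:
    """Mean time to remediate: average of (remediated - detected), in hours."""
    return mean((i["remediated"] - i["detected"]).total_seconds() / 3600 for i in incs)

print(f"MTTD: {mttd_hours(incidents):.1f} h")  # (4 + 2) / 2  = 3.0 h
print(f"MTTR: {mttr_hours(incidents):.1f} h")  # (48 + 24) / 2 = 36.0 h
```

Trend metrics then fall out naturally: compute these averages per month and watch the direction of change.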

Metrics must be translated into business language for management: "43 critical vulnerabilities open beyond SLA" → "These vulnerabilities could expose customer financial data, risking regulatory fines of up to $X; remediation requires Y resources over Z weeks."

⚠️ Exam Trap: SAST and DAST test complementary things — neither substitutes for the other. SAST finds injection flaws in source code before deployment; DAST finds authentication bypasses in a running application. SCA finds vulnerabilities in third-party libraries that neither SAST nor DAST would catch if the library's source isn't in scope. A program using only one method has systematic blind spots.

Reflection Question: A development team uses automated DAST scans in their CI/CD pipeline and considers this sufficient for application security testing. A security architect identifies critical gaps. What specific vulnerability categories would DAST-only testing miss, what complementary methods should be added, and at which SDLC stages should each be integrated?

Written by Alvin Varughese, Founder