8.2.2. IDS/IPS and Continuous Monitoring
💡 First Principle: Operational security (OPSEC) is the discipline of protecting sensitive information from adversaries who might use it to gain advantage. Originally a military concept (protecting operational plans), in information security it encompasses the practices that prevent attackers from gathering intelligence that enables more effective attacks — limiting reconnaissance, protecting sensitive operational details, and managing information exposure.
Configuration and change management: Every unauthorized change to a production system is a potential security incident — either an attacker modifying the environment or a well-intentioned change creating an unplanned vulnerability. Configuration management establishes a baseline; change management controls how deviations from baseline are authorized.
- Configuration baseline: The approved, secure configuration for each system type (OS hardening, application settings, network device config). Deviations are detected via configuration management and monitoring tools (e.g., Microsoft SCCM, Chef, Ansible), often measured against hardening standards such as the CIS Benchmarks
- Change control process: Proposed changes are submitted, reviewed for security impact, tested in non-production, approved by Change Advisory Board (CAB), and implemented with rollback procedures
- Emergency change process: Faster path for urgent changes (security patches for actively exploited vulns); still requires documentation and post-implementation review
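The baseline-and-deviation idea above can be sketched in a few lines of Python: compare a host's observed settings against the approved baseline and report every difference. The setting names, values, and dictionary structure are illustrative assumptions, not any particular tool's schema.

```python
# Minimal sketch of configuration-drift detection: compare a host's
# actual settings against the approved baseline and report deviations.
# All setting names and values are hypothetical examples.

BASELINE = {
    "ssh_permit_root_login": "no",
    "password_min_length": "14",
    "firewall_enabled": "yes",
}

def detect_drift(actual: dict) -> list:
    """Return (setting, expected, observed) tuples for every deviation."""
    drift = []
    for setting, expected in BASELINE.items():
        observed = actual.get(setting, "<missing>")
        if observed != expected:
            drift.append((setting, expected, observed))
    return drift

host_config = {
    "ssh_permit_root_login": "yes",   # unauthorized change
    "password_min_length": "14",
    # firewall_enabled absent entirely
}
for setting, expected, observed in detect_drift(host_config):
    print(f"DRIFT: {setting}: expected {expected!r}, found {observed!r}")
```

In practice the "actual" side would come from an agent or scan, and every reported deviation would feed the change management process: either it maps to an approved change or it is investigated as a potential incident.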
Patch management: Unpatched systems are among the most common initial access vectors for ransomware and other attacks. Effective patch management requires:
- Asset inventory (you can't patch what you don't know about)
- Vulnerability scanning to identify missing patches
- Prioritization (critical/actively exploited patches first)
- Testing before production deployment (patches can break things)
- Deployment within defined SLAs (e.g., critical patches within 15 days)
- Verification scanning after deployment
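The prioritization and SLA steps above can be sketched as a deadline calculation: severity sets the baseline window (the 15-day critical window echoes the example above), and confirmed active exploitation collapses it. The SLA table and the two-day emergency window are illustrative policy assumptions, not a standard.

```python
from datetime import date, timedelta

# Hypothetical SLA policy: remediation window (in days) by severity,
# tightened sharply when exploitation is observed in the wild.
SLA_DAYS = {"critical": 15, "high": 30, "medium": 60, "low": 90}

def patch_deadline(released: date, severity: str, actively_exploited: bool) -> date:
    days = SLA_DAYS[severity]
    if actively_exploited:
        days = min(days, 2)   # emergency-change territory
    return released + timedelta(days=days)

def prioritize(findings: list) -> list:
    """Sort scanner findings so the most urgent patches come first."""
    return sorted(
        findings,
        key=lambda f: patch_deadline(f["released"], f["severity"], f["exploited"]),
    )

findings = [
    {"cve": "CVE-0000-0001", "severity": "high", "exploited": False, "released": date(2024, 3, 1)},
    {"cve": "CVE-0000-0002", "severity": "critical", "exploited": True, "released": date(2024, 3, 5)},
]
for f in prioritize(findings):
    print(f["cve"], "due", patch_deadline(f["released"], f["severity"], f["exploited"]))
```

Note the ordering: the actively exploited critical finding sorts first even though it was announced later, which mirrors the "critical/actively exploited patches first" rule above.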
Least privilege and need-to-know in operations:
- Operators should have minimum permissions to perform their assigned tasks
- Production access should be limited; changes deployed through automated pipelines where possible
- Service accounts should have specific, minimal permissions — not domain admin
- Privileged Access Workstations (PAWs) for administrative tasks: dedicated, hardened devices not used for browsing or email
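The least-privilege rules above are auditable: compare what each service account has been granted against what its documented tasks require, and flag the excess. Account names and permission strings below are hypothetical.

```python
# Minimal sketch of a least-privilege audit for service accounts.
# REQUIRED would come from documented task definitions; GRANTED from
# the directory or IAM system. Both are hypothetical examples here.

REQUIRED = {
    "svc-backup": {"db:read", "storage:write"},
    "svc-report": {"db:read"},
}

GRANTED = {
    "svc-backup": {"db:read", "storage:write"},
    "svc-report": {"db:read", "db:write", "admin:*"},   # over-privileged
}

def audit_least_privilege(granted: dict, required: dict) -> dict:
    """Map each account to the permissions it holds but does not need."""
    return {
        account: perms - required.get(account, set())
        for account, perms in granted.items()
        if perms - required.get(account, set())
    }

print(audit_least_privilege(GRANTED, REQUIRED))
```

An account appearing in the output (here, the wildcard admin grant on the reporting account) is exactly the "service account with domain admin" anti-pattern the bullet above warns against.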
Network security monitoring: Continuous monitoring of network traffic for anomalies and known attack signatures. Key data sources: NetFlow (connection metadata — who talked to whom, when, how much data), full packet capture (content — expensive but forensically complete), DNS logs, proxy logs. SIEM correlation of these sources creates detection coverage.
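As a rough illustration of how NetFlow metadata supports detection without packet content, the sketch below aggregates outbound bytes per internal host and flags unusually large transfers. The record fields, the 10.x "internal host" test, and the 500 MB threshold are all assumptions for illustration; real deployments baseline per-host behavior rather than using a fixed cutoff.

```python
from collections import defaultdict

# Flag internal hosts whose total outbound volume exceeds a threshold,
# a crude exfiltration heuristic built purely on flow metadata.
THRESHOLD_BYTES = 500 * 1024 * 1024   # 500 MB, illustrative

flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.9", "bytes": 600_000_000},
    {"src": "10.0.0.7", "dst": "198.51.100.2", "bytes": 40_000},
    {"src": "203.0.113.9", "dst": "10.0.0.5", "bytes": 900_000_000},  # inbound, ignored
]

def flag_exfil_candidates(flow_records: list) -> list:
    totals = defaultdict(int)
    for f in flow_records:
        if f["src"].startswith("10."):   # crude internal-network test
            totals[f["src"]] += f["bytes"]
    return [host for host, total in totals.items() if total > THRESHOLD_BYTES]

print(flag_exfil_candidates(flows))
```

This is the kind of rule a SIEM would express as a correlation search across NetFlow records, enriched with DNS and proxy logs to decide whether the destination is legitimate.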
Data Loss Prevention (DLP) in operations: Operational DLP monitors data movement: email attachments containing credit card numbers, USB drives with large file copies, uploads to personal cloud storage. DLP policies must be tuned — over-blocking creates business disruption; under-blocking allows exfiltration.
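A common DLP building block for the credit-card example above is pattern matching combined with a Luhn checksum, which cuts false positives and is part of the tuning the text describes. This sketch assumes plain-text input; real DLP engines inspect many file formats and channels.

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by
# spaces or hyphens. Deliberately loose; the Luhn check filters noise.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum over the digits of a candidate number."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan_for_pans(text: str) -> list:
    """Return normalized card numbers found in text that pass Luhn."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"\D", "", m.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits

print(scan_for_pans("invoice ref 4111 1111 1111 1111 attached"))
```

Without the Luhn filter, any 13-16 digit string (order numbers, timestamps) would trigger the policy; this illustrates the over-blocking vs. under-blocking trade-off in concrete terms.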
⚠️ Exam Trap: Change management and incident response interact at emergency changes. When a zero-day is actively exploited and a vendor patch is available, the security team may need to invoke the emergency change process to deploy the patch within hours rather than the standard 30-day change cycle. The CAB must understand that the risk of NOT patching (active exploitation) exceeds the risk of deploying an emergency change (potential breaking change). Security incidents can — and often should — drive emergency change process invocations.
Reflection Question: An organization has a 30-day change cycle for all production patches. A critical vulnerability with a CVSS score of 9.8 is announced on Monday morning, with confirmed active exploitation in the wild within 48 hours. The patch is available immediately from the vendor. Walk through how the emergency change process should work in this scenario, who needs to be involved, and what documentation should be completed.