8.2.3. Threat Intelligence and UEBA
💡 First Principle: Reactive monitoring waits for alerts to fire — threat intelligence inverts the model by telling you what to look for before the attack arrives. Indicators of Compromise (IOCs) identify known-bad artifacts (IP addresses, file hashes, domain names), while Tactics, Techniques, and Procedures (TTPs) describe adversary behavior patterns that persist even when IOCs change. IOCs have short shelf lives; TTPs are durable intelligence.
Threat intelligence is produced and consumed at three levels, each matched to a different audience:
| Level | Audience | Content | Example |
|---|---|---|---|
| Strategic | Executives, board | Threat landscape trends; geopolitical risk | "Nation-state actors targeting financial sector increased 40% this year" |
| Operational | Security managers | Campaign details; attacker motivations and timing | "APT group X is actively targeting organizations using VPN product Y" |
| Tactical | SOC analysts, IR teams | IOCs, detection rules, YARA signatures | "Block hash abc123; monitor for C2 traffic to domain evil.example" |
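Tactical intelligence is the level that gets wired directly into tooling. A minimal sketch of what "operationalizing" an IOC feed means in practice (the event format and field names here are invented for illustration; real matching happens inside the SIEM at scale):

```python
# Hypothetical IOC sets, seeded with the examples from the table above.
IOC_HASHES = {"abc123"}          # known-bad file hashes
IOC_DOMAINS = {"evil.example"}   # known C2 domains

def match_iocs(event: dict) -> list[str]:
    """Return the IOC categories a single log event matches."""
    hits = []
    if event.get("file_hash") in IOC_HASHES:
        hits.append("malicious-hash")
    if event.get("dest_domain") in IOC_DOMAINS:
        hits.append("c2-domain")
    return hits

events = [
    {"file_hash": "abc123", "dest_domain": "intranet.local"},
    {"file_hash": "def456", "dest_domain": "evil.example"},
]
for e in events:
    print(match_iocs(e))  # -> ['malicious-hash'], then ['c2-domain']
```

Note the fragility this illustrates: change the hash or domain and the match silently disappears, which is exactly why IOCs have short shelf lives compared to TTPs.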
MITRE ATT&CK framework: The ATT&CK matrix catalogs adversary TTPs organized by tactic (what they are trying to achieve) and technique (how they do it). It serves as a common language between threat intelligence producers and SOC consumers. When a threat intelligence report describes an adversary's playbook using ATT&CK technique IDs (e.g., T1566.001 — Spearphishing Attachment), the SOC can immediately map those techniques to detection rules, evaluate coverage gaps, and prioritize monitoring improvements.
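The "map techniques to detection rules and evaluate coverage gaps" step can be sketched in a few lines. The technique IDs below are real ATT&CK IDs, but the rule inventory is invented for illustration:

```python
# Technique IDs pulled from a hypothetical threat report.
report_techniques = {"T1566.001", "T1059.001", "T1071.001"}

# Assumed internal inventory: ATT&CK technique ID -> active detection rules.
detection_coverage = {
    "T1566.001": ["email-macro-attachment-rule"],  # Spearphishing Attachment
    "T1059.001": [],                               # PowerShell: no rule yet
}

# Techniques the report mentions that have no active detection rule.
gaps = sorted(t for t in report_techniques if not detection_coverage.get(t))
print(gaps)  # -> ['T1059.001', 'T1071.001']
```

The output is a prioritized work list for the SOC: each gap is a technique the adversary is known to use that current monitoring would miss.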
Threat hunting is proactive — analysts form hypotheses about adversary activity and search for evidence in existing telemetry without waiting for an alert. Hunting assumes the adversary is already inside the network and the current detection stack has missed them. Effective hunting requires baseline knowledge of normal behavior (what does legitimate authentication traffic look like?) to identify anomalies that automated systems overlook.
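One hunting pass might look like the following sketch (the log format, account names, and baseline values are all invented): test the hypothesis "a service account is being abused for lateral movement" by counting distinct destination hosts per account and comparing against the known-normal fan-out.

```python
from collections import defaultdict

# Assumed baseline: how many distinct hosts each service account
# normally authenticates to. Establishing this is the "baseline
# knowledge of normal behavior" the hunt depends on.
baseline_hosts = {"svc_backup": 2, "svc_web": 3}

# Synthetic authentication telemetry: (account, destination_host).
auth_events = [
    ("svc_backup", "db01"), ("svc_backup", "db02"),
    ("svc_web", "web01"), ("svc_web", "web02"), ("svc_web", "web03"),
    ("svc_web", "dc01"), ("svc_web", "fileshare"), ("svc_web", "hr-db"),
]

seen = defaultdict(set)
for account, host in auth_events:
    seen[account].add(host)

# Flag accounts whose fan-out exceeds their baseline.
suspects = [a for a, hosts in seen.items()
            if len(hosts) > baseline_hosts.get(a, 0)]
print(suspects)  # -> ['svc_web']
```

No alert fired here; the analyst went looking with a hypothesis and a baseline, which is the defining difference between hunting and reactive monitoring.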
User and Entity Behavior Analytics (UEBA):
UEBA establishes behavioral baselines for users and devices, then applies machine learning to detect deviations that may indicate compromise or insider threat:
| Scenario | Normal Baseline | Anomaly Detected |
|---|---|---|
| Data exfiltration | User downloads 50 MB/day | User downloads 15 GB in 2 hours |
| Compromised account | User logs in from Louisville, KY during business hours | Same user authenticates from Eastern Europe at 3 AM |
| Privilege abuse | Admin accesses 3 sensitive databases per week | Admin queries 40 databases in one evening |
| Lateral movement | Service account authenticates to 2 servers | Service account attempts authentication to 200 servers in 5 minutes |
UEBA reduces alert fatigue by replacing static threshold rules ("alert on >100 failed logins") with context-aware behavioral models that adapt to each user's normal patterns. The tradeoff: UEBA requires a training period to establish baselines, and it generates false positives when users legitimately change behavior (travel, new project assignments, role changes).
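The core UEBA idea — an adaptive per-user baseline instead of a static threshold — can be illustrated with a toy statistical model (synthetic numbers; production UEBA uses far richer features and machine learning, not a single z-score):

```python
import statistics

# Assumed per-user history: daily download volume in MB (~50 MB/day baseline,
# matching the data-exfiltration row in the table above).
history_mb = [48, 52, 50, 47, 55, 49, 51]
mean = statistics.mean(history_mb)
stdev = statistics.stdev(history_mb)

def is_anomalous(observed_mb: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations
    from this user's own baseline."""
    return abs(observed_mb - mean) / stdev > threshold

print(is_anomalous(55))      # within this user's normal range -> False
print(is_anomalous(15_000))  # a 15 GB burst -> True
```

Because the baseline is computed per user, a 15 GB day is anomalous for this user but might be routine for a video editor — the context-awareness that static thresholds lack. It also shows the stated tradeoff: the model is only as good as the training window, and a legitimate behavior change would trip it.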
⚠️ Exam Trap: Threat intelligence feeds are only valuable if they are operationalized — integrated into SIEM correlation rules, firewall block lists, and hunting hypotheses. An organization that subscribes to premium threat feeds but never configures automated blocking or detection rules has purchased expensive shelf-ware. The exam tests whether you understand the operational integration, not just the concept.
Reflection Question: A threat intelligence report indicates that a nation-state APT group is actively targeting organizations in your industry using spearphishing emails with macro-enabled Excel attachments that establish C2 communications via HTTPS to cloud hosting providers. Describe three specific detection rules you would create in your SIEM, one proactive hunting hypothesis you would investigate, and one preventive control you would implement immediately.