The most surprising thing about Splunk Enterprise Security’s notable events is that they aren’t actually events, but rather aggregations of events that have been deemed significant by correlation searches.
Let’s see this in action. Imagine a correlation search designed to catch brute-force login attempts. It might look something like this (simplified):
index=linux sourcetype=linux_secure action=failure
| stats count by user, src_ip
| where count > 10
| eval severity=5
| eval description="Multiple failed login attempts detected for user from source IP."
| fields user, src_ip, severity, description
When this search runs and finds a user and src_ip combination with more than 10 failed login events within the search window, it doesn’t just return those raw events. Instead, it generates a single "Notable Event." This notable event will contain the user, src_ip, severity (set to 5 in this case), and the description. This aggregation is what you see in the "Incident Review" dashboard in Enterprise Security.
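Under the hood, a correlation search is simply a scheduled saved search with the notable-event alert action enabled. A minimal savedsearches.conf sketch might look like the following (the stanza name, schedule, and parameter values here are illustrative, and exact parameter names can vary by Enterprise Security version):

[Brute Force Login Detected - Rule]
search = index=linux sourcetype=linux_secure action=failure | stats count by user, src_ip | where count > 10
cron_schedule = */15 * * * *
action.correlationsearch.enabled = 1
action.correlationsearch.label = Brute Force Login Detected
action.notable = 1
action.notable.param.severity = high
action.notable.param.rule_title = Brute-force attempt by $user$ from $src_ip$

Note that the notable-event action carries its own severity and title parameters, and field tokens like $user$ are filled in from the search results when the notable event is created.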
Here’s the mental model:
- Data Ingestion: Splunk collects logs from various sources (servers, firewalls, applications).
- Correlation Searches: Enterprise Security comes with pre-built correlation searches, and you can write your own. These searches run periodically against your indexed data. Their purpose is to identify patterns that indicate a potential security incident.
- Notable Event Generation: When a correlation search’s conditions are met, it doesn’t just show you the raw data. It creates a "Notable Event." This is a structured record that summarizes the finding. It includes fields like severity, description, owner, status, and, crucially, a link back to the original events that triggered it.
- Incident Review: The "Incident Review" dashboard is the central hub for triaging these notable events. It’s where security analysts go to see what’s happening. You can filter, sort, assign ownership, change status (e.g., "New," "In Progress," "Closed"), and add comments.
- Investigation: From the Incident Review, you can drill down into a notable event. This takes you to a search that shows you the original events that were aggregated to create that notable event. This is where the real forensic work happens – examining the context, timestamps, and details of the individual events.
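Notable events themselves live in a dedicated index, conventionally searched through the `notable` macro, which enriches the raw records with triage fields. A hedged sketch of inspecting them directly (the rule name and some field names here are illustrative):

`notable`
| search rule_name="Brute Force Login Detected"
| table _time, rule_name, severity, owner, status

Searching the notable index directly like this is useful for reporting on triage metrics (e.g., how many events each analyst closed) rather than for day-to-day investigation, which normally happens in Incident Review.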
The key levers you control are:
- Correlation Searches: This is your primary tool. You tune existing searches or create new ones to detect the specific threats relevant to your environment. You adjust thresholds, logic, and the fields they extract.
- Severity and Risk Scoring: You assign severity levels (e.g., 1-10) to notable events, which helps prioritize investigations. Enterprise Security also has a risk-based alerting framework that can dynamically adjust risk scores based on multiple detections.
- Triage Workflow: You define how your team interacts with notable events. This includes setting up assignment rules, escalation policies, and defining what "closed" means for different types of incidents.
- Data Model Acceleration: For correlation searches to run efficiently, especially on large datasets, ensuring your data models are accelerated is critical. This pre-processes data for faster querying.
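To make the levers above concrete, here is what the earlier brute-force search could look like rewritten against an accelerated data model, assuming the CIM Authentication data model is populated and accelerated in your environment:

| tstats summariesonly=true count from datamodel=Authentication where Authentication.action="failure" by Authentication.user, Authentication.src
| rename Authentication.user as user, Authentication.src as src_ip
| where count > 10

Because tstats reads from the pre-built acceleration summaries rather than scanning raw events, this version scales far better on large datasets; summariesonly=true restricts it to accelerated data only, which keeps it fast but can miss events not yet summarized.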
When you click "View Events" on a notable event in Incident Review, Splunk runs a drill-down search behind the scenes that filters your indexed data on the criteria from the correlation search that generated the notable event. It isn’t showing you a static snapshot; it re-queries your data to provide the most up-to-date context for the events that matched.
The next step after mastering triage is often automating response actions based on the severity and type of notable event.