Issue with Splunk ITSI Maintenance Window Not Suppressing Alerts
"Hello Team,
I have created a Maintenance Window in Splunk ITSI to suppress alerts from certain correlation searches. However, despite the Maintenance Window being active, we are still receiving alerts. What could be the possible reasons for this, and how can we troubleshoot and resolve the issue?"

@SHEBHADAYANA - Can you please share the details of what exactly you configured, and which alert you received?
The issue is:
We are creating a maintenance window based on entities (servers), but we are still receiving incidents (e.g. Server Reboot, Server Stopped) during the activity window. For the entities added to the maintenance window, we do not want the action rules defined in the NEAP for the correlation searches (such as ServiceNow incidents or email notifications) to fire during the window's defined time frame. Can you please help us with this issue? Attaching snapshots for your reference.
Regards,
Hello, thanks for the reply.
The correlation search is:
index=pg_idx_windows_data source=XmlWinEventLog:System sourcetype=XmlWinEventLog Name="Microsoft-Windows-Kernel-Boot"
| join host [ search index=pg_idx_windows_data source=operatingsystem sourcetype=WinHostMon]
| eval Server = upper(host)
| join Server
[ inputlookup pg_ld_production_servers
| rename Site AS Plant
| fields Plant Server SNOW_Location_Name Disable_Alert facility_name site_id wave snow_business_service snow_service_offering SNOW_assignment_group]
| search Disable_Alert = 0
| fields - Disable_Alert
| dedup host
| eval Reboot_Time_EST = strftime(_time, "%Y-%m-%d %I:%M:%S:%p")
| eval Reboot_Site_Time = substr(LastBootUpTime,1,4) + "-" + substr(LastBootUpTime,5,2) + "-" + substr(LastBootUpTime,7,2) + " " + substr(LastBootUpTime,9,2) + ":" + substr(LastBootUpTime,11,2) + ":" + substr(LastBootUpTime,13,2)
| table Plant Server Type Reboot_Time_EST Reboot_Site_Time SNOW_Location_Name site_id facility_name wave snow_business_service snow_service_offering SNOW_assignment_group
| sort Plant Server
| eval itsiSeverity = 5
| eval itsiStatus = 2
| eval itsiTower = "MFG"
| eval itsiAlert = "Proficy Server Reboot Alert in last 15 minutes"
| rename SNOW_Location_Name AS Location
| eval n=now()
| eval url_start_time = n - (1 * 24 * 3600)
| eval url_end_time = n + (1 * 24 * 3600)
| eval episode_url1 = "https://itsi-pg-mfg-splunk-prod.splunkcloud.com/en-US/app/itsi/itsi_event_management?earliest=".url_start_time."&latest=".url_end_time."&dedup=true&filter="
| eval episode_url1=episode_url1."%5B%7B%22label%22%3A%22Episode%20Id%22%2C%22id%22%3A%22itsi_group_id%22%2C%22value%22%3A"
| eval episode_url2="%2C%22text%22%3A"
| eval episode_url3="%7D%5D"
| fields - n url_start_time url_end_time
``` ELK fields```
| eval alert_name = "PG-GLOBAL-Proficy-Server-Reboot-ALERT"
| eval facility_type = "Site"
| eval facility_area = "Manufacturing"
| eval snow_location = Location
| eval application_name = "Proficy Plant Applications"
| eval application_id = "CI000008099"
| eval name_space = "Manufacturing"
| eval snow_configuration_item = "Proficy Plant Applications"
| eval snow_incident_type = "Design: Capacity Overutilization"
| eval snow_category = "Business Application & Databases"
| eval snow_subcategory = "Monitoring"
| eval snow_is_cbp_impacted = "Yes"
| eval alert_severity = "High"
| eval alert_urgency = "High"
| eval snow_severity = "1"
| eval snow_urgency = "2"
| eval snow_impact = "2"
| eval primary_property = "hostname"
| eval secondary_property = "alert_name"
| eval source_system = "splunk"
| eval stage = "Prod"
| eval snow_contact_type = "Auto Ticket"
| eval hostname = Server
| eval app_component = ""
| eval app_component_ID = ""
| eval status = "firing"
| eval correlation_rule = "application_id, site_id, facility_name, hostname, infrastructure_type"
| eval actionability_type = "incident"
| eval alert_actionable = "true"
| eval uc_environment = "sandbox"
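If the intent is for the search itself to emit nothing for hosts under maintenance, one common pattern is to filter them out at the end of the search against a lookup of in-maintenance hosts. This is only a sketch: `maintenance_hosts.csv` is a hypothetical lookup you would have to populate (e.g. from your maintenance schedule), and the `Server` field name assumes the upper-cased host field used above:

```
``` drop results for any Server listed in the (hypothetical) maintenance lookup ```
| search NOT [ | inputlookup maintenance_hosts.csv | fields Server ]
```

The subsearch expands to `NOT (Server=A OR Server=B ...)`, so a host in the lookup never reaches the notable-event pipeline in the first place, regardless of NEAP rules.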
Did you implement this?
In the Advanced Options, configure the throttling settings and select the duration (in seconds) for which to suppress alerts. Throttling prevents the correlation search from generating duplicate notable events or alerts for the same issue every time it runs.
- If you apply grouping to one or more fields, throttling will be enforced on each unique combination of field values.
- For example, setting throttling by host once per day ensures that only one notable event of this type is generated per server per day.
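For reference, the UI throttling settings above correspond to standard saved-search settings in `savedsearches.conf` (a sketch; the stanza name below is the example alert name from the search above, so substitute your correlation search's actual saved-search name):

```
[PG-GLOBAL-Proficy-Server-Reboot-ALERT]
# enable throttling (the "Suppress" option in Advanced Options)
alert.suppress = 1
# suppress repeat alerts for 24 hours
alert.suppress.period = 24h
# throttle per unique host, i.e. one alert per server per day
alert.suppress.fields = host
```

Note that throttling only de-duplicates repeat alerts; it does not suppress the first alert for a host during a maintenance window, so it complements rather than replaces maintenance-window handling.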
How can we suppress notable events in Splunk ITSI?
https://community.splunk.com/t5/Splunk-ITSI/How-to-suppress-Notable-Events-in-ITSI/m-p/610503
