Monitoring Splunk

How can I find problematic searches (scheduled, real-time, etc.) that affect performance in our shared Splunk environment?


As our shared Splunk environment matures, we're trying to build in some checks to make sure everyone is being a good citizen and not running searches that heavily impact others.

On my checklist are the following:

  • Realtime saved searches
  • Saved searches with short schedules (every minute)
  • Saved searches over very large time ranges
  • Saved searches that take a very long time to execute
  • (anything else that should be here?)

I've been able to identify most of these by doing a recursive grep through the etc directory on the search head, looking for specific entries in savedsearches.conf. However, the process is somewhat clunky, and I have a feeling this data is already somewhere in Splunk; I just don't know where.
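For what it's worth, the recursive grep can be replaced by a small script that parses each savedsearches.conf with Python's configparser and flags stanzas matching the checklist above. This is only a sketch: the rule names and the specific heuristics (a real-time `dispatch.earliest_time`, an every-minute cron, an all-time range) are illustrative choices, not anything Splunk defines, and real conf files may have layering (default vs. local) that this does not resolve.

```python
import configparser
from pathlib import Path

def flag_stanza(settings):
    """Return the checklist rules a saved-search stanza appears to violate.
    Rule names and thresholds are illustrative, not Splunk-defined."""
    problems = []
    earliest = settings.get("dispatch.earliest_time", "")
    cron = settings.get("cron_schedule", "")
    if earliest.startswith("rt"):                 # real-time search window, e.g. rt-5m
        problems.append("realtime")
    if (cron.split() or [""])[0] in ("*", "*/1"):  # minute field fires every minute
        problems.append("every-minute schedule")
    if earliest == "0":                            # all-time search
        problems.append("all-time range")
    return problems

def scan_etc(etc_dir):
    """Walk an etc directory, parse every savedsearches.conf, and return
    (file, stanza, problems) for each stanza that trips a rule."""
    flagged = []
    for conf_path in Path(etc_dir).rglob("savedsearches.conf"):
        # Splunk conf files are INI-like; disable interpolation so literal
        # '%' characters in search strings don't break parsing.
        parser = configparser.ConfigParser(interpolation=None, strict=False)
        parser.read(conf_path)
        for stanza in parser.sections():
            problems = flag_stanza(dict(parser[stanza]))
            if problems:
                flagged.append((str(conf_path), stanza, problems))
    return flagged
```

This won't catch searches that are merely slow (that needs runtime data, e.g. from the _audit or _internal indexes), but it covers the statically detectable items on the list.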

I don't want to start imposing restrictions at the role level (yet), but at the very least I'd like to set up an alert to myself and the other Splunk admins notifying us when a user saves or schedules a possibly problematic search.
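One way to get "notify on newly saved problematic searches" without role restrictions is to run the scan periodically and diff the results against the previous run, alerting only on new entries. A minimal sketch of that diff step, assuming the flagged searches arrive as a stanza-to-problems mapping and using a hypothetical JSON state file to remember what was already reported:

```python
import json
from pathlib import Path

def new_problems(current_flags, state_file):
    """Compare currently flagged searches against the last run's state and
    return only the newly flagged ones, so an admin alert fires once per
    search instead of on every scan. The state-file format is illustrative."""
    known = set()
    if state_file.exists():
        known = set(json.loads(state_file.read_text()))
    current = {
        f"{stanza}: {', '.join(problems)}"
        for stanza, problems in current_flags.items()
    }
    state_file.write_text(json.dumps(sorted(current)))  # persist for next run
    return sorted(current - known)
```

The returned list could then be emailed to the admin group from cron, or the same diff-against-baseline idea could be done entirely inside Splunk with a scheduled search over its own configuration data.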


I have been using the Splunk On Splunk app for finding problematic searches, among other things, and it's been working great.
