Monitoring Splunk

How can we find problematic searches (scheduled, real-time, etc.) that are affecting performance in our shared Splunk environment?


As our shared Splunk environment matures, we're trying to build in some checks to make sure everyone is being a good citizen and not running searches that have a large impact on others.

On my checklist are the following:

  • Realtime saved searches
  • Saved searches with short schedules (every minute)
  • Saved searches over very large time ranges
  • Saved searches that take a very long time to execute
  • (anything else that should be here?)
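Rather than grepping conf files, the scheduler's inventory of saved searches is available over REST from the search head itself. A sketch along these lines could surface most of the checklist in one place (field names are from the saved/searches REST endpoint; verify the exact fields against your Splunk version):

```
| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1
| table title eai:acl.app eai:acl.owner cron_schedule dispatch.earliest_time dispatch.latest_time
| search cron_schedule="* * * * *" OR dispatch.earliest_time="rt*" OR dispatch.earliest_time=0
```

The final clause flags every-minute schedules, real-time time ranges (`rt*`), and all-time searches (`earliest_time=0`); adjust the filters to match whatever you consider problematic.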

I've been able to identify most of these by doing a recursive grep through the etc directory on the search head, looking for specific entries in savedsearches.conf. However, the process is somewhat clunky, and I have a feeling this data is already in Splunk somewhere; I just don't know where to look.
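If you do stick with scanning the conf files, a small script is less clunky than a recursive grep. A minimal sketch, assuming savedsearches.conf files live under `$SPLUNK_HOME/etc/apps/<app>/{local,default}/` and parse cleanly as INI (Splunk's conf format is close enough to INI for this purpose, though stanzas without a section header would need extra handling):

```python
import configparser
import glob
import os

def flag_searches(etc_dir):
    """Walk each app's savedsearches.conf and flag potentially
    problematic saved searches. Thresholds are assumptions;
    tune them for your environment."""
    flagged = []
    pattern = os.path.join(etc_dir, "apps", "*", "*", "savedsearches.conf")
    for path in glob.glob(pattern):
        cp = configparser.ConfigParser(interpolation=None, strict=False)
        cp.read(path)
        for name in cp.sections():
            stanza = cp[name]
            reasons = []
            # Real-time searches use an rt-prefixed earliest time.
            if stanza.get("dispatch.earliest_time", "").startswith("rt"):
                reasons.append("real-time")
            # A cron schedule starting with "* " fires every minute.
            if stanza.get("cron_schedule", "").startswith("* "):
                reasons.append("runs every minute")
            # earliest_time = 0 means an all-time search.
            if stanza.get("dispatch.earliest_time", "") == "0":
                reasons.append("all-time range")
            if reasons:
                flagged.append((path, name, reasons))
    return flagged

# Example: flag_searches("/opt/splunk/etc")
```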

I don't want to start imposing restrictions at the role level (yet), but at the very least I'd like to be able to set up an alert to myself and the other Splunk admins, notifying us when a user saves or schedules a possibly problematic search.
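For the long-running-search case, the _audit index already records search completions, so a scheduled alert along these lines could notify the admins (a sketch assuming default audit logging; `total_run_time` is in seconds, and the 600-second threshold is an arbitrary example):

```
index=_audit action=search info=completed total_run_time>600
| stats count max(total_run_time) AS max_runtime BY user savedsearch_name
| where isnotnull(savedsearch_name) AND savedsearch_name!=""
```

Dropping the `savedsearch_name` filter would also catch long-running ad hoc searches, not just scheduled ones.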


I have been using the Splunk On Splunk app for finding problematic searches, and it's been working great.
