Splunk Search

List future or historical run times of scheduled searches

mikaelbje
Motivator

I'm trying to create a timeline, using the Timeline custom visualization, of the future or historical runs of saved searches, in order to get an overview of when the searches run as well as their duration. This will help me spread out search run times to avoid multiple searches executing at the same time, and it will help our users understand when their searches will run.

The historical part is easy to set up using this search:


index=_audit savedsearch_name!="" info=completed | eval duration=total_run_time | stats count by _time, savedsearch_name, user, duration | table _time, savedsearch_name, user, duration

However, I'd also like to parse the cron schedule of scheduled searches. I can get the next scheduled time of each search using this search:

| rest /servicesNS/-/-/saved/searches | search is_scheduled=1 | table title cron_schedule next_scheduled_time eai:acl.owner actions eai:acl.app action.email action.email.to dispatch.earliest_time dispatch.latest_time search *
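To chart those times, one option is to build _time from next_scheduled_time yourself. A minimal sketch, assuming next_scheduled_time renders as a "%Y-%m-%d %H:%M:%S %Z" string on your instance (check yours, as the format may differ):

| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1
| eval _time=strptime(next_scheduled_time, "%Y-%m-%d %H:%M:%S %Z")
| table _time title cron_schedule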

But I'm looking for a way to generate future run times for all searches, not just the next run time. I found a Perl script (https://github.com/waldner/croncal/blob/master/croncal.pl) that lets me do this by passing it a crontab, or something resembling one, where each line pairs a cron expression with a search name.
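The file used isn't shown in the post; a hypothetical crons.txt, reconstructed to be consistent with the output below, might look like this:

0 * * * * Broken Hosts Sanity Check
* * * * * Stlment - new TXN - med feil
*/5 * * * * XX Payment Errors - BETA
0 * * * * IS Alerts
0 * * * * JMSServiceGateway feiler
5 * * * * si-msexchange-internet-mail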

The output for a ten-minute window is:

bash-3.2$ cat crons.txt | perl croncal.pl -s '2017-12-15 00:00' -e '2017-12-15 00:10';
2017-12-15 00:00|Broken Hosts Sanity Check
2017-12-15 00:00|Stlment - new TXN - med feil
2017-12-15 00:00|XX Payment Errors - BETA
2017-12-15 00:00|IS Alerts
2017-12-15 00:00|JMSServiceGateway feiler
2017-12-15 00:01|Stlment - new TXN - med feil
2017-12-15 00:02|Stlment - new TXN - med feil
2017-12-15 00:03|Stlment - new TXN - med feil
2017-12-15 00:04|Stlment - new TXN - med feil
2017-12-15 00:05|Stlment - new TXN - med feil
2017-12-15 00:05|XX Payment Errors - BETA
2017-12-15 00:05|si-msexchange-internet-mail
2017-12-15 00:06|Stlment - new TXN - med feil
2017-12-15 00:07|Stlment - new TXN - med feil
2017-12-15 00:08|Stlment - new TXN - med feil
2017-12-15 00:09|Stlment - new TXN - med feil

but I haven't been able to modify it for use as an external command in Splunk. Perhaps Splunk already has built-in functionality to set a start time and an end time and get future run times as events within that interval.

Any hints? I'm sure this will prove useful to others as well for capacity planning and distribution.

1 Solution

sideview
SplunkTrust

The following SPL forecasts, for each scheduled search, when all of its future scheduled runs will occur. Right now you'll notice a @d and +1d@d deep inside it, making it focus on the current day, but how it interacts with your search time range is perhaps a little odd. Read on.

At the end it creates a _time field using these future times, so you can pipe the result into timechart and thus look into the future a bit.

It goes a lot further though: it also looks back in time at the past performance of the given alert (if there is any), and then uses statistics from that to predict how crappy each future run is going to be.

This is really kind of a base search. It puts a great many tools and field values in your hands, more than you would ever need for any single report. By which I mean you can tack various, quite different timechart or chart commands onto the end of this base search to do quite a lot:

-- visualize the forecasted impact of upcoming runs, over time or split by anything;
-- visualize the aggregate impact of prior runs;
-- possibly break down the entire current day, midnight to midnight, with the left side being actual and the right side being forecasted.

In case this needed another disclaimer, it is a work in progress.

index=_audit search_id TERM(action=search) (info=granted OR info=completed)
| transaction search_id startswith=(info=granted) endswith=(info=completed)
| where isnotnull(savedsearch_name) AND savedsearch_name!=""
| table _time total_run_time event_count scan_count user savedsearch_name search
| append [
  | rest /servicesNS/-/-/saved/searches earliest_time=@d latest_time=+1d@d
  | table title eai:acl.app cron_schedule scheduled_times
  | rename title as savedsearch_name eai:acl.app as app
  | fields savedsearch_name app scheduled_times
]
| fields savedsearch_name app total_run_time event_count scan_count _time search user scheduled_times
| stats count as runs sum(total_run_time) as total_run_time sum(scan_count) as total_scan_count sum(event_count) as total_event_count values(*) as * by savedsearch_name
| eval expected_lispy_efficiency=total_event_count/total_scan_count
| eval expected_run_time=total_run_time/runs
| fields - total_run_time event_count scan_count
| mvexpand scheduled_times
| eval _time=scheduled_times

As a simple suggestion for what to play around with first, tack one of these onto the end:

| timechart count by app

or perhaps

| timechart sum(total_scan_count) by app
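Or, as a sketch of forecasting scheduler load rather than run counts (the 5-minute span is arbitrary and forecast_run_seconds is just an illustrative name; expected_run_time is the per-run average computed in the base search above):

| timechart span=5m sum(expected_run_time) as forecast_run_seconds by app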

Good luck.

H/T to @martin_mueller for some great improvements to parts of it, and quite probably to other Splunk Trust folks as well, as we were discussing this sort of search a while back. (Quite possibly some free tool will come out of this work in the not-too-distant future. Any and all feedback will be greatly appreciated.)

mikaelbje
Motivator

I have to say that you rock, @sideview! This is really cool! I had no idea there was a built-in scheduled_times field!

The Timeline visualization (or rather my browser) has some trouble with the amount of data passed to it, about 10,000 rows. I played a bit with bucket and different time spans, but that obviously changes the resolution. Any other ideas on how to visualize this would be appreciated.
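For reference, bucketing trades resolution for volume by thinning the data before it reaches the visualization, e.g. appended to the base search (a sketch; the 5-minute span and the scheduled_runs name are arbitrary):

| bucket _time span=5m
| stats count as scheduled_runs by _time savedsearch_name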

Listing the rows as a table as you suggested is very neat. It's great for capacity and downtime planning.

Keep up the good work, Nick!


somesoni2
Revered Legend

You can create a custom search command in Python (see this or this) that takes the cron schedule of each saved search (probably fetched from | rest /servicesNS/-/-/saved/searches) and returns a few future execution times. See the following links for how to parse cron expressions in Python:
https://github.com/josiahcarlson/parse-crontab
https://stackoverflow.com/questions/7390170/parsing-crontab-style-lines
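If you went that route, the wiring might look something like this, where cronexpand is a hypothetical name for the custom command you would implement yourself (it does not ship with Splunk):

| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1
| table title cron_schedule
| cronexpand start="2017-12-15 00:00" end="2017-12-16 00:00" ``` cronexpand is hypothetical ```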


mikaelbje
Motivator

Thanks. What sideview showed actually removed the need to pass it to a custom command, since Splunk can already output the next scheduled times! 🙂
