I am looking for an example of dispatching a saved search job with custom latest and earliest boundaries.
A bit of history: my Python program finds a saved search by its name and creates a job from it via the .dispatch() method [1].
The .dispatch() method supports two ways of passing parameters: args.* and dispatch.*.
It seems that args.* would require modifying the saved search query itself.
The following dispatch.* parameters, however, look promising:
- dispatch.latest_time
- dispatch.earliest_time
- dispatch.time_format
Does anybody use these in their Python programs? A rough, untested sketch of what I have in mind is below.
[1] http://dev.splunk.com/view/python-sdk/SP-CAAAEK2#runsaved
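Roughly, this is what I am trying to do (untested sketch; the connection details, saved search name, and epoch values are placeholders):

import splunklib.client as client

# Untested sketch: dispatch a saved search with custom time boundaries.
service = client.connect(host='your_host_ip', username='your_username', password='your_password')
saved_search = service.saved_searches['YOUR_SEARCH_NAME']
job = saved_search.dispatch(**{
    'dispatch.earliest_time': '1560945600',  # epoch seconds
    'dispatch.latest_time': '1561032000',
    'dispatch.time_format': '%s',
})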
Below is the approach I used to dispatch/run a saved search job by its name with custom latest and earliest boundaries.
No changes are required to the saved search itself (query, configuration, etc.).
In the example below, the date-time boundaries are epoch timestamps, so the time format is set to "%s":
import time

import splunklib.client as client
import splunklib.results as results


def _run_job(job: client.Job):
    # small delay to let the server and client sync up
    time.sleep(2)

    # Wait for the job to finish: poll for completion
    is_done = False
    while not is_done:
        job.refresh()
        time.sleep(10.0)
        is_done = job.is_done()

    output = list()
    rr = results.ResultsReader(job.results())
    for result in rr:
        if isinstance(result, results.Message):
            # Diagnostic messages may be returned in the results
            print('Diagnostic message {0}: {1}'.format(result.type, result.message))
        elif isinstance(result, dict):
            # Normal events are returned as dicts
            output.append(result)
    return output


def get(name):
    connection_kwargs = {
        'host': 'your_host_ip',
        'username': 'your_username',
        'password': 'your_password',
    }
    service = client.connect(**connection_kwargs)
    # Look up the saved search within the given app namespace
    return service.saved_searches[name, client.namespace(app='YOUR_APP_NAMESPACE')]


def run(name, **kwargs):
    saved_search = get(name)
    job = saved_search.dispatch(**kwargs)
    print('Dispatched Splunk Search Job <{0}> with params {1}'.format(name, kwargs))
    return _run_job(job)


def main():
    # Example boundaries: the last 24 hours, expressed as epoch seconds
    end_epoch = int(time.time())
    start_epoch = end_epoch - 24 * 60 * 60
    kwargs = {
        'dispatch.latest_time': end_epoch,
        'dispatch.earliest_time': start_epoch,
        'dispatch.time_format': '%s',
    }
    result = run('YOUR_SEARCH_NAME', **kwargs)
    print('Retrieved {0} results'.format(len(result)))


if __name__ == '__main__':
    main()
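The same approach should also work with formatted date-time strings instead of epoch seconds, as long as dispatch.time_format describes their layout (strptime-style). A variation of main() from the snippet above, which I have not verified:

def main():
    # Unverified variation: boundaries as formatted date-time strings;
    # dispatch.time_format must match their layout.
    kwargs = {
        'dispatch.earliest_time': '2019-06-19T12:00:00',
        'dispatch.latest_time': '2019-06-20T12:00:00',
        'dispatch.time_format': '%Y-%m-%dT%H:%M:%S',
    }
    result = run('YOUR_SEARCH_NAME', **kwargs)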

Hi @mushkevych,
You're on the right page. Simply use this link if you want to modify earliest and latest:
http://dev.splunk.com/view/python-sdk/SP-CAAAEK2#viewpropssaved
The code is already on the page. This is the snippet you need:
# Retrieve the saved search
mysavedsearch = service.saved_searches["Test Search"]

# Specify a description for the search
# Enable the saved search to run on schedule
# Run the search on Saturdays at 4:15am
# Search everything in a 24-hour time range starting June 19, 12:00pm
kwargs = {"description": "This is a test search",
          "is_scheduled": True,
          "cron_schedule": "15 4 * * 6",
          "earliest_time": "2014-06-19T12:00:00.000-07:00",
          "latest_time": "2014-06-20T12:00:00.000-07:00"}

# Update the server and refresh the local copy of the object
mysavedsearch.update(**kwargs).refresh()

# Print the properties of the saved search
print("Description: ", mysavedsearch["description"])
print("Is scheduled: ", mysavedsearch["is_scheduled"])
print("Cron schedule: ", mysavedsearch["cron_schedule"])
print("Next scheduled time: ", mysavedsearch["next_scheduled_time"])
Cheers,
David
@DavidHourani
Thank you for the reply.
My main concern with this approach is that it updates the server-side instance of the saved search.
I would like to keep the saved search on the server side "as-is" and call it with custom parameters.
Moreover, I am now wondering whether Splunk supports running multiple concurrent instances of a saved search with the same name.
Dan

Hi @mushkevych, in that case your question is easier than I thought. If you don't want to modify the saved search, you can use the savedsearch command: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Savedsearch
Note:
- If you specify All Time in the time range picker, the savedsearch command uses the time range that was saved with the saved search.
- If you specify any other time in the time range picker, the time range that you specify overrides the time range that was saved with the saved search.
So you simply have to call the savedsearch command from your script, and that will allow you to use your existing search. Does that answer your question?
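For example, something along these lines (an untested sketch; the connection details and search name are placeholders, and the -24h@h/now bounds are just an illustration) runs the saved search from Python without touching its definition:

import splunklib.client as client
import splunklib.results as results

service = client.connect(host='your_host_ip', username='your_username', password='your_password')

# Run the saved search via the | savedsearch SPL command; per the note above,
# a time range specified on the job overrides the one saved with the search.
job = service.jobs.create(
    '| savedsearch "YOUR_SEARCH_NAME"',
    earliest_time='-24h@h',
    latest_time='now',
    exec_mode='blocking',  # wait for the job to complete
)

for result in results.ResultsReader(job.results()):
    if isinstance(result, dict):
        print(result)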
