
REST API Python - Issue with pulling results before search job is done.

sd248011
New Member

I wrote a script in Python to run a search query and return the results. The code to send the search query is:

sid1 = httplib2.Http(disable_ssl_certificate_validation=True).request(
    baseurl + '/services/search/jobs', 'POST',
    headers={'Authorization': 'Splunk %s' % sessionKey},
    body=urllib.urlencode({'search': searchQuery1}))[1]

The code to return the results is:

response1 = httplib2.Http(disable_ssl_certificate_validation=True).request(
    baseurl + '/services/search/jobs/' + slicesid1 + '/results?count=0', 'GET',
    headers={'Authorization': 'Splunk %s' % sessionKey},
    body=urllib.urlencode({'search': searchQuery1}))[1]

The issue is that if the results request is made before the search job has finished, no results are returned. I have fiddled with a sleep time, which lets me get results back, but that isn't very efficient since the sleep can end up too long or too short.

I know there is a field called dispatchState that is RUNNING while the search is executing and changes to DONE when it completes. How can I add some code after the initial search request that continuously checks whether the job is RUNNING or DONE, and then runs the results query once it is DONE?

1 Solution

Neeraj_Luthra
Splunk Employee

You can write a while loop that keeps refreshing the job and checking its isDone property, and exit the loop when the value changes to 1.

Check out the Python SDK and the code sample for this while loop at http://dev.splunk.com/view/SP-CAAAEE5#normaljob.
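
In case it helps, here is a rough sketch of that pattern with splunklib; the connection details and search string below are placeholders, not values from this thread:

import time
import splunklib.client as client
import splunklib.results as results

# Placeholder connection settings - replace with your own splunkd host/credentials.
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# Create a normal (asynchronous) search job.
job = service.jobs.create("search index=_internal | head 100")

# Keep refreshing the job until splunkd reports it as done.
while True:
    job.refresh()
    if job.is_done():
        break
    time.sleep(2)

# count=0 asks for all rows instead of the default limit.
for result in results.ResultsReader(job.results(count=0)):
    print(result)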



Neeraj_Luthra
Splunk Employee

How about this ...

kwargs_results = {"count": 0}
search_results = job.results(**kwargs_results)
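
If you then want to stream those rows the way the SDK example does, something like this should work (assuming job is the finished job object from that example):

import splunklib.results as results

# count=0 asks for all rows instead of the default limit.
kwargs_results = {"count": 0}
for result2 in results.ResultsReader(job.results(**kwargs_results)):
    print(result2)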


sd248011
New Member

I was able to use the count argument in my original code at the top and pull all of the results. Now I am using the code in the link you gave me with the Python SDK and can't figure out how to add in the count=0 argument. From your link:

for result2 in results.ResultsReader(job.results()):
    print result2

That is the code I am using to pull the results now.


Neeraj_Luthra
Splunk Employee

Try this:

body=urllib.urlencode({'search': searchQuery1, 'count': 0})
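
In context, that is your original results request with count=0 supplied through the urlencoded body instead of the query string (keeping the variable names from your question):

# Original results request from the question, with 'count': 0 added to the body
# so the default row limit is removed and all results come back.
response1 = httplib2.Http(disable_ssl_certificate_validation=True).request(
    baseurl + '/services/search/jobs/' + slicesid1 + '/results', 'GET',
    headers={'Authorization': 'Splunk %s' % sessionKey},
    body=urllib.urlencode({'search': searchQuery1, 'count': 0}))[1]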


sd248011
New Member

Thanks much, that is exactly what I needed. The only problem I have now is that it is not returning all of my results. Can you let me know how I would add the count=0 argument on the results line, which is:

for result2 in results.ResultsReader(job.results()):
    print result2
