I have a particular use that requires very long subsearches, running potentially for 15 minutes. Of course, my subsearch normally auto-finalizes after 60 seconds. I modified my limits.conf as such:
[subsearch]
# maximum number of results to return from a subsearch
maxout = 100000
# maximum number of seconds to run a subsearch before finalizing
maxtime = 1500
# time to cache a given subsearch's results
ttl = 75000
But now, whenever I run my search, it times out with the following error:
Timed out waiting for status to become available on job=1305920559.2
When I remove those lines from $SPLUNK_HOME/etc/system/local/limits.conf, it goes back to auto-finalizing.
Any ideas? I'm running 4.2.0.
I second @araitz, we really need to see your subsearch to comment. Actually, we would also need to know how long it takes for that subsearch to complete if it is run as a stand-alone search.
I'm having the same problem with my subsearch. Here is my search. It's fairly simple stuff. Sorry if my search censorship is too confusing.
There's a funny quirk in the search API around 204s. When you ask for a resource, you're supposed to get back a 200 and the data, but sometimes splunkd isn't quite ready yet, so what it'll give back is a 204. The client then has to re-request periodically until the actual content becomes available.
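A minimal sketch of that retry loop might look like the following. The function and parameter names here are illustrative only, not the real SDK's API; the `fetch` callable stands in for whatever makes the HTTP request to splunkd.

```python
import time


def poll_until_ready(fetch, max_wait=60.0, interval=0.5):
    """Re-request a resource until splunkd stops answering 204 No Content.

    `fetch` is any callable returning a (status, body) tuple.
    Hypothetical sketch -- not the actual Splunk Python SDK code.
    """
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        status, body = fetch()
        if status == 200:
            # Content is finally available.
            return body
        if status != 204:
            raise RuntimeError("unexpected HTTP status %d" % status)
        # 204: splunkd is not ready yet; back off and re-request.
        time.sleep(interval)
    raise TimeoutError("timed out waiting for job status to become available")
```

The point is that a client with a fixed `max_wait` will give up no matter what limits.conf says on the server side, which matches the symptom described below.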
This stuff is built into the Python SDK (aka "/opt/splunk/lib/python2.6/site-packages/splunk/search/__init__.py"), so I don't know how commonly known it is.
At any rate, it seems like the presence of the long subsearch is triggering a really, really long sequence of polling and 204 responses, and the SDK is giving up. Since this is the Python SDK, it probably knows nothing about limits.conf, and I suspect it wasn't designed to accommodate this use case.
So what's with the crazy long subsearches? Is there really no other way? Is this posted as a separate question elsewhere? You know we love a challenge. 😃
This looks like a Splunk bug to me; there has to be a fix for this, a subsearch timing out! We can do so much with subsearches, we can't just let Splunk ignore this issue. There were so many times I would've liked to use subsearches in Splunk to create summary indexes, but they keep timing out!
Hmmm. Interesting. This seems to be timing out after just a minute and a half or so. I'm sure longer subsearches than that have been run before! Would splunkd be returning status before the search completes, but it's just hanging on that process?
As for the search, there are many better ways, and most of them we do regularly. What I'm trying to do is provide the end users an ability to run exactly the search they normally do with summary indexes off the raw data, if they need to. (i.e., "It'll take 15 minutes, but you know it will be authoritatively accurate.")