Splunk Search

AppendCols subsearch auto-finalize ignoring maxtime in limits.conf

bnolen
Path Finder

In my system/local/limits.conf I have the following settings:

[subsearch]
maxout = 100000
maxtime = 1000
timeout = 3000
ttl = 3000

[join]
subsearch_maxout = 50000
subsearch_maxtime = 600
subsearch_timeout = 120

In spite of this, I still get the following message when using an appendcols subsearch:

[subsearch]: Search auto-finalized after time limit reached (30 seconds). Results may be incomplete.

I cannot find anywhere in the config files where this 30-second limit is specified or can be overridden. I had thought it would inherit the settings from the [subsearch] stanza in limits.conf.

The search I am using, in case it is the cause, is:

squid_action=BLOCK | stats count BY _time AS Sites_Blocked | appendcols [search squid_action=AUTHOVERRIDE | stats count BY _time AS Sites_Overridden]

gkanapathy
Splunk Employee

Your search would work a lot better and faster, and without any subsearch problems, as:

squid_action=BLOCK OR squid_action=AUTHOVERRIDE 
| stats 
    count(eval(squid_action=BLOCK)) as Sites_Blocked
    count(eval(squid_action=AUTHOVERRIDE)) as Sites_Overridden
  by _time

Most of the time, a query using stats (where possible) will work much better than a similar query using join, and using OR (where possible) will work better than using append.


deeboh
Path Finder

I'd like to piggyback off of this post; I apologize if this is bad netiquette. Can someone suggest how to use stats rather than append for this search? I stole the structure from the deployment app.

index=_internal source=*license_usage.log pool="auto_generated_pool_enterprise" earliest=@d| eval GB=b/1024/1024/1024 | eventstats sum(GB) by pool | timechart partial=f span=30m per_hour(GB) as GBph | eval marker = "today" | eval _time = _time+1800 | append maxtime=600 maxout=1000000000 timeout=600 [ search index=_internal source=*license_usage.log pool="auto_generated_pool_enterprise" earliest=-1d@d-30m latest=@d-30m| eval GB=b/1024/1024/1024 | eventstats sum(GB) by pool | timechart partial=f span=30m per_hour(GB) as GBph | eval marker = "Yesterday" | eval _time = _time+86400*1+1800 ] | timechart median(GBph) by marker
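A single-search sketch of the above, untested: instead of appending a second search for yesterday, search both days at once, tag each event by day with eval, and shift yesterday's timestamps forward so the two series overlay. This also drops the eventstats, whose result does not appear to be used downstream, and lets the split-by replace the final median step:

```spl
index=_internal source=*license_usage.log pool="auto_generated_pool_enterprise" earliest=-1d@d-30m
| eval GB=b/1024/1024/1024
| eval marker=if(_time<relative_time(now(), "@d"), "Yesterday", "today")
| eval _time=if(marker="Yesterday", _time+86400, _time)+1800
| timechart partial=f span=30m per_hour(GB) by marker
```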

Sorry I had to use an answer post because the comment text box is too limiting.

Thanks in advance,

Curtis


hjwang
Contributor

I have the same problem, and my limits.conf is as follows:

system     [subsearch]
system     maxout = 10000
system     maxtime = 600
system     ttl = 600

but Splunk still displays "[subsearch]: Search auto-finalized after time limit reached (30 seconds). Results may be incomplete." I really wonder where this 30 seconds comes from, because that value is not in the [subsearch] stanza.


hjwang
Contributor

I think I found a temporary workaround: appendcols has its own subsearch time limit, so passing maxtime=600 to appendcols itself makes the message go away. Hope this helps anyone who hits the same problem.
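Applied to the search from the original question (with the stats clauses rewritten as `count as <field> by _time`, since the rename must come before the by clause), that workaround would look like:

```spl
squid_action=BLOCK
| stats count as Sites_Blocked by _time
| appendcols maxtime=600
    [ search squid_action=AUTHOVERRIDE | stats count as Sites_Overridden by _time ]
```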



gkanapathy
Splunk Employee

The [subsearch] maxtime setting should take effect and allow your subsearch to run longer. Possibly there is a problem with the global system config; running ./splunk cmd btool --debug limits list subsearch might help indicate where the setting is actually coming from.


gkanapathy
Splunk Employee

./splunk cmd btool --debug limits list subsearch


bnolen
Path Finder
bin]# ./splunk cmd btool --debug list subsearch
Invalid command: subsearch
Usage:
 btool [options] CONF_FILE {list|layer|check|add|delete} [stanza [key]]
Options:
 --debug
 --debug-print=[user|app|stanza]
 --user=SPLUNK_USER
 --app=SPLUNK_APP
 --dir=DIR
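The error above occurs because the conf file name was left out of the command; per the usage message, btool expects CONF_FILE before the action. The command as suggested in the earlier comment includes it:

```shell
./splunk cmd btool --debug limits list subsearch
```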