Splunk Search

AppendCols subsearch auto-finalize ignoring maxtime in limits.conf

bnolen
Path Finder

In my system/local/limits.conf I have the following settings:

[subsearch]
maxout = 100000
maxtime = 1000
timeout = 3000
ttl = 3000

[join]
subsearch_maxout = 50000
subsearch_maxtime = 600
subsearch_timeout = 120

In spite of this, I still get the following message when using an appendcols subsearch:

[subsearch]: Search auto-finalized after time limit reached (30 seconds). Results may be incomplete.

I cannot find anywhere in the config files where this 30-second limit is specified or can be overridden. I had thought that it would inherit the settings from the [subsearch] stanza in limits.conf.

The search I am using, in case it is the cause, is:

squid_action=BLOCK | stats count AS Sites_Blocked BY _time | appendcols [search squid_action=AUTHOVERRIDE | stats count AS Sites_Overridden BY _time]
1 Solution

gkanapathy
Splunk Employee

Your search would work a lot better and faster, and without any subsearch problems, as:

squid_action=BLOCK OR squid_action=AUTHOVERRIDE 
| stats 
    count(eval(squid_action="BLOCK")) as Sites_Blocked
    count(eval(squid_action="AUTHOVERRIDE")) as Sites_Overridden
  by _time

Most of the time, a query using stats (where possible) will work much better than a similar query using join, and using OR (where possible) will work better than using append.
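
For reference, the same OR pattern can also be written with chart; this is just a sketch of an equivalent form, with the rename only there to keep your original column names:

squid_action=BLOCK OR squid_action=AUTHOVERRIDE
| chart count over _time by squid_action
| rename BLOCK as Sites_Blocked, AUTHOVERRIDE as Sites_Overridden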


deeboh
Path Finder

I'd like to piggyback off of this post. I apologize if this is bad netiquette. Can someone suggest how to use stats rather than append for this search? I stole the structure from the deployment app.

index=_internal source=*license_usage.log pool="auto_generated_pool_enterprise" earliest=@d
| eval GB=b/1024/1024/1024
| eventstats sum(GB) by pool
| timechart partial=f span=30m per_hour(GB) as GBph
| eval marker = "today"
| eval _time = _time+1800
| append maxtime=600 maxout=1000000000 timeout=600
    [ search index=_internal source=*license_usage.log pool="auto_generated_pool_enterprise" earliest=-1d@d-30m latest=@d-30m
    | eval GB=b/1024/1024/1024
    | eventstats sum(GB) by pool
    | timechart partial=f span=30m per_hour(GB) as GBph
    | eval marker = "Yesterday"
    | eval _time = _time+86400*1+1800 ]
| timechart median(GBph) by marker
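
My rough guess at a single-search version, following the single-search pattern above, would be something like this (the if()-based marker logic and the dropped 30-minute offsets are just assumptions on my part), but I'm not sure it's right:

index=_internal source=*license_usage.log pool="auto_generated_pool_enterprise" earliest=-1d@d
| eval GB=b/1024/1024/1024
| eval marker = if(_time >= relative_time(now(), "@d"), "today", "Yesterday")
| eval _time = if(marker == "Yesterday", _time + 86400, _time)
| timechart partial=f span=30m per_hour(GB) by marker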

Sorry I had to use an answer post because the comment text box is too limiting.

Thanks in advance,

Curtis


hjwang
Contributor

I have the same problem, and my limits.conf is as follows:

system     [subsearch]
system     maxout = 10000
system     maxtime = 600
system     ttl = 600

but Splunk still displays "[subsearch]: Search auto-finalized after time limit reached (30 seconds). Results may be incomplete." I really wonder where this 30 seconds comes from, because this value is not in the [subsearch] stanza.


hjwang
Contributor

I think I have a temporary workaround. appendcols itself has its own subsearch time limit, so using appendcols maxtime=600 makes the message go away. Hope this helps those who run into the same problem.
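
For example, applied to the original search it would look something like this (just a sketch):

squid_action=BLOCK
| stats count AS Sites_Blocked BY _time
| appendcols maxtime=600 [search squid_action=AUTHOVERRIDE | stats count AS Sites_Overridden BY _time]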



gkanapathy
Splunk Employee

The [subsearch] maxtime setting should take effect and allow your subsearch to run longer. Possibly there is a problem with the global system config; running ./splunk cmd btool --debug limits list subsearch might help indicate what is going on.


gkanapathy
Splunk Employee

./splunk cmd btool --debug limits list subsearch
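
The output should list each setting in the [subsearch] stanza along with the layer or file it comes from, roughly like this (using the values from your original post):

system     [subsearch]
system     maxout = 100000
system     maxtime = 1000
system     timeout = 3000
system     ttl = 3000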


bnolen
Path Finder
bin]# ./splunk cmd btool --debug list subsearch
Invalid command: subsearch
Usage:
 btool [options] CONF_FILE {list|layer|check|add|delete} [stanza [key]]
Options:
 --debug
 --debug-print=[user|app|stanza]
 --user=SPLUNK_USER
 --app=SPLUNK_APP
 --dir=etc DIR
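
I think I dropped the conf file name from the command; rerunning it as ./splunk cmd btool --debug limits list subsearch (as given above) should get past the usage error.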