
limits.conf default_save_ttl = 604800 (7 days), but in /opt/splunk/var/run/splunk/dispatch there are folders older than 7 days

All,
Two Splunk admin questions:
1) We have default_save_ttl = 604800 (7 days), but in /opt/splunk/var/run/splunk/dispatch there are folders older than 7 days. According to https://docs.splunk.com/Documentation/Splunk/8.0.5/Search/ManagejobsfromtheOS, the ttl is the length of time that a job's artifacts (the output it produces) will remain on disk and available (ttl=).
 
Should we clean up the old search artifacts manually via cron, and is that safe?
find  /opt/splunk/var/run/splunk/dispatch/  -maxdepth 1 -type d -mtime +8 -ls|wc -l
37
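 
If cleanup is needed, the clean-dispatch utility is generally a safer route than removing directories from cron, since it moves old artifacts out of dispatch for splunkd rather than deleting them in place. A minimal sketch, assuming an 8-day cutoff and a destination directory of our choosing (both are example values, not recommendations; check the linked docs page for the exact syntax in your version):

# Hypothetical example: move dispatch artifacts older than 8 days into a staging directory
/opt/splunk/bin/splunk cmd splunkd clean-dispatch /opt/splunk/var/run/splunk/old-dispatch-jobs/ -8d@d

Run it as the user that owns the Splunk installation, and confirm the destination directory exists and has enough space before scheduling it from cron.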
 
Here is the ttl-related output from limits.conf:
/opt/splunk/bin/splunk cmd btool limits list --debug|grep ttl
/opt/splunk/etc/system/default/limits.conf indexed_csv_ttl = 300
/opt/splunk/etc/system/default/limits.conf search_ttl = 2p
/opt/splunk/etc/system/default/limits.conf concurrency_message_throttle_time = 10m
/opt/splunk/etc/system/default/limits.conf max_lock_file_ttl = 86400
/opt/splunk/etc/system/default/limits.conf cache_ttl = 300
/opt/splunk/etc/system/default/limits.conf default_save_ttl = 604800
/opt/splunk/etc/system/default/limits.conf failed_job_ttl = 86400
/opt/splunk/etc/system/default/limits.conf remote_ttl = 600
/opt/splunk/etc/system/default/limits.conf replication_file_ttl = 600
/opt/splunk/etc/system/default/limits.conf srtemp_dir_ttl = 86400
/opt/splunk/etc/system/default/limits.conf ttl = 600
/opt/splunk/etc/system/default/limits.conf ttl = 300
/opt/splunk/etc/system/default/limits.conf cache_ttl_sec = 300
/opt/splunk/etc/system/default/limits.conf ttl = 86400
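 
For reference, ttl overrides belong in a local configuration file rather than in system/default. A minimal sketch of what such an override could look like, assuming the goal is to keep saved job artifacts for 7 days and ordinary job artifacts for 10 minutes (these match the current defaults and are shown only to illustrate where the settings live):

# /opt/splunk/etc/system/local/limits.conf (hypothetical example)
[search]
# seconds that artifacts of a job a user has explicitly saved are kept
default_save_ttl = 604800
# seconds that ordinary job artifacts are kept after the search finishes
ttl = 600

Changes in system/local survive upgrades, whereas edits to system/default do not.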
 
2) Another question, regarding this output from ps aux | grep 1598277129:
root 29785 106 0.3 764344 282356 ? Sl 13:51 (UTC) 114:48 [splunkd pid=1771] search --id=1598277129.14351_B930C604-9D78-4B47-8E19-429E50F02A65 --maxbuckets=300 --ttl=600 --maxout=500000 --maxtime=0 --lookups=1 --reduce_freq=10 --rf=* --user=redacted --pro --roles=redacted
The above Splunk search process started/completed and has a ttl of 600 (10 minutes),
and in search.log we can see CANCEL and status=3. Why is the search still running if CANCEL was issued? We see quite a few of these cases, and CPU load is usually high on 8.0.2. We did not have this condition in Splunk 7.2.x. Any input?
 
08-24-2020 15:47:12.345 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
08-24-2020 15:47:12.345 INFO DispatchExecutor - User applied action=CANCEL while status=3
...
08-24-2020 15:47:14.344 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
08-24-2020 15:47:14.345 INFO DispatchExecutor - User applied action=CANCEL while status=3
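 
When search.log shows CANCEL but the process keeps running, it can help to compare what splunkd's dispatch layer reports for the job with what the OS shows. A hedged sketch of checking and force-cancelling the job above over the management port (credentials are placeholders; the sid is taken from the ps output):

# Inspect the job's state (dispatchState, isFinalized, etc.)
curl -k -u admin:changeme https://localhost:8089/services/search/jobs/1598277129.14351_B930C604-9D78-4B47-8E19-429E50F02A65

# Ask splunkd to cancel the job explicitly
curl -k -u admin:changeme -d action=cancel https://localhost:8089/services/search/jobs/1598277129.14351_B930C604-9D78-4B47-8E19-429E50F02A65/control

If the REST cancel succeeds but the PID from ps stays alive afterwards, that would suggest the search process itself is not honoring the cancel, which matches what the log excerpt above shows.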

Thanks.