Splunk Search

Why did "$SPLUNK_HOME/var/run" in my environment temporarily grow to a huge size?

yutaka1005
Builder

"$SPLUNK_HOME/var/run" in my Splunk environment gradually grew from 15:00 on 2017/6/5 until 09:00 on 2017/6/6, temporarily expanding from 22 MB to 42 GB. It then returned to its original size at 09:10 on 2017/6/6.
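To see which job directories under dispatch are responsible for growth like this, you can sum per-directory file sizes, much like `du -s`. A minimal Python sketch (the dispatch path assumes the default Splunk layout; adjust `SPLUNK_HOME` for your environment):

```python
import os

def dir_size(path):
    """Recursively sum file sizes under `path`, similar to `du -s`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # a file may be reaped by Splunk mid-walk
    return total

# Default dispatch location; requires SPLUNK_HOME to be set in the environment.
dispatch = os.path.expandvars("$SPLUNK_HOME/var/run/splunk/dispatch")
if os.path.isdir(dispatch):
    # Print the size of each search artifact (job) directory, largest concern first.
    for job in sorted(os.listdir(dispatch)):
        job_dir = os.path.join(dispatch, job)
        if os.path.isdir(job_dir):
            print(f"{dir_size(job_dir):>14}  {job}")
```

Running this periodically while the disk usage climbs would show whether one huge search artifact or many small ones are accumulating.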

My Splunk environment is running 6.4.1; is this due to the following known issue?

2014-10-28 SPL-91778 Dispatch disk usage incorrectly includes temporary CSV result files for large event searches, which can lead to job queuing.

Does anyone know anything about this?

0 Karma
1 Solution

woodcock
Esteemed Legend

Generally speaking, this is normal. The result of EVERY search hangs around for a certain Time To Live (TTL), and the dumping ground is the dispatch directory, which lives in $SPLUNK_HOME/var/run. If you run many searches, or several searches with HUGE numbers of results, you will take up large amounts of disk space in dispatch; then, when the TTL expires, the search artifacts are suddenly reaped and "magically" all of your disk space comes back.
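For reference, the artifacts land under $SPLUNK_HOME/var/run/splunk/dispatch, and the default TTL is controlled in limits.conf. A sketch of where to look (the value shown is the stock default, not a recommendation; per-saved-search TTLs can also be set via dispatch.ttl in savedsearches.conf):

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf -- illustrative sketch
[search]
# Seconds a completed search's artifact lives in
# $SPLUNK_HOME/var/run/splunk/dispatch before being reaped (default 600).
ttl = 600
```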

You can recreate this (don't try this at home) by doing an All time search for index=* OR index=_*. This will attempt to take all of the data from your indexers and store it on your search head. You will probably run out of RAM on your search head before you run out of disk but the file that results will be gargantuan. This is why we generally remove the All time option from times.conf.
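On that last point, the "All time" preset comes from a stanza in times.conf; a sketch of suppressing it, assuming the default stanza name `all_time` and that your version honors `disabled` here:

```ini
# $SPLUNK_HOME/etc/apps/<your_app>/local/times.conf -- illustrative sketch
[all_time]
disabled = 1
```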


0 Karma

yutaka1005
Builder

Thank you woodcock,

I understood it is normal operation.
And I'm thinking about avoiding this issue by forbidding All time searches, or by restricting the indexes that each user can search.
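Both restrictions are role settings in authorize.conf. A sketch, where the role name and index names are hypothetical placeholders:

```ini
# $SPLUNK_HOME/etc/system/local/authorize.conf -- illustrative sketch
[role_limited_user]
# Semicolon-separated list of indexes this role may search.
srchIndexesAllowed = main;app_logs
# Maximum time span of a search, in seconds (here one day),
# which effectively forbids All time searches for this role.
srchTimeWin = 86400
```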

One last question: can a similar phenomenon occur if I run real-time searches?

0 Karma

woodcock
Esteemed Legend

I also forbid real-time searches, for other reasons, but I am pretty sure those are mostly a RAM/CPU problem, not a disk problem (at least not on the search head).
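Forbidding real-time search is also done per role, via the `rtsearch` capability in authorize.conf. A sketch, assuming a version where an inherited capability can be set to `disabled` (on older versions you would instead define a role that simply never grants `rtsearch`):

```ini
# $SPLUNK_HOME/etc/system/local/authorize.conf -- illustrative sketch
[role_user]
# Remove the real-time search capability from this role.
rtsearch = disabled
```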

0 Karma

yutaka1005
Builder

Thanks a lot!

I got it.

0 Karma