<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Data in /opt/splunk/var/spool/splunk filling up disk in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Data-in-opt-splunk-var-spool-splunk-filling-up-disk/m-p/44894#M179237</link>
    <description>&lt;P&gt;I'm seeing a number of very large files building up in /opt/splunk/var/spool/splunk:  &lt;/P&gt;

&lt;P&gt;drwx------ 2 root root      4096 Feb 27 02:08 .&lt;BR /&gt;&lt;BR /&gt;
drwx--x--x 4 root root      4096 Feb  7 23:12 ..&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 360903734 Feb 27 01:28 1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 372663350 Feb 27 01:53 1504785327_1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 375269359 Feb 27 02:03 157257541_1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 373008730 Feb 27 01:43 1750025097_1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 359388989 Feb 27 02:08 1874146970_1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 355854760 Feb 27 01:38 314379920_1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 375817381 Feb 27 01:33 314379920_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 372663350 Feb 27 01:48 357150606_1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 353926431 Feb 27 01:58 378307516_1400673619_events.stash_new&lt;/P&gt;

&lt;P&gt;Is there any way I can configure Splunk so it removes them automatically or times them out?  I saw an error message in the GUI that says Splunk reached the minimum disk limit for that directory.  Is that value configurable?  What is the impact on Splunk when that threshold is hit?&lt;/P&gt;

&lt;P&gt;Thx.&lt;/P&gt;

&lt;P&gt;Craig&lt;/P&gt;</description>
    <pubDate>Wed, 27 Feb 2013 02:33:09 GMT</pubDate>
    <dc:creator>responsys_cm</dc:creator>
    <dc:date>2013-02-27T02:33:09Z</dc:date>
    <item>
      <title>Data in /opt/splunk/var/spool/splunk filling up disk</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Data-in-opt-splunk-var-spool-splunk-filling-up-disk/m-p/44894#M179237</link>
      <description>&lt;P&gt;I'm seeing a number of very large files building up in /opt/splunk/var/spool/splunk:  &lt;/P&gt;

&lt;P&gt;drwx------ 2 root root      4096 Feb 27 02:08 .&lt;BR /&gt;&lt;BR /&gt;
drwx--x--x 4 root root      4096 Feb  7 23:12 ..&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 360903734 Feb 27 01:28 1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 372663350 Feb 27 01:53 1504785327_1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 375269359 Feb 27 02:03 157257541_1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 373008730 Feb 27 01:43 1750025097_1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 359388989 Feb 27 02:08 1874146970_1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 355854760 Feb 27 01:38 314379920_1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 375817381 Feb 27 01:33 314379920_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 372663350 Feb 27 01:48 357150606_1400673619_events.stash_new&lt;BR /&gt;&lt;BR /&gt;
-rw------- 1 root root 353926431 Feb 27 01:58 378307516_1400673619_events.stash_new&lt;/P&gt;

&lt;P&gt;Is there any way I can configure Splunk so it removes them automatically or times them out?  I saw an error message in the GUI that says Splunk reached the minimum disk limit for that directory.  Is that value configurable?  What is the impact on Splunk when that threshold is hit?&lt;/P&gt;

&lt;P&gt;Thx.&lt;/P&gt;

&lt;P&gt;Craig&lt;/P&gt;</description>
      <pubDate>Wed, 27 Feb 2013 02:33:09 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Data-in-opt-splunk-var-spool-splunk-filling-up-disk/m-p/44894#M179237</guid>
      <dc:creator>responsys_cm</dc:creator>
      <dc:date>2013-02-27T02:33:09Z</dc:date>
    </item>
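    <!-- Editor's note on "Is that value configurable?": the minimum free disk
         space check that produced the GUI warning is governed by minFreeSpace
         in server.conf. A minimal sketch, assuming a standalone Splunk
         Enterprise instance; the default value varies by version, so treat the
         number below as illustrative:

```ini
# $SPLUNK_HOME/etc/system/local/server.conf
[diskUsage]
# When free space on a partition Splunk uses (including the one holding
# var/spool/splunk) drops below this many MB, Splunk pauses indexing and
# blocks searches until space is freed.
minFreeSpace = 5000
```

    Raising this threshold only changes when the warning fires; it does not
    drain the .stash_new backlog, which clears only once the stashparsing
    pipeline catches up. -->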
    <item>
      <title>Re: Data in /opt/splunk/var/spool/splunk filling up disk</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Data-in-opt-splunk-var-spool-splunk-filling-up-disk/m-p/44895#M179238</link>
      <description>&lt;P&gt;Those are summary indexing results.&lt;BR /&gt;
The server is probably not picking up those files because it considers them binary (check splunkd.log).&lt;/P&gt;

&lt;P&gt;see this answer &lt;A href="http://splunk-base.splunk.com/answers/70072/summary-indexing-blocked-and-binary-file-warning"&gt;http://splunk-base.splunk.com/answers/70072/summary-indexing-blocked-and-binary-file-warning&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 27 Feb 2013 02:52:36 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Data-in-opt-splunk-var-spool-splunk-filling-up-disk/m-p/44895#M179238</guid>
      <dc:creator>yannK</dc:creator>
      <dc:date>2013-02-27T02:52:36Z</dc:date>
    </item>
    <item>
      <title>Re: Data in /opt/splunk/var/spool/splunk filling up disk</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Data-in-opt-splunk-var-spool-splunk-filling-up-disk/m-p/44896#M179239</link>
      <description>&lt;P&gt;I'm not seeing anything about the binary warning in the logs.  I am seeing:&lt;/P&gt;

&lt;P&gt;BatchReader - Could not send data to output queue (stashparsing), retrying...&lt;/P&gt;</description>
      <pubDate>Sat, 27 Jul 2013 20:29:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Data-in-opt-splunk-var-spool-splunk-filling-up-disk/m-p/44896#M179239</guid>
      <dc:creator>responsys_cm</dc:creator>
      <dc:date>2013-07-27T20:29:33Z</dc:date>
    </item>
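    <!-- Editor's note: the "Could not send data to output queue (stashparsing),
         retrying..." message usually means a queue further down the pipeline is
         blocked. A sketch of a search to see which queues are backing up, run
         against the affected instance's internal index (field names are as
         emitted in metrics.log):

```
index=_internal source=*metrics.log* group=queue blocked=true
| stats count by name
```

    Any queue name that appears here repeatedly is saturated; the stashparsing
    retries are typically a symptom of congestion downstream of that queue. -->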
    <item>
      <title>Re: Data in /opt/splunk/var/spool/splunk filling up disk</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Data-in-opt-splunk-var-spool-splunk-filling-up-disk/m-p/44897#M179240</link>
      <description>&lt;P&gt;I'm also seeing:  Metrics - group=queue, name=stashparsing, max_size_kb=500, current_size_kb=449, current_size=7, largest_size=9, smallest_size=3&lt;/P&gt;

&lt;P&gt;Is it possible to increase the size of the stashparsing queue?&lt;/P&gt;</description>
      <pubDate>Mon, 28 Sep 2020 14:26:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Data-in-opt-splunk-var-spool-splunk-filling-up-disk/m-p/44897#M179240</guid>
      <dc:creator>responsys_cm</dc:creator>
      <dc:date>2020-09-28T14:26:47Z</dc:date>
    </item>
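    <!-- Editor's note: queue sizes can be set per queue in server.conf; a
         sketch, assuming the stanza name matches the queue name shown in
         metrics.log (note the reply below advises against this, since a larger
         queue only delays the symptom rather than fixing the congestion):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf
[queue=stashparsing]
maxSize = 10MB
```

    Restart splunkd for the change to take effect. -->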
    <item>
      <title>Re: Data in /opt/splunk/var/spool/splunk filling up disk</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Data-in-opt-splunk-var-spool-splunk-filling-up-disk/m-p/44898#M179241</link>
      <description>&lt;P&gt;Please do not change the queue sizes; that will not address the root cause.&lt;/P&gt;

&lt;P&gt;Your issue is likely that the server (a search head, I would bet) is unable to write to its local indexes, or is unable to forward to the indexers.&lt;BR /&gt;
Check the indexing queue (the last one before forwarding / disk writing) on your search head, then on the indexers, if any, for signs of congestion.&lt;/P&gt;</description>
      <pubDate>Sat, 27 Jul 2013 21:49:30 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Data-in-opt-splunk-var-spool-splunk-filling-up-disk/m-p/44898#M179241</guid>
      <dc:creator>yannK</dc:creator>
      <dc:date>2013-07-27T21:49:30Z</dc:date>
    </item>
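    <!-- Editor's note: to check the indexing queue for the congestion described
         above, a sketch of a search over the internal metrics (adjust the time
         range and add a host filter for your environment):

```
index=_internal source=*metrics.log* group=queue name=indexqueue
| timechart span=5m max(current_size_kb) avg(current_size_kb)
```

    If current_size_kb sits near max_size_kb, the indexing queue is full; that
    back-pressure propagates upstream through stashparsing and leaves the
    .stash_new files stranded in the spool directory. -->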
  </channel>
</rss>

