<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Data loss in splunlk in Splunk Dev</title>
    <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447066#M8127</link>
    <description>&lt;P&gt;Do you have enough disk space to accommodate 35 days worth of data?  Do you have volume settings that allow you to consume that disk space for Splunk's use?  Check out your Monitoring Console to be sure.&lt;/P&gt;</description>
    <pubDate>Sat, 15 Sep 2018 01:51:36 GMT</pubDate>
    <dc:creator>woodcock</dc:creator>
    <dc:date>2018-09-15T01:51:36Z</dc:date>
    <item>
      <title>Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447049#M8110</link>
      <description>&lt;P&gt;We have indexers running in a clustered environment, with a 35-day retention policy for all app logs. We have now started missing data: we see only 10 days of data, and the loss is continuing. Could you please suggest how to investigate this issue?&lt;/P&gt;</description>
      <pubDate>Sat, 08 Sep 2018 05:38:56 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447049#M8110</guid>
      <dc:creator>shivanandbm</dc:creator>
      <dc:date>2018-09-08T05:38:56Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447050#M8111</link>
      <description>&lt;P&gt;I'll start by checking the size of your indexes (and even your indexers' disks). Splunk applies the retention policy or the size policy, whichever limit is hit first. So if, say, you have 100 GB of disk available on your indexers and you are indexing 10 GB per day, you will only get about 10 days of retention (simplified here, not accounting for compression). &lt;BR /&gt;
So even if you set your index time retention to 300 days, the disk cannot hold enough data to keep it.&lt;/P&gt;</description>
      <pubDate>Sat, 08 Sep 2018 11:01:54 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447050#M8111</guid>
      <dc:creator>adonio</dc:creator>
      <dc:date>2018-09-08T11:01:54Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447051#M8112</link>
      <description>&lt;P&gt;Thank you for the reply. Each index is assigned 500 GB and we have 43 indexers in total. The retention policy is 35 days. That said, we have a 630 GB limit on the total size of data model acceleration (DMA). Below are the volume settings in indexes.conf.&lt;/P&gt;</description>

&lt;H1&gt;VOLUME SETTINGS&lt;/H1&gt;

&lt;P&gt;[volume:hot]&lt;BR /&gt;
path = /SplunkIndexes/HotWarmIndex&lt;BR /&gt;
maxVolumeDataSizeMB = 130000&lt;BR /&gt;
[volume:cold]&lt;BR /&gt;
path = /SplunkIndexes/ColdIndex&lt;BR /&gt;
maxVolumeDataSizeMB = 500000&lt;/P&gt;

&lt;P&gt;Below is the total disk space consumed on the indexers.&lt;/P&gt;

&lt;P&gt;/dev/mapper/vgsplunkssd-lvsplunkssd&lt;BR /&gt;
                      4.8T  128G  4.5T   3% /SplunkIndexes/HotWarmIndex&lt;BR /&gt;
/dev/mapper/vgsplunksata-lvsplunksata&lt;BR /&gt;
                       14T  489G   13T   4% /SplunkIndexes/ColdIndex&lt;BR /&gt;
Can you please advise whether the volumes we have assigned are too small and that is why we are seeing data loss? Also, please suggest how we can avoid the data loss; we are required to keep 35 days of data. Does increasing the volumes fix the issue? Please help us with this case.&lt;/P&gt;</description>
      <pubDate>Sat, 08 Sep 2018 17:49:02 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447051#M8112</guid>
      <dc:creator>shivanandbm</dc:creator>
      <dc:date>2018-09-08T17:49:02Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447052#M8113</link>
      <description>&lt;P&gt;Yes,&lt;BR /&gt;
you really are defining tiny volumes across your 43 indexers:&lt;BR /&gt;
~130 GB for the hot/warm volume&lt;BR /&gt;
~500 GB for the cold volume&lt;BR /&gt;
That explains why you barely see any data in your &lt;CODE&gt;df&lt;/CODE&gt; output.&lt;BR /&gt;
Read here all the way through and modify your indexes.conf accordingly:&lt;BR /&gt;
&lt;A href="https://docs.splunk.com/Documentation/Splunk/7.1.2/Indexer/Configureindexstoragesize"&gt;https://docs.splunk.com/Documentation/Splunk/7.1.2/Indexer/Configureindexstoragesize&lt;/A&gt;&lt;BR /&gt;
Also try running this search to see what Splunk reports as the reason for rolling, and verify your configuration down the road:&lt;BR /&gt;
&lt;CODE&gt;index=_internal sourcetype=splunkd component=BucketMover&lt;/CODE&gt;&lt;/P&gt;

&lt;P&gt;Hope it helps.&lt;/P&gt;</description>
      <pubDate>Sat, 08 Sep 2018 18:14:18 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447052#M8113</guid>
      <dc:creator>adonio</dc:creator>
      <dc:date>2018-09-08T18:14:18Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447053#M8114</link>
      <description>&lt;P&gt;I don't see you specifying how much app data you ingest on a daily basis.&lt;/P&gt;</description>
      <pubDate>Sat, 08 Sep 2018 19:59:34 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447053#M8114</guid>
      <dc:creator>Rob2520</dc:creator>
      <dc:date>2018-09-08T19:59:34Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447054#M8115</link>
      <description>&lt;P&gt;Try:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal sourcetype=splunkd source=*splunkd.log "BucketMover - will attempt to freeze" NOT "because frozenTimePeriodInSecs=" 
| rex field=bkt "(rb_|db_)(?P&amp;lt;newestDataInBucket&amp;gt;\d+)_(?P&amp;lt;oldestDataInBucket&amp;gt;\d+)"
| eval newestDataInBucket=strftime(newestDataInBucket, "%+"), oldestDataInBucket = strftime(oldestDataInBucket, "%+") 
| table message, oldestDataInBucket, newestDataInBucket
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;That is the "IndexerLevel - Buckets are being frozen due to index sizing" search from &lt;A href="https://github.com/gjanders/SplunkAdmins/blob/master/default/savedsearches.conf"&gt;GitHub&lt;/A&gt; or the &lt;A href="https://splunkbase.splunk.com/app/3796/"&gt;Alerts for Splunk Admins app&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Sun, 09 Sep 2018 01:13:02 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447054#M8115</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2018-09-09T01:13:02Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447055#M8116</link>
      <description>&lt;P&gt;I have a report in which I see about 2437 GB worth of data ingested every week.&lt;/P&gt;</description>
      <pubDate>Sun, 09 Sep 2018 05:14:57 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447055#M8116</guid>
      <dc:creator>shivanandbm</dc:creator>
      <dc:date>2018-09-09T05:14:57Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447056#M8117</link>
      <description>&lt;P&gt;Thanks once again. When I ran the query below, I got the following output.&lt;BR /&gt;
index=_internal sourcetype=splunkd component=BucketMover | timechart span=1d count by component&lt;/P&gt;

&lt;P&gt;Can you please tell me what it indicates?&lt;/P&gt;

&lt;P&gt;_time             BucketMover&lt;BR /&gt;&lt;BR /&gt;
2018-08-30  257&lt;BR /&gt;
2018-08-31  2039&lt;BR /&gt;
2018-09-01  1725&lt;BR /&gt;
2018-09-02  1631&lt;BR /&gt;
2018-09-03  1989&lt;BR /&gt;
2018-09-04  1858&lt;BR /&gt;
2018-09-05  1968&lt;BR /&gt;
2018-09-06  1850&lt;BR /&gt;
2018-09-07  1754&lt;BR /&gt;
2018-09-08  1639&lt;BR /&gt;
2018-09-09  226 &lt;/P&gt;</description>
      <pubDate>Sun, 09 Sep 2018 05:30:12 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447056#M8117</guid>
      <dc:creator>shivanandbm</dc:creator>
      <dc:date>2018-09-09T05:30:12Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447057#M8118</link>
      <description>&lt;P&gt;I am not getting any output for this query.&lt;/P&gt;</description>
      <pubDate>Sun, 09 Sep 2018 05:30:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447057#M8118</guid>
      <dc:creator>shivanandbm</dc:creator>
      <dc:date>2018-09-09T05:30:50Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447058#M8119</link>
      <description>&lt;P&gt;Good. That query reports buckets that are frozen because of size limits, so no results is a good thing.&lt;/P&gt;</description>
      <pubDate>Sun, 09 Sep 2018 06:42:31 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447058#M8119</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2018-09-09T06:42:31Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447059#M8120</link>
      <description>&lt;P&gt;index=_internal component=BucketMover idx=YourIndexName&lt;/P&gt;

&lt;P&gt;Look for when the data is being rolled with the search above. See if there are any errors such as "storage full", "out of disk", or "permission denied", etc.&lt;/P&gt;</description>
      <pubDate>Sun, 09 Sep 2018 12:13:00 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447059#M8120</guid>
      <dc:creator>jkat54</dc:creator>
      <dc:date>2018-09-09T12:13:00Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447060#M8121</link>
      <description>&lt;P&gt;I am not seeing any of the mentioned errors. We suspect the data loss is due to the restricted volume on the indexer db; the configuration was posted above. How can we identify whether the oldest data is being overwritten by newer data? Can you please help?&lt;/P&gt;</description>
      <pubDate>Sun, 09 Sep 2018 13:12:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447060#M8121</guid>
      <dc:creator>shivanandbm</dc:creator>
      <dc:date>2018-09-09T13:12:41Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447061#M8122</link>
      <description>&lt;P&gt;Figure out which pipeline is full using the Monitoring Console.&lt;/P&gt;

&lt;P&gt;Look at indexing performance searches, and it should show you the pipelines.&lt;/P&gt;

&lt;P&gt;Could be bad parsing, or it might be time to add indexers.  How many indexers do you have now and what IOPS storage do you have?  To do 2.4TB/day with reference hardware, you'd need about 10 indexers just to handle the input.&lt;/P&gt;</description>
      <pubDate>Sun, 09 Sep 2018 13:23:05 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447061#M8122</guid>
      <dc:creator>jkat54</dc:creator>
      <dc:date>2018-09-09T13:23:05Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447062#M8123</link>
      <description>&lt;P&gt;@shivanandbm &lt;BR /&gt;
Please see my comment above.&lt;BR /&gt;
Fix your indexes.conf according to your needs.&lt;/P&gt;</description>
      <pubDate>Sun, 09 Sep 2018 13:29:21 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447062#M8123</guid>
      <dc:creator>adonio</dc:creator>
      <dc:date>2018-09-09T13:29:21Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447063#M8124</link>
      <description>&lt;P&gt;Thank you. I am searching for logs that tell me my data is getting overwritten. Could you please tell me which logs show this?&lt;/P&gt;</description>
      <pubDate>Sun, 09 Sep 2018 14:38:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447063#M8124</guid>
      <dc:creator>shivanandbm</dc:creator>
      <dc:date>2018-09-09T14:38:44Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447064#M8125</link>
      <description>&lt;P&gt;I am searching for logs that tell me my data is getting overwritten. Could you please tell me which log shows this? I am sure that the latest data is being overwritten by old data.&lt;/P&gt;</description>
      <pubDate>Sun, 09 Sep 2018 14:48:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447064#M8125</guid>
      <dc:creator>shivanandbm</dc:creator>
      <dc:date>2018-09-09T14:48:33Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447065#M8126</link>
      <description>&lt;P&gt;Splunk will crash before it overwrites.&lt;BR /&gt;
 It’s called a bucket collision and it’s very bad.&lt;/P&gt;</description>
      <pubDate>Sun, 09 Sep 2018 14:49:09 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447065#M8126</guid>
      <dc:creator>jkat54</dc:creator>
      <dc:date>2018-09-09T14:49:09Z</dc:date>
    </item>
    <item>
      <title>Re: Data loss in splunlk</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447066#M8127</link>
      <description>&lt;P&gt;Do you have enough disk space to accommodate 35 days worth of data?  Do you have volume settings that allow you to consume that disk space for Splunk's use?  Check out your Monitoring Console to be sure.&lt;/P&gt;</description>
      <pubDate>Sat, 15 Sep 2018 01:51:36 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Data-loss-in-splunlk/m-p/447066#M8127</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2018-09-15T01:51:36Z</dc:date>
    </item>
  </channel>
</rss>

