<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Proactively monitor for bucket corruption in Deployment Architecture</title>
    <link>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345803#M12851</link>
    <description>&lt;P&gt;Hi&lt;/P&gt;

&lt;P&gt;We can find corrupted buckets in a multisite cluster with the following search / alert:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal component=CMMaster state=Discard incoming_bucket_size=* earliest=-30d@d 
| dedup bid 
| table _time,bid,peer_name,existing_bucket_size,incoming_bucket_size
| sort bid,_time
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;This shows the bucket ID and the source peer.&lt;/P&gt;

&lt;P&gt;r. Ismo&lt;/P&gt;</description>
    <pubDate>Wed, 25 Apr 2018 06:26:31 GMT</pubDate>
    <dc:creator>isoutamo</dc:creator>
    <dc:date>2018-04-25T06:26:31Z</dc:date>
    <item>
      <title>Proactively monitor for bucket corruption</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345797#M12845</link>
      <description>&lt;P&gt;I just repaired corrupt buckets for a partner index on one of our enterprise indexers.&lt;BR /&gt;
The issue only became apparent after the customer saw the warnings on their reports.&lt;/P&gt;

&lt;P&gt;My question is: are there easy proactive warnings the administrators can receive highlighting index bucket corruption -- rather than leaving it up to our customers to find the problems?&lt;/P&gt;</description>
      <pubDate>Wed, 02 Aug 2017 16:00:46 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345797#M12845</guid>
      <dc:creator>jamesoconnell</dc:creator>
      <dc:date>2017-08-02T16:00:46Z</dc:date>
    </item>
    <item>
      <title>Re: Proactively monitor for bucket corruption</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345798#M12846</link>
      <description>&lt;P&gt;If you are using the Monitoring Console, that would be a good starting point; it has visibility into indexer clustering activity. The link below might get you started: it lists the relevant dashboards and searches, so you may be able to set up alerts on them. The cluster master's Settings -&amp;gt; Indexer Clustering page might give you some insight too.&lt;BR /&gt;
&lt;A href="https://docs.splunk.com/Documentation/Splunk/6.6.2/Indexer/Viewindexerclusteringstatus"&gt;https://docs.splunk.com/Documentation/Splunk/6.6.2/Indexer/Viewindexerclusteringstatus&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 02 Aug 2017 17:09:55 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345798#M12846</guid>
      <dc:creator>bheemireddi</dc:creator>
      <dc:date>2017-08-02T17:09:55Z</dc:date>
    </item>
    <item>
      <title>Re: Proactively monitor for bucket corruption</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345799#M12847</link>
      <description>&lt;P&gt;Would you provide more detail on how you identified that the buckets were corrupted? That might add color to an existing way to be notified.&lt;/P&gt;</description>
      <pubDate>Wed, 02 Aug 2017 18:52:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345799#M12847</guid>
      <dc:creator>sloshburch</dc:creator>
      <dc:date>2017-08-02T18:52:50Z</dc:date>
    </item>
    <item>
      <title>Re: Proactively monitor for bucket corruption</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345800#M12848</link>
      <description>&lt;P&gt;There was an exclamation symbol / warning on the Dashboard with some cryptic message saying there was an error related to the indexer in question:  "[indexer_] Streamed search execute failed because: JournalSliceDirectory: Cannot seek to rawdata offset 0 ..."&lt;BR /&gt;
This type of error scares the crap out of users, and they freak out to the admin...&lt;/P&gt;</description>
      <pubDate>Wed, 02 Aug 2017 21:37:46 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345800#M12848</guid>
      <dc:creator>jamesoconnell</dc:creator>
      <dc:date>2017-08-02T21:37:46Z</dc:date>
    </item>
    <item>
      <title>Re: Proactively monitor for bucket corruption</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345801#M12849</link>
      <description>&lt;P&gt;A peer of mine shared this search. Does it jibe with your environment? I want to see if we can add these things into the MC as well, so I'm curious to hear how you make out.&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal sourcetype=splunkd component=ProcessTracker (BucketBuilder OR JournalSlice) (NOT "rawdata was truncated")
|eval message=replace(message, "^\(child.*?\)\s+", "")
|bin _time span=1m
|stats c by _time, host, splunk_server, message
|fields - c
|rename splunk_server as Indexer, host as Host, message as Issue
&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Fri, 11 Aug 2017 16:21:11 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345801#M12849</guid>
      <dc:creator>sloshburch</dc:creator>
      <dc:date>2017-08-11T16:21:11Z</dc:date>
    </item>
    <item>
      <title>Re: Proactively monitor for bucket corruption</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345802#M12850</link>
      <description>&lt;P&gt;Thank you Mr. Burch.  I tried running this but didn't get any results.&lt;/P&gt;

&lt;P&gt;This could either mean that we don't have any bucket issues, or your search isn't worth the paper it is written on -- not sure which.  &lt;/P&gt;

&lt;P&gt;I'm not sure where the truth lies yet, but I am guessing we must have some bucket issues somewhere given the amount of data we pump each day.&lt;/P&gt;

&lt;P&gt;More testing required I think.&lt;/P&gt;

&lt;P&gt;thank you!&lt;/P&gt;</description>
      <pubDate>Fri, 11 Aug 2017 22:50:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345802#M12850</guid>
      <dc:creator>jamesoconnell</dc:creator>
      <dc:date>2017-08-11T22:50:47Z</dc:date>
    </item>
    <item>
      <title>Re: Proactively monitor for bucket corruption</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345803#M12851</link>
      <description>&lt;P&gt;Hi&lt;/P&gt;

&lt;P&gt;We can find corrupted buckets in a multisite cluster with the following search / alert:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal component=CMMaster state=Discard incoming_bucket_size=* earliest=-30d@d 
| dedup bid 
| table _time,bid,peer_name,existing_bucket_size,incoming_bucket_size
| sort bid,_time
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;This shows the bucket ID and the source peer.&lt;/P&gt;
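
&lt;P&gt;If you want to schedule this as an alert, one possible variant (just a sketch built on the same CMMaster discard events as the search above, not an official recipe) aggregates recent discards per peer, so the alert can trigger whenever the count is nonzero:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal component=CMMaster state=Discard incoming_bucket_size=* earliest=-24h
| dedup bid
| stats count AS discarded_buckets values(bid) AS bids BY peer_name
&lt;/CODE&gt;&lt;/PRE&gt;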

&lt;P&gt;r. Ismo&lt;/P&gt;</description>
      <pubDate>Wed, 25 Apr 2018 06:26:31 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345803#M12851</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2018-04-25T06:26:31Z</dc:date>
    </item>
    <item>
      <title>Re: Proactively monitor for bucket corruption</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345804#M12852</link>
      <description>&lt;P&gt;We couldn't see any issues with the previous search either, but there are still a couple of corrupted buckets (e.g. a journal.gz that was only a couple of bytes).&lt;/P&gt;</description>
      <pubDate>Thu, 26 Apr 2018 07:31:43 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/345804#M12852</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2018-04-26T07:31:43Z</dc:date>
    </item>
    <item>
      <title>Re: Proactively monitor for bucket corruption</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/522489#M17998</link>
      <description>&lt;P&gt;Even though this is an old case, I would like to add what one can do with current versions.&lt;/P&gt;&lt;P&gt;Just run this:&lt;/P&gt;&lt;LI-CODE lang="java"&gt;| dbinspect index=* OR index=_* corruptonly=true 
| search state!=hot&lt;/LI-CODE&gt;&lt;P&gt;Select a long enough time period to find all corrupted buckets.&lt;/P&gt;&lt;P&gt;r. Ismo&lt;/P&gt;</description>
      <pubDate>Thu, 01 Oct 2020 13:32:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Proactively-monitor-for-bucket-corruption/m-p/522489#M17998</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2020-10-01T13:32:41Z</dc:date>
    </item>
  </channel>
</rss>

