<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: why is the cluster master not able to fixup buckets (generation tab) &quot;cannot fix up search factor as bucket is not serviceable&quot; in Deployment Architecture</title>
    <link>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397400#M14395</link>
    <description>&lt;P&gt;When the | delete command is issued in a search, the data isn't actually deleted from disk. Instead, Splunk creates a "deletes" directory in the bucket and will no longer return those events in search.&lt;BR /&gt;
For example, on an indexer:&lt;BR /&gt;
$SPLUNK_HOME/var/lib/splunk/defaultdb/db/db_1546932848_1546891149_15_4720CDA9-5F9B-4CE1-BB0D-10A6F555A1E4/rawdata/deletes&lt;BR /&gt;
[root@indexer01 deletes]# zcat 38602ccf63e998fa1823f9f664055448.csv.gz &lt;BR /&gt;
timestamp,event_address,type_id,host_id,source_id,sourcetype_id&lt;BR /&gt;
1546932848,1846,0,2,1,1&lt;BR /&gt;
1546932848,1844,0,2,1,1&lt;BR /&gt;
1546932848,1842,0,2,1,1&lt;BR /&gt;
1546932848,1840,0,2,1,1&lt;BR /&gt;
1546932848,1838,0,2,1,1&lt;BR /&gt;
1546932848,1836,0,2,1,1&lt;BR /&gt;
1546932848,1834,0,2,1,1&lt;/P&gt;

&lt;P&gt;First, the primary copy of the bucket will have the new "deletes" directory.&lt;/P&gt;

&lt;P&gt;All peers that hold a copy of this bucket then need to have the "deletes" directory in sync.&lt;/P&gt;

&lt;P&gt;The peer holding the primary bucket updates its checksum and reports the change to the cluster master.&lt;/P&gt;

&lt;P&gt;Subsequently, that peer initiates a peer-to-peer sync request to update the other peers holding this bucket; this sync happens over port 8089 between peers.&lt;/P&gt;

&lt;P&gt;If port 8089 is not open between indexers, the sync request fails and the affected buckets get stuck in a fixup loop that never completes.&lt;/P&gt;

&lt;P&gt;This appears in the cluster master's fixup list on the Generation tab as "cannot fix up search factor as bucket is not serviceable".&lt;/P&gt;

&lt;P&gt;If you see a log message like the one below in splunkd.log on an indexer, most likely port 8089 (Splunk's default management port) is not open between the indexers, and it needs to be:&lt;/P&gt;

&lt;P&gt;01-08-2019 16:15:57.292 -0800 ERROR CMRepJob - job=CMSyncP2PJob bid= my_guid= my_rawport=9887 my_usessl=0 ot_guid= ot_hp=10.10.10.1:8089 ot_rawport=9887 ot_usessl=0 relative_path= custact=p2p_syncup getHttpReply failed; err: Connect Timeout&lt;/P&gt;

&lt;P&gt;Once that port is opened, the fixup tasks should complete and be removed from the cluster master's fixup activities.&lt;/P&gt;</description>
    <pubDate>Tue, 29 Sep 2020 22:41:57 GMT</pubDate>
    <dc:creator>rphillips_splk</dc:creator>
    <dc:date>2020-09-29T22:41:57Z</dc:date>
    <item>
      <title>why is the cluster master not able to fixup buckets (generation tab) "cannot fix up search factor as bucket is not serviceable"</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397399#M14394</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Problem:&lt;/STRONG&gt;&lt;BR /&gt;
My cluster master is reporting fixup tasks under Bucket Status &gt; Generation tab with the status "&lt;STRONG&gt;cannot fix up search factor as bucket is not serviceable&lt;/STRONG&gt;", but these buckets never get fixed.&lt;/P&gt;</description>
      <pubDate>Fri, 11 Jan 2019 23:11:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397399#M14394</guid>
      <dc:creator>rphillips_splk</dc:creator>
      <dc:date>2019-01-11T23:11:41Z</dc:date>
    </item>
    <item>
      <title>Re: why is the cluster master not able to fixup buckets (generation tab) "cannot fix up search factor as bucket is not serviceable"</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397400#M14395</link>
      <description>&lt;P&gt;When the | delete command is issued in a search, the data isn't actually deleted from disk. Instead, Splunk creates a "deletes" directory in the bucket and will no longer return those events in search.&lt;BR /&gt;
For example, on an indexer:&lt;BR /&gt;
$SPLUNK_HOME/var/lib/splunk/defaultdb/db/db_1546932848_1546891149_15_4720CDA9-5F9B-4CE1-BB0D-10A6F555A1E4/rawdata/deletes&lt;BR /&gt;
[root@indexer01 deletes]# zcat 38602ccf63e998fa1823f9f664055448.csv.gz &lt;BR /&gt;
timestamp,event_address,type_id,host_id,source_id,sourcetype_id&lt;BR /&gt;
1546932848,1846,0,2,1,1&lt;BR /&gt;
1546932848,1844,0,2,1,1&lt;BR /&gt;
1546932848,1842,0,2,1,1&lt;BR /&gt;
1546932848,1840,0,2,1,1&lt;BR /&gt;
1546932848,1838,0,2,1,1&lt;BR /&gt;
1546932848,1836,0,2,1,1&lt;BR /&gt;
1546932848,1834,0,2,1,1&lt;/P&gt;
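&lt;P&gt;For illustration, the "deletes" file above is just a gzip-compressed CSV, so it can be inspected programmatically as well as with zcat. A minimal sketch (the file name is taken from the example above; the column meanings are assumed from the header row shown):&lt;/P&gt;

```python
import csv
import gzip

def read_deletes(path):
    """Parse a gzip-compressed 'deletes' CSV and return its rows as dicts.

    Assumes the header shown above:
    timestamp,event_address,type_id,host_id,source_id,sourcetype_id
    """
    with gzip.open(path, mode="rt", newline="") as fh:
        return list(csv.DictReader(fh))

# Hypothetical usage on the indexer:
# rows = read_deletes("38602ccf63e998fa1823f9f664055448.csv.gz")
# print(len(rows), "tombstoned event addresses")
```

&lt;P&gt;Each row marks one event address as deleted; the raw events themselves remain on disk in rawdata.&lt;/P&gt;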

&lt;P&gt;First, the primary copy of the bucket will have the new "deletes" directory.&lt;/P&gt;

&lt;P&gt;All peers that hold a copy of this bucket then need to have the "deletes" directory in sync.&lt;/P&gt;

&lt;P&gt;The peer holding the primary bucket updates its checksum and reports the change to the cluster master.&lt;/P&gt;

&lt;P&gt;Subsequently, that peer initiates a peer-to-peer sync request to update the other peers holding this bucket; this sync happens over port 8089 between peers.&lt;/P&gt;

&lt;P&gt;If port 8089 is not open between indexers, the sync request fails and the affected buckets get stuck in a fixup loop that never completes.&lt;/P&gt;

&lt;P&gt;This appears in the cluster master's fixup list on the Generation tab as "cannot fix up search factor as bucket is not serviceable".&lt;/P&gt;

&lt;P&gt;If you see a log message like the one below in splunkd.log on an indexer, most likely port 8089 (Splunk's default management port) is not open between the indexers, and it needs to be:&lt;/P&gt;

&lt;P&gt;01-08-2019 16:15:57.292 -0800 ERROR CMRepJob - job=CMSyncP2PJob bid= my_guid= my_rawport=9887 my_usessl=0 ot_guid= ot_hp=10.10.10.1:8089 ot_rawport=9887 ot_usessl=0 relative_path= custact=p2p_syncup getHttpReply failed; err: Connect Timeout&lt;/P&gt;
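&lt;P&gt;A quick way to confirm whether the management port is reachable from one indexer to another is a plain TCP connect test (nc or telnet from the shell works just as well). A minimal sketch; the peer address in the comment is taken from the error above, and any blocked port simply returns False:&lt;/P&gt;

```python
import socket

def mgmt_port_open(host, port=8089, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical check against the peer from the error message above;
# a result of False is consistent with the "Connect Timeout" error:
# print(mgmt_port_open("10.10.10.1"))
```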

&lt;P&gt;Once that port is opened, the fixup tasks should complete and be removed from the cluster master's fixup activities.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 22:41:57 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397400#M14395</guid>
      <dc:creator>rphillips_splk</dc:creator>
      <dc:date>2020-09-29T22:41:57Z</dc:date>
    </item>
    <item>
      <title>Re: why is the cluster master not able to fixup buckets (generation tab) "cannot fix up search factor as bucket is not serviceable"</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397401#M14396</link>
      <description>&lt;P&gt;I’ve seen this before when frozen buckets were restored to just one of two indexers in their cluster.&lt;/P&gt;

&lt;P&gt;Buckets in the thaweddb path are "not serviceable" because by placing them in thawed you're telling Splunk you don't want them to be deleted. Splunk is also not going to replicate thawed buckets, because that would be a mess, so thawed buckets will likewise show as unserviceable.&lt;/P&gt;

&lt;P&gt;I mention this because the solution for non-serviceable thawed buckets would be different from the solution that worked above, in case someone arrives with a very similar symptom but a different situation.&lt;/P&gt;</description>
      <pubDate>Fri, 11 Jan 2019 23:30:29 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397401#M14396</guid>
      <dc:creator>jkat54</dc:creator>
      <dc:date>2019-01-11T23:30:29Z</dc:date>
    </item>
    <item>
      <title>Re: why is the cluster master not able to fixup buckets (generation tab) "cannot fix up search factor as bucket is not serviceable"</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397402#M14397</link>
      <description>&lt;P&gt;Our docs explain that the management port (default 8089) is the required port to have open between cluster peers; this port has always been required.&lt;BR /&gt;
&lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Systemrequirements#Ports_that_the_cluster_nodes_use"&gt;https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Systemrequirements#Ports_that_the_cluster_nodes_use&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;But who reads the docs all the time? I wish Splunk would check connectivity of the required ports and show a warning message on the Indexer Clustering page.&lt;/P&gt;</description>
      <pubDate>Fri, 11 Jan 2019 23:44:01 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397402#M14397</guid>
      <dc:creator>Masa</dc:creator>
      <dc:date>2019-01-11T23:44:01Z</dc:date>
    </item>
    <item>
      <title>Re: why is the cluster master not able to fixup buckets (generation tab) "cannot fix up search factor as bucket is not serviceable"</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397403#M14398</link>
      <description>&lt;P&gt;splunkd.log shows: ERROR CMRepJob - job=CMSyncP2PJob&lt;/P&gt;</description>
      <pubDate>Fri, 11 Jan 2019 23:56:52 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397403#M14398</guid>
      <dc:creator>rphillips_splk</dc:creator>
      <dc:date>2019-01-11T23:56:52Z</dc:date>
    </item>
    <item>
      <title>Re: why is the cluster master not able to fixup buckets (generation tab) "cannot fix up search factor as bucket is not serviceable"</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397404#M14399</link>
      <description>&lt;BLOCKQUOTE&gt;
&lt;P&gt;Wish Splunk checks connectivity of the required ports, and show warning message in Indexer Clustering page. &lt;/P&gt;
&lt;/BLOCKQUOTE&gt;

&lt;P&gt;@Masa enhancement SPL-164805 has been filed &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 16 Jan 2019 20:53:08 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397404#M14399</guid>
      <dc:creator>rphillips_splk</dc:creator>
      <dc:date>2019-01-16T20:53:08Z</dc:date>
    </item>
    <item>
      <title>Re: why is the cluster master not able to fixup buckets (generation tab) "cannot fix up search factor as bucket is not serviceable"</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397405#M14400</link>
      <description>&lt;P&gt;you're awesome, @rphillips_splunk &lt;/P&gt;</description>
      <pubDate>Thu, 17 Jan 2019 19:27:45 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/why-is-the-cluster-master-not-able-to-fixup-buckets-generation/m-p/397405#M14400</guid>
      <dc:creator>Masa</dc:creator>
      <dc:date>2019-01-17T19:27:45Z</dc:date>
    </item>
  </channel>
</rss>

