<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Why is Search Head Cluster silently out of sync (version 8.2.3)? in Splunk Enterprise</title>
    <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-Search-Head-Cluster-silently-out-of-sync-version-8-2-3/m-p/646460#M16548</link>
    <description>&lt;P&gt;I'm a Splunk PS admin working at a client site and I wanted to post a challenge and resolution that we encountered.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Problem:&lt;BR /&gt;&lt;/STRONG&gt;The client reported missing knowledge objects in a custom app's private area: they expected ~40 reports but found only ~17. They had last used the reports 7 days prior and asked Splunk PS to investigate.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Environment:&lt;BR /&gt;&lt;/STRONG&gt;3-instance SHC&lt;BR /&gt;Version 8.2.3, Linux&lt;BR /&gt;&amp;gt;15 Indexers&lt;BR /&gt;&amp;gt;50 users across the platform&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Troubleshooting Approach:&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Verified that the given Knowledge Objects (KOs) had not been deleted: a simple SPL search in &lt;FONT face="courier new,courier"&gt;index="_audit"&lt;/FONT&gt; for the app over the last 10 days of logs showed no evidence of deletion.&lt;/LI&gt;
&lt;LI&gt;Via the CLI, changed directory to the given custom app and counted the stanzas in savedsearches.conf; the count was 17:&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="courier new,courier"&gt;grep -cP "^\[" savedsearches.conf&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Switched to another SH member and repeated the commands; the count was 44. Verified the third member as well, where the count was also 44.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Conclusion: the member with 17 saved searches was clearly out of sync and was missing recent KOs.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Checked captaincy with &lt;/FONT&gt;&lt;FONT face="courier new,courier"&gt;./splunk show shcluster-status --verbose&lt;/FONT&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;; all appeared correct.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;The member with limited objects was the current captain; &lt;FONT face="courier new,courier"&gt;out_of_sync_node : 0&lt;/FONT&gt; was reported on all three instances in the cluster.&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Remediation:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Verified the Monitoring Console, no alerts listed, health check issues or evidence of errors.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Created a backup of this users savedsearches.conf (on one instance)&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="courier new,courier"&gt;cp&amp;nbsp;savedsearches.conf&amp;nbsp;savedsearches.bak&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Following the Splunk Docs&amp;nbsp;&lt;A title="Splunk SHC: perform a manual resync" href="https://docs.splunk.com/Documentation/Splunk/8.2.3/DistSearch/HowconfrepoworksinSHC#Perform_a_manual_resync" target="_blank" rel="noopener"&gt;SHC: perform a manual resync&lt;/A&gt;&amp;nbsp;we moved the captain to an instance with the correct number of KO's&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="courier new,courier"&gt;./splunk transfer shcluster-captain -mgmt_uri https://&amp;lt;server&amp;gt;:8089&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Carefully issued the destructive command onto the out-of-sync instance:&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="courier new,courier"&gt;./splunk resync shcluster-replicated-config&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Repeated this for the second SHC member&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Repeated checks all three members now in-sync&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Follow-up:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;We were unable to locate a release notes item that suggests this is a bug.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;There had previously been a period of downtime for the out-of-sync member, its Splunk daemon had stopped following a push from the Deployer.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Still no alerts in the MC, nor logs per the docs to indicate e.g.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;&lt;FONT face="courier new,courier"&gt;Error pulling configurations from the search head cluster captain; consider performing a destructive configuration resync on this search head cluster member.&lt;/FONT&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Conclusions:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;The cluster was silently Out-of-Sync&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Many KO's across multiple apps would have been affected&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Follow the Splunk Docs&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Recommend&amp;nbsp;to client to upgrade to latest version 9.x.&amp;nbsp;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
    <pubDate>Fri, 09 Jun 2023 20:59:38 GMT</pubDate>
    <dc:creator>NullZero</dc:creator>
    <dc:date>2023-06-09T20:59:38Z</dc:date>
    <item>
      <title>Why is Search Head Cluster silently out of sync (version 8.2.3)?</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-Search-Head-Cluster-silently-out-of-sync-version-8-2-3/m-p/646460#M16548</link>
      <description>&lt;P&gt;I'm a Splunk PS admin working at a client site and I wanted to post a challenge and resolution that we encountered.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Problem:&lt;BR /&gt;&lt;/STRONG&gt;The client reported missing knowledge objects in a custom app's private area: they expected ~40 reports but found only ~17. They had last used the reports 7 days prior and asked Splunk PS to investigate.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Environment:&lt;BR /&gt;&lt;/STRONG&gt;3-instance SHC&lt;BR /&gt;Version 8.2.3, Linux&lt;BR /&gt;&amp;gt;15 Indexers&lt;BR /&gt;&amp;gt;50 users across the platform&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Troubleshooting Approach:&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Verified that the given Knowledge Objects (KOs) had not been deleted: a simple SPL search in &lt;FONT face="courier new,courier"&gt;index="_audit"&lt;/FONT&gt; for the app over the last 10 days of logs showed no evidence of deletion.&lt;/LI&gt;
&lt;LI&gt;Via the CLI, changed directory to the given custom app and counted the stanzas in savedsearches.conf; the count was 17:&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="courier new,courier"&gt;grep -cP "^\[" savedsearches.conf&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Switched to another SH member and repeated the commands; the count was 44. Verified the third member as well, where the count was also 44.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Conclusion: the member with 17 saved searches was clearly out of sync and was missing recent KOs.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Checked captaincy with &lt;/FONT&gt;&lt;FONT face="courier new,courier"&gt;./splunk show shcluster-status --verbose&lt;/FONT&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;; all appeared correct.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;The member with limited objects was the current captain; &lt;FONT face="courier new,courier"&gt;out_of_sync_node : 0&lt;/FONT&gt; was reported on all three instances in the cluster.&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
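&lt;P&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;The per-member stanza count above can be wrapped in a small shell sketch (assumptions: every stanza header is a line beginning with "[", and the conf path shown is illustrative); run the same count on each SHC member and compare the totals, since a mismatch points at the out-of-sync member:&lt;/FONT&gt;&lt;/P&gt;

```shell
#!/bin/sh
# Sketch: count saved-search stanzas in a conf file.
# Assumption: each stanza header is a line starting with '['.
count_stanzas() {
    grep -c '^\[' "$1"
}

# Illustrative usage on one SHC member (path is an assumption; adjust app/user):
# count_stanzas /opt/splunk/etc/apps/custom_app/local/savedsearches.conf
```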
&lt;P&gt;&lt;STRONG&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Remediation:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Verified the Monitoring Console, no alerts listed, health check issues or evidence of errors.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Created a backup of this users savedsearches.conf (on one instance)&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="courier new,courier"&gt;cp&amp;nbsp;savedsearches.conf&amp;nbsp;savedsearches.bak&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Following the Splunk Docs&amp;nbsp;&lt;A title="Splunk SHC: perform a manual resync" href="https://docs.splunk.com/Documentation/Splunk/8.2.3/DistSearch/HowconfrepoworksinSHC#Perform_a_manual_resync" target="_blank" rel="noopener"&gt;SHC: perform a manual resync&lt;/A&gt;&amp;nbsp;we moved the captain to an instance with the correct number of KO's&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="courier new,courier"&gt;./splunk transfer shcluster-captain -mgmt_uri https://&amp;lt;server&amp;gt;:8089&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Carefully issued the destructive command onto the out-of-sync instance:&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="courier new,courier"&gt;./splunk resync shcluster-replicated-config&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Repeated this for the second SHC member&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Repeated checks all three members now in-sync&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
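&lt;P&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;The backup step before the destructive resync can be made slightly safer with a timestamped copy (a sketch only; the plain .bak copy above works, but a dated name avoids clobbering an earlier backup):&lt;/FONT&gt;&lt;/P&gt;

```shell
#!/bin/sh
# Sketch: timestamped backup of a conf file before a destructive resync.
backup_conf() {
    src="$1"
    # -p preserves mode and timestamps where the platform allows it
    cp -p "$src" "$src.$(date +%Y%m%d-%H%M%S).bak"
}

# Illustrative usage (path is an assumption; adjust app/user):
# backup_conf /opt/splunk/etc/users/USER/custom_app/local/savedsearches.conf
```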
&lt;P&gt;&lt;STRONG&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Follow-up:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;We were unable to locate a release notes item that suggests this is a bug.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;There had previously been a period of downtime for the out-of-sync member, its Splunk daemon had stopped following a push from the Deployer.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Still no alerts in the MC, nor logs per the docs to indicate e.g.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;&lt;FONT face="courier new,courier"&gt;Error pulling configurations from the search head cluster captain; consider performing a destructive configuration resync on this search head cluster member.&lt;/FONT&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Conclusions:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;The cluster was silently Out-of-Sync&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Many KO's across multiple apps would have been affected&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Follow the Splunk Docs&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;Recommend&amp;nbsp;to client to upgrade to latest version 9.x.&amp;nbsp;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Fri, 09 Jun 2023 20:59:38 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-Search-Head-Cluster-silently-out-of-sync-version-8-2-3/m-p/646460#M16548</guid>
      <dc:creator>NullZero</dc:creator>
      <dc:date>2023-06-09T20:59:38Z</dc:date>
    </item>
    <item>
      <title>Re: Why is Search Head Cluster silently out of sync (version 8.2.3)?</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-Search-Head-Cluster-silently-out-of-sync-version-8-2-3/m-p/669575#M17911</link>
      <description>&lt;P&gt;Saved my day&lt;span class="lia-unicode-emoji" title=":beaming_face_with_smiling_eyes:"&gt;😁&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 23 Nov 2023 13:23:04 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-Search-Head-Cluster-silently-out-of-sync-version-8-2-3/m-p/669575#M17911</guid>
      <dc:creator>splunkoptimus</dc:creator>
      <dc:date>2023-11-23T13:23:04Z</dc:date>
    </item>
    <item>
      <title>Re: Why is Search Head Cluster silently out of sync (version 8.2.3)?</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-Search-Head-Cluster-silently-out-of-sync-version-8-2-3/m-p/672356#M18173</link>
      <description>&lt;P&gt;That's an awesome explanation&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/224138"&gt;@NullZero&lt;/a&gt;.... We are facing similar issues, but sort of different way...&lt;BR /&gt;&lt;BR /&gt;We have 2 node Search Head Cluster... among which one is static captain... another one is a member.&lt;BR /&gt;Often the non-captain member goes out of cluster (It is not showing in the Search head clustering page).. every time we are manually restarting the Splunk or the entire EC2 of the member.. then it is showing in the cluster page....&lt;/P&gt;&lt;P&gt;Can i use the re-sync command to solve the issue, instead of restarting the Splunk or EC2? will it help?&lt;/P&gt;&lt;P&gt;Thanks for your help&amp;nbsp;&lt;span class="lia-unicode-emoji" title=":smiling_face_with_smiling_eyes:"&gt;😊&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 20 Dec 2023 03:31:04 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-Search-Head-Cluster-silently-out-of-sync-version-8-2-3/m-p/672356#M18173</guid>
      <dc:creator>murugansplunkin</dc:creator>
      <dc:date>2023-12-20T03:31:04Z</dc:date>
    </item>
    <item>
      <title>Re: Why is Search Head Cluster silently out of sync (version 8.2.3)?</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-Search-Head-Cluster-silently-out-of-sync-version-8-2-3/m-p/673731#M18307</link>
      <description>&lt;P&gt;Check the logs for connectivity issues.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 10 Jan 2024 06:28:36 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-Search-Head-Cluster-silently-out-of-sync-version-8-2-3/m-p/673731#M18307</guid>
      <dc:creator>splunkoptimus</dc:creator>
      <dc:date>2024-01-10T06:28:36Z</dc:date>
    </item>
  </channel>
</rss>

