<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Search Head Cluster: Failed HMAC signature match errors in Splunk Enterprise</title>
    <link>https://community.splunk.com/t5/Splunk-Enterprise/Search-Head-Cluster-Failed-HMAC-signature-match-errors/m-p/629310#M15288</link>
    <description>&lt;P&gt;Hi, I am running a Search Head Cluster with 7 search heads on Splunk 8.2.9.&lt;/P&gt;&lt;P&gt;2 of the search heads are generating the following error messages at roughly 5-second intervals for a period of time before stopping:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;ERROR DigestProcessor [38271 TcpChannelThread] - Failed signature match
ERROR HTTPAuthManager [38271 TcpChannelThread] - Failed to verify HMAC signature, uri: /services/shcluster/member/consensus/pseudoid/raft_request_vote?output_mode=json&lt;/LI-CODE&gt;&lt;P&gt;As far as I can tell, the search head cluster is otherwise running as expected.&lt;/P&gt;&lt;P&gt;From examining the logs, the 2 search heads producing these errors are the only ones that have been elected captain in the last 30 days. There are no preferred-captain or similar configurations set.&lt;/P&gt;&lt;P&gt;I have checked the [shclustering] pass4SymmKey values on each search head. They are all configured to the same value, although each instance uses a different splunk.secret to encrypt it.&lt;/P&gt;&lt;P&gt;Unfortunately, I am not sure when the errors first started appearing, so I can't link them to a specific upgrade or configuration change.&lt;/P&gt;&lt;P&gt;The thread_id values seem to stay around for between 10 and 30 minutes. Sometimes 2 thread_ids are active at once; sometimes none are active for a period. When looking at other logs for a particular thread_id around the same time period (at INFO logging level), I can't see anything that adds any more clues as to what is causing the errors.&lt;/P&gt;</description>
    <pubDate>Thu, 02 Feb 2023 10:32:05 GMT</pubDate>
    <dc:creator>a_kearney</dc:creator>
    <dc:date>2023-02-02T10:32:05Z</dc:date>
    <item>
      <title>Search Head Cluster: Failed HMAC signature match errors</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Search-Head-Cluster-Failed-HMAC-signature-match-errors/m-p/629310#M15288</link>
      <description>&lt;P&gt;Hi, I am running a Search Head Cluster with 7 search heads on Splunk 8.2.9.&lt;/P&gt;&lt;P&gt;2 of the search heads are generating the following error messages at roughly 5-second intervals for a period of time before stopping:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;ERROR DigestProcessor [38271 TcpChannelThread] - Failed signature match
ERROR HTTPAuthManager [38271 TcpChannelThread] - Failed to verify HMAC signature, uri: /services/shcluster/member/consensus/pseudoid/raft_request_vote?output_mode=json&lt;/LI-CODE&gt;&lt;P&gt;As far as I can tell, the search head cluster is otherwise running as expected.&lt;/P&gt;&lt;P&gt;From examining the logs, the 2 search heads producing these errors are the only ones that have been elected captain in the last 30 days. There are no preferred-captain or similar configurations set.&lt;/P&gt;&lt;P&gt;I have checked the [shclustering] pass4SymmKey values on each search head. They are all configured to the same value, although each instance uses a different splunk.secret to encrypt it.&lt;/P&gt;&lt;P&gt;Unfortunately, I am not sure when the errors first started appearing, so I can't link them to a specific upgrade or configuration change.&lt;/P&gt;&lt;P&gt;The thread_id values seem to stay around for between 10 and 30 minutes. Sometimes 2 thread_ids are active at once; sometimes none are active for a period. When looking at other logs for a particular thread_id around the same time period (at INFO logging level), I can't see anything that adds any more clues as to what is causing the errors.&lt;/P&gt;</description>
      <pubDate>Thu, 02 Feb 2023 10:32:05 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Search-Head-Cluster-Failed-HMAC-signature-match-errors/m-p/629310#M15288</guid>
      <dc:creator>a_kearney</dc:creator>
      <dc:date>2023-02-02T10:32:05Z</dc:date>
    </item>
  </channel>
</rss>

