<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: KVStore failure after upgrade to 9.0 in Splunk Enterprise</title>
    <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-there-KVStore-failure-after-upgrade-to-9-0/m-p/603813#M13076</link>
    <description>Re: KVStore failure after upgrade to 9.0 in Splunk Enterprise</description>
    <pubDate>Wed, 29 Jun 2022 16:13:16 GMT</pubDate>
    <dc:creator>bigfatyeastroll</dc:creator>
    <dc:date>2022-06-29T16:13:16Z</dc:date>
    <item>
      <title>Why is there KVStore failure after upgrade to 9.0?</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-there-KVStore-failure-after-upgrade-to-9-0/m-p/603808#M13073</link>
      <description>&lt;P&gt;After upgrading to Splunk 9.0 on a single instance, we occasionally get KV Store errors.&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Splunk KV Store Errors.png" style="width: 341px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/20346iADEEEF7BD519C8ED/image-size/large?v=v2&amp;amp;px=999" role="button" title="Splunk KV Store Errors.png" alt="Splunk KV Store Errors.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;CLI status shows:&lt;/P&gt;
&lt;P&gt;This member:&lt;BR /&gt;backupRestoreStatus : Ready&lt;BR /&gt;disabled : 0&lt;BR /&gt;featureCompatibilityVersion : An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [connection closed calling ismaster on '127.0.0.1:8191']&lt;BR /&gt;guid : E8254C08-B854-426C-B66D-7072D625D0F6&lt;BR /&gt;port : 8191&lt;BR /&gt;standalone : 1&lt;BR /&gt;status : failed&lt;BR /&gt;storageEngine : mmapv1&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I've looked on Splunk.com and googled, but haven't found anything about single instances beyond reinstalling 9.0, which I've done twice.&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jun 2022 23:07:07 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-there-KVStore-failure-after-upgrade-to-9-0/m-p/603808#M13073</guid>
      <dc:creator>bigfatyeastroll</dc:creator>
      <dc:date>2022-06-29T23:07:07Z</dc:date>
    </item>
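The status output quoted above comes from the KV store health check; a minimal sketch of how to reproduce it from the CLI (the paths assume a default /opt/splunk single-instance install and are guarded so they no-op elsewhere):

```shell
# Inspect KV store health on a single Splunk instance.
# SPLUNK_HOME below is an assumption; adjust to your install path.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"

if [ -x "$SPLUNK_HOME/bin/splunk" ]; then
    # Source of the "status : failed" / "storageEngine : mmapv1" output above
    "$SPLUNK_HOME/bin/splunk" show kvstore-status --verbose
fi

if [ -f "$SPLUNK_HOME/var/log/splunk/mongod.log" ]; then
    # The KV store is backed by mongod, so its log usually holds the root cause
    tail -n 20 "$SPLUNK_HOME/var/log/splunk/mongod.log"
fi
```

Note the `storageEngine : mmapv1` line in the status output: Splunk 9.0 bundles a newer MongoDB that no longer supports mmapv1, which is another reason an upgraded KV store can fail to start.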
    <item>
      <title>Re: KVStore failure after upgrade to 9.0</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-there-KVStore-failure-after-upgrade-to-9-0/m-p/603809#M13074</link>
      <description>&lt;P&gt;Hi&lt;/P&gt;&lt;P&gt;What did you find in splunkd.log and mongod.log?&lt;BR /&gt;Have you manually migrated the mongo version?&lt;/P&gt;&lt;P&gt;r. Ismo&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jun 2022 16:03:43 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-there-KVStore-failure-after-upgrade-to-9-0/m-p/603809#M13074</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2022-06-29T16:03:43Z</dc:date>
    </item>
    <item>
      <title>Re: KVStore failure after upgrade to 9.0</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-there-KVStore-failure-after-upgrade-to-9-0/m-p/603813#M13076</link>
      <description>&lt;P&gt;Here are the following tails from the log files:&lt;/P&gt;&lt;P&gt;root@splunk:/opt/splunk/var/log/splunk# tail mongod.log&lt;BR /&gt;2022-06-28T19:40:57.749Z F - [main] Fatal Assertion 28652 at src/mongo/util/net/ssl_manager_openssl.cpp 987&lt;BR /&gt;2022-06-28T19:40:57.749Z F - [main] \n\n***aborting after fassert() failure\n\n&lt;BR /&gt;2022-06-28T20:26:48.752Z W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release.&lt;BR /&gt;2022-06-28T20:26:48.763Z F NETWORK [main] The provided SSL certificate is expired or not yet valid.&lt;BR /&gt;2022-06-28T20:26:48.763Z F - [main] Fatal Assertion 28652 at src/mongo/util/net/ssl_manager_openssl.cpp 987&lt;BR /&gt;2022-06-28T20:26:48.763Z F - [main] \n\n***aborting after fassert() failure\n\n&lt;BR /&gt;2022-06-29T15:04:51.137Z W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release.&lt;BR /&gt;2022-06-29T15:04:51.149Z F NETWORK [main] The provided SSL certificate is expired or not yet valid.&lt;BR /&gt;2022-06-29T15:04:51.149Z F - [main] Fatal Assertion 28652 at src/mongo/util/net/ssl_manager_openssl.cpp 987&lt;BR /&gt;2022-06-29T15:04:51.149Z F - [main] \n\n***aborting after fassert() failure\n\n&lt;BR /&gt;root@splunk:/opt/splunk/var/log/splunk# tail splunkd.log&lt;BR /&gt;06-29-2022 11:11:14.993 -0500 WARN AuthorizationManager [44284 SavedSearchFetcher] - Unknown role 'winfra-admin'&lt;BR /&gt;06-29-2022 11:11:14.994 -0500 WARN AuthorizationManager [44284 SavedSearchFetcher] - Unknown role 'winfra-admin'&lt;BR /&gt;06-29-2022 11:11:14.994 -0500 WARN AuthorizationManager [44284 SavedSearchFetcher] - Unknown role 'winfra-admin'&lt;BR /&gt;06-29-2022 11:11:18.206 -0500 WARN HttpListener [44522 webui] - Socket error from 161.31.28.185:52910 while idling: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol&lt;BR /&gt;06-29-2022 11:11:18.214 -0500 WARN HttpListener [44522 webui] - Socket error from 
161.31.28.185:52911 while idling: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol&lt;BR /&gt;06-29-2022 11:11:18.219 -0500 WARN HttpListener [44522 webui] - Socket error from 161.31.28.185:52912 while idling: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol&lt;BR /&gt;06-29-2022 11:11:24.098 -0500 INFO ExecProcessor [44189 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_assist/bin/instance_id_modular_input.py" [assist::instance_id_modular_input.py:228] [get_server_roles] [65977] Fetched server roles, roles=['indexer', 'license_master', 'license_manager']&lt;BR /&gt;06-29-2022 11:11:24.107 -0500 INFO ExecProcessor [44189 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_assist/bin/instance_id_modular_input.py" [assist::instance_id_modular_input.py:256] [get_cluster_mode] [65977] Fetched cluster mode, mode=disabled&lt;BR /&gt;06-29-2022 11:11:24.107 -0500 INFO ExecProcessor [44189 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_assist/bin/instance_id_modular_input.py" [assist::instance_id_modular_input.py:30] [should_run] [65977] should run test, sh=False&lt;BR /&gt;06-29-2022 11:11:26.891 -0500 INFO TailReader [44264 tailreader0] - Batch input finished reading file='/opt/splunk/var/spool/splunk/tracker.log'&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jun 2022 16:13:16 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-there-KVStore-failure-after-upgrade-to-9-0/m-p/603813#M13076</guid>
      <dc:creator>bigfatyeastroll</dc:creator>
      <dc:date>2022-06-29T16:13:16Z</dc:date>
    </item>
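The mongod log above points at the cause directly ("The provided SSL certificate is expired or not yet valid"). One way to confirm, assuming the default single-instance certificate path, is to ask openssl for the cert's validity window:

```shell
# Check the validity window of Splunk's server certificate.
# The path is the default location; adjust SPLUNK_HOME if your install differs.
CERT="${SPLUNK_HOME:-/opt/splunk}/etc/auth/server.pem"

if [ -f "$CERT" ]; then
    # Print the notBefore/notAfter dates
    openssl x509 -noout -dates -in "$CERT"
    # Exit status tells you whether it has already expired
    if openssl x509 -noout -checkend 0 -in "$CERT"; then
        echo "certificate is still valid"
    else
        echo "certificate has expired"
    fi
fi
```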
    <item>
      <title>Re: KVStore failure after upgrade to 9.0</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-there-KVStore-failure-after-upgrade-to-9-0/m-p/629648#M15304</link>
      <description>&lt;P&gt;Your logs indicate an old PEM cert. Have you tried renaming the server.pem file under splunk/etc/auth and then restarting the splunk service? Most KV Store issues are resolved with this action.&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;2022-06-29T15:04:51.149Z F NETWORK [main] The provided SSL certificate is expired or not yet valid.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 05 Feb 2023 14:04:40 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-is-there-KVStore-failure-after-upgrade-to-9-0/m-p/629648#M15304</guid>
      <dc:creator>pavankumarh</dc:creator>
      <dc:date>2023-02-05T14:04:40Z</dc:date>
    </item>
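The fix suggested above can be sketched as follows. This is a hedged sketch, not an official procedure: SPLUNK_HOME and the backup naming are assumptions, and it relies on Splunk regenerating its default self-signed server.pem on the next restart.

```shell
# Move the expired cert aside so Splunk regenerates it on restart.
# SPLUNK_HOME is an assumption; adjust to your install path.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
CERT="$SPLUNK_HOME/etc/auth/server.pem"

if [ -f "$CERT" ]; then
    # Keep a timestamped backup rather than deleting outright
    mv "$CERT" "$CERT.bak.$(date +%Y%m%d%H%M%S)"
    # Splunk recreates server.pem during startup
    "$SPLUNK_HOME/bin/splunk" restart
fi
```

If you use a custom (non-Splunk-generated) certificate, replace it with a renewed one from your CA instead; Splunk only auto-regenerates its own default self-signed cert.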
  </channel>
</rss>

