<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: kvstore issue in Splunk Enterprise</title>
    <link>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741214#M21822</link>
    <description>&lt;P&gt;I see that you are running Splunk on Windows?&lt;/P&gt;&lt;P&gt;I don't have much experience with how Windows internals work in current versions, but are you sure that Splunk can use all of the added memory without additional configuration? E.g. on Linux you must at least disable boot-start and re-enable it again; otherwise systemd doesn't know that Splunk is allowed to use the additional memory.&lt;/P&gt;</description>
    <pubDate>Sat, 08 Mar 2025 15:21:38 GMT</pubDate>
    <dc:creator>isoutamo</dc:creator>
    <dc:date>2025-03-08T15:21:38Z</dc:date>
    <item>
      <title>kvstore issue</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741190#M21813</link>
      <description>&lt;P&gt;Hello Splunkers!!&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;We are experiencing frequent &lt;STRONG&gt;KV Store crashes&lt;/STRONG&gt;, which are causing all reports to stop functioning. The error message observed is:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;"[ReplBatcher] out of memory."&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;This issue is significantly impacting our operations, as many critical reports rely on the KV Store for data retrieval and processing. Please help me get this fixed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="uagraw01_0-1741421379763.png" style="width: 400px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/38050iD883E69022FE92AF/image-size/medium?v=v2&amp;amp;px=400" role="button" title="uagraw01_0-1741421379763.png" alt="uagraw01_0-1741421379763.png" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks in advance!!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 08 Mar 2025 08:13:01 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741190#M21813</guid>
      <dc:creator>uagraw01</dc:creator>
      <dc:date>2025-03-08T08:13:01Z</dc:date>
    </item>
    <item>
      <title>Re: kvstore issue</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741191#M21814</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/70277"&gt;@uagraw01&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It sounds like your Splunk server is running out of RAM.&lt;/P&gt;&lt;P&gt;Could you confirm how much RAM your server has? You could run the following and let us know what is returned:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;index=_introspection host=YourHostname component=HostWide earliest=-60m
| dedup data.instance_guid
| table data.mem*&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;and&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| rest /services/server/info splunk_server=local
| table guid host physicalMemoryMB&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, have you recently added a large number of KV Store objects which might have caused the memory usage to grow quickly?&lt;/P&gt;&lt;P&gt;The query below should show how big the KV Store is; please let us know what you get back:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| rest /services/server/introspection/kvstore/collectionstats
| mvexpand data
| spath input=data
| rex field=ns "(?&amp;lt;App&amp;gt;.*)\.(?&amp;lt;Collection&amp;gt;.*)"
| eval dbsize=round(size/1024/1024, 2)
| eval indexsize=round(totalIndexSize/1024/1024, 2)
| stats first(count) AS "Number of Objects" first(nindexes) AS Accelerations first(indexsize) AS "Acceleration Size (MB)" first(dbsize) AS "Collection Size (MB)" by App, Collection&lt;/LI-CODE&gt;&lt;P&gt;It could be that you need to increase RAM to accommodate the demand on the server.&lt;/P&gt;&lt;P&gt;Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.&lt;BR /&gt;Regards&lt;/P&gt;&lt;P&gt;Will&lt;/P&gt;</description>
      <pubDate>Sat, 08 Mar 2025 08:18:19 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741191#M21814</guid>
      <dc:creator>livehybrid</dc:creator>
      <dc:date>2025-03-08T08:18:19Z</dc:date>
    </item>
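The size arithmetic in the collectionstats search above can be sketched outside SPL. A minimal Python sketch, assuming the endpoint reports size and totalIndexSize in bytes as the eval statements imply; the sample records and field values are invented for illustration:

```python
def bytes_to_mb(size_bytes: float) -> float:
    """Mirror the SPL eval round(size/1024/1024, 2): bytes to MB, two decimals."""
    return round(size_bytes / 1024 / 1024, 2)

# Hypothetical collectionstats-style records; "ns" is "app.collection",
# which the rex in the search splits greedily (i.e., at the last dot).
collections = [
    {"ns": "myapp.assets", "size": 52_428_800, "totalIndexSize": 1_048_576, "count": 120_000},
    {"ns": "search.lookup_cache", "size": 734_003_200, "totalIndexSize": 8_388_608, "count": 2_500_000},
]

for c in collections:
    app, _, coll = c["ns"].rpartition(".")
    print(app, coll, bytes_to_mb(c["size"]), bytes_to_mb(c["totalIndexSize"]), c["count"])
```

A collection whose "Collection Size (MB)" dwarfs the rest is the first place to look for runaway KV Store growth.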
    <item>
      <title>Re: kvstore issue</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741192#M21815</link>
      <description>&lt;P&gt;Hey Will, &lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/170906"&gt;@livehybrid&lt;/a&gt;, you’re even faster than GPT! &lt;span class="lia-unicode-emoji" title=":grinning_face_with_smiling_eyes:"&gt;😄&lt;/span&gt;&lt;/P&gt;&lt;P&gt;We've already upgraded our RAM from 32GB to 64GB.&lt;/P&gt;</description>
      <pubDate>Sat, 08 Mar 2025 08:24:53 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741192#M21815</guid>
      <dc:creator>uagraw01</dc:creator>
      <dc:date>2025-03-08T08:24:53Z</dc:date>
    </item>
    <item>
      <title>Re: kvstore issue</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741196#M21816</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/70277"&gt;@uagraw01&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Upgrading from 32GB to 64GB of RAM suggests raw memory capacity is no longer the main issue, but since the [ReplBatcher] out of memory error is still happening, the problem is likely elsewhere.&lt;/P&gt;&lt;P&gt;Check mongod memory usage during a crash: on Linux, run top or htop and sort by memory (the RES column) to see how much mongod is consuming. Also confirm that no OS-level limits are capping it: check ulimit -v (virtual memory) for the Splunk user; it should be unlimited or very high.&lt;/P&gt;</description>
      <pubDate>Sat, 08 Mar 2025 09:17:22 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741196#M21816</guid>
      <dc:creator>kiran_panchavat</dc:creator>
      <dc:date>2025-03-08T09:17:22Z</dc:date>
    </item>
    <item>
      <title>Re: kvstore issue</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741197#M21817</link>
      <description>&lt;P&gt;Ha&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/70277"&gt;@uagraw01&lt;/a&gt;&amp;nbsp;you caught me at a good time &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It sounds like RAM shouldn't really be an issue then, although it is possible to adjust how much memory mongod can use with server.conf/[kvstore]/percRAMForCache (see&amp;nbsp;&lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf?_gl=1*homgau*_ga*NzI2Njg4NjMzLjE2NTUzOTI4OTQ.*_gid*MTIyOTUyNTY3Mi4xNjU1MzkyODk0&amp;amp;_ga=2.206629523.1229525672.1655392894-726688633.1655392894#:~:text=splunkd_stop_timeout%27.%0A*%20Default%3A%20false-,percRAMForCache,-%3D%20%3Cpositive%20integer%3E%0A*%20The" target="_blank"&gt;https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf?_gl=1*homgau*_ga*NzI2Njg4NjMzLjE2NTUzOTI4OTQ.*_gid*MTIyOTUyNTY3Mi4xNjU1MzkyODk0&amp;amp;_ga=2.206629523.1229525672.1655392894-726688633.1655392894#:~:text=splunkd_stop_timeout%27.%0A*%20Default%3A%20false-,percRAMForCache,-%3D%20%3Cpositive%20integer%3E%0A*%20The&lt;/A&gt;).&lt;/P&gt;&lt;P&gt;You could adjust this and see if it resolves the issue; it's 15% by default.&lt;/P&gt;&lt;P&gt;The other thing I was wondering is whether any high-memory operations are running against the KV Store when it crashes that might be causing more-than-usual memory usage. Are you using DB Connect on the server, or are any particular modular inputs executing at the time of the issue?&lt;/P&gt;&lt;P&gt;Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.&lt;BR /&gt;Regards&lt;/P&gt;&lt;P&gt;Will&lt;/P&gt;</description>
      <pubDate>Sat, 08 Mar 2025 09:19:53 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741197#M21817</guid>
      <dc:creator>livehybrid</dc:creator>
      <dc:date>2025-03-08T09:19:53Z</dc:date>
    </item>
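A minimal server.conf sketch of the setting mentioned above (the [kvstore] stanza and percRAMForCache key are real; 15 is the documented default, shown here purely as an illustration, not a recommended value):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf
[kvstore]
# Percentage of host RAM the KV Store (mongod) cache may use. Default: 15.
# Restart Splunk after changing; raise cautiously on a shared search head.
percRAMForCache = 15
```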
    <item>
      <title>Re: kvstore issue</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741199#M21819</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/70277"&gt;@uagraw01&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Is mongod using a small fraction of the 64GB (e.g., stuck at 4GB or 8GB) before crashing?&lt;/LI&gt;&lt;LI&gt;Any ulimit restrictions?&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;If capped, increase the ulimit (e.g., edit /etc/security/limits.conf to set splunk - memlock unlimited, then reboot or reapply). MongoDB (used by the KV Store) typically uses up to 50% of system RAM minus 1GB for its working set by default. With 64GB it should have ~31GB available; ensure it is not artificially limited.&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Open $SPLUNK_HOME/var/log/splunk/mongod.log and look for the [ReplBatcher] out of memory error. Note the timestamp and surrounding lines.&lt;/LI&gt;&lt;LI&gt;Cross-check $SPLUNK_HOME/var/log/splunk/splunkd.log for KV Store restart attempts or related errors.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;The [ReplBatcher] component handles replication in the KV Store, and an "out of memory" error here suggests it is choking on the replication workload. With 64GB it shouldn't be a hardware limit, so tune the configuration.&lt;/P&gt;&lt;P&gt;Check server.conf ($SPLUNK_HOME/etc/system/local/server.conf):&lt;/P&gt;&lt;PRE&gt;[kvstore]
oplogSize = &amp;lt;current value&amp;gt;&lt;/PRE&gt;&lt;PRE&gt;oplogSize = &amp;lt;integer&amp;gt;&lt;BR /&gt;* The size of the replication operation log, in megabytes, for environments&lt;BR /&gt;with search head clustering or search head pooling.&lt;BR /&gt;In a standalone environment, 20% of this size is used.&lt;BR /&gt;* After the KV Store has created the oplog for the first time, changing this&lt;BR /&gt;setting does NOT affect the size of the oplog. A full backup and restart&lt;BR /&gt;of the KV Store is required.&lt;BR /&gt;* Do not change this setting without first consulting with Splunk Support.&lt;BR /&gt;* Default: 1000 (1GB)&lt;/PRE&gt;&lt;P&gt;The default is 1000 MB (1GB). Post-RAM upgrade, this might be too small for your data throughput.&lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf?_gl=1*homgau*_ga*NzI2Njg4NjMzLjE2NTUzOTI4OTQ.*_gid*MTIyOTUyNTY3Mi4xNjU1MzkyODk0&amp;amp;_ga=2.206629523.1229525672.1655392894-726688633.1655392894" target="_blank" rel="noopener"&gt;https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf?_gl=1*homgau*_ga*NzI2Njg4NjMzLjE2NTUzOTI4OTQ.*_gid*MTIyOTUyNTY3Mi4xNjU1MzkyODk0&amp;amp;_ga=2.206629523.1229525672.1655392894-726688633.1655392894&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Run &lt;STRONG&gt;./splunk show kvstore-status&lt;/STRONG&gt; to see replication lag or errors.&lt;/P&gt;&lt;P&gt;Restart Splunk (./splunk restart) and monitor whether crashes decrease. A larger oplog gives replication more buffer space, reducing memory pressure.&lt;/P&gt;</description>
      <pubDate>Sat, 08 Mar 2025 09:23:14 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741199#M21819</guid>
      <dc:creator>kiran_panchavat</dc:creator>
      <dc:date>2025-03-08T09:23:14Z</dc:date>
    </item>
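The "~31GB" estimate in the post above comes from upstream MongoDB's stock WiredTiger cache default. A quick sketch of that arithmetic; note that Splunk's bundled mongod scales the cache via percRAMForCache instead, so this is the upstream baseline, not Splunk's exact figure:

```python
def default_wiredtiger_cache_gb(ram_gb: float) -> float:
    """Upstream MongoDB WiredTiger default cache size:
    the larger of 50% of (RAM - 1 GB) or 256 MB."""
    return max(0.5 * (ram_gb - 1.0), 0.25)

print(default_wiredtiger_cache_gb(64))  # 31.5, matching the ~31GB estimate
```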
    <item>
      <title>Re: kvstore issue</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741200#M21820</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/70277"&gt;@uagraw01&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Even with 64GB, an excessively large or poorly managed KV Store dataset could overwhelm mongod.&lt;/P&gt;&lt;P&gt;Check the KV Store data size: du -sh /opt/splunk/var/lib/splunk/kvstore/mongo/&lt;/P&gt;&lt;P&gt;Look in collections.conf across apps ($SPLUNK_HOME/etc/apps/*/local/) to identify what's stored.&lt;/P&gt;&lt;P&gt;Query collection sizes via the Splunk REST API:&lt;/P&gt;&lt;PRE&gt;| rest /servicesNS/-/-/storage/collections/data/&amp;lt;collection_name&amp;gt; | stats count&lt;/PRE&gt;&lt;P&gt;&lt;A href="https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/usetherestapitomanagekv/" target="_blank" rel="noopener"&gt;https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/usetherestapitomanagekv/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Is the KV Store directory getting too large (e.g., 20GB+)? Any single collection with millions of records or huge documents?&lt;/P&gt;&lt;P&gt;If a collection is oversized, archive or purge old data (e.g., ./splunk clean kvstore --collection &amp;lt;name&amp;gt; after backing up). Optimize apps to store less in the KV Store (e.g., reduce field counts or batch updates).&lt;/P&gt;</description>
      <pubDate>Sat, 08 Mar 2025 09:26:57 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741200#M21820</guid>
      <dc:creator>kiran_panchavat</dc:creator>
      <dc:date>2025-03-08T09:26:57Z</dc:date>
    </item>
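The du -sh check above can be approximated portably (e.g., on Windows, where du is unavailable). A minimal Python sketch; the 20 GiB threshold mirrors the example in the post, and the mongo path shown is the default Linux install location:

```python
import os

def dir_size_bytes(path: str) -> int:
    """Rough du-style total of regular file sizes under path (skips symlinks)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

# Flag a KV Store directory that has grown past ~20 GiB:
threshold = 20 * 1024**3
# print(dir_size_bytes("/opt/splunk/var/lib/splunk/kvstore/mongo/") > threshold)
```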
    <item>
      <title>Re: kvstore issue</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741214#M21822</link>
      <description>&lt;P&gt;I see that you are running Splunk on Windows?&lt;/P&gt;&lt;P&gt;I don't have much experience with how Windows internals work in current versions, but are you sure that Splunk can use all of the added memory without additional configuration? E.g. on Linux you must at least disable boot-start and re-enable it again; otherwise systemd doesn't know that Splunk is allowed to use the additional memory.&lt;/P&gt;</description>
      <pubDate>Sat, 08 Mar 2025 15:21:38 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/kvstore-issue/m-p/741214#M21822</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2025-03-08T15:21:38Z</dc:date>
    </item>
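For context on the Linux-side behavior described above: "splunk enable boot-start" generates a systemd unit whose resource limits are sized at generation time, which is why a disable/re-enable cycle is needed after a RAM upgrade. A sketch of the kind of generated setting involved; the exact key names and values vary by Splunk version and are illustrative only:

```ini
# /etc/systemd/system/Splunkd.service (generated by 'splunk enable boot-start
# -systemd-managed 1'); regenerate via disable/enable rather than hand-editing,
# so values like this memory cap are recomputed for the new hardware.
[Service]
MemoryLimit = 64G
```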
  </channel>
</rss>

