<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How to improve indexing throughput if replication queue is full? in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/747291#M118753</link>
    <description>&lt;P&gt;Hi &lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/118813"&gt;@hrawat&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;Two quick questions:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;How many CPUs do your indexers have?&lt;/LI&gt;&lt;LI&gt;What is the throughput of your indexers' storage? In other words, do you have iowait or delayed-search issues?&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;The problem is probably related to insufficient processing capacity, so the easiest solution is adding some CPUs.&lt;/P&gt;&lt;P&gt;If instead the problem is the second one, the only solution is replacing the storage with one that delivers sufficient IOPS: Splunk requires at least 800 IOPS.&lt;/P&gt;&lt;P&gt;Ciao.&lt;/P&gt;&lt;P&gt;Giuseppe&lt;/P&gt;</description>
    <pubDate>Sat, 31 May 2025 09:42:46 GMT</pubDate>
    <dc:creator>gcusello</dc:creator>
    <dc:date>2025-05-31T09:42:46Z</dc:date>
    <item>
      <title>How to improve indexing throughput if replication queue is full?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/747285#M118752</link>
      <description>&lt;P&gt;Here are configs that &lt;STRONG&gt;on-prem customers&lt;/STRONG&gt; can apply to avoid additional hardware cost.&lt;BR /&gt;In 9.4.0 and above, most of the indexing configs are automated; that's why they were dropped from the 9.4.0 suggested list.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; This assumes the &lt;A href="https://community.splunk.com/t5/Knowledge-Management/Find-the-target-indexer-node-responsible-for-causing-indexqueue/m-p/686921#M10019" target="_self"&gt;replication queue is full for most of the indexers and, as a result, the indexing pipeline&lt;/A&gt; is also full, yet the indexers have plenty of idle CPU and I/O is not an issue.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;On-prem Splunk version 9.4.0 and above&lt;/STRONG&gt;&lt;BR /&gt;indexes.conf&lt;BR /&gt;[default]&lt;BR /&gt;maxMemMB=100&lt;/P&gt;&lt;P&gt;server.conf&lt;BR /&gt;[queue]&lt;BR /&gt;autoAdjustQueue=true&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Splunk version 9.1 to 9.3.x&lt;/STRONG&gt;&lt;BR /&gt;indexes.conf&lt;BR /&gt;[default]&lt;BR /&gt;maxMemMB=100&lt;BR /&gt;maxConcurrentOptimizes=2&lt;BR /&gt;maxRunningProcessGroups=32&lt;BR /&gt;processTrackerServiceInterval=0&lt;BR /&gt;&lt;BR /&gt;server.conf&lt;BR /&gt;[general]&lt;BR /&gt;parallelIngestionPipelines=4&lt;BR /&gt;[queue=indexQueue]&lt;BR /&gt;maxSize=500MB&lt;BR /&gt;[queue=parsingQueue]&lt;BR /&gt;maxSize=500MB&lt;BR /&gt;[queue=httpInputQ]&lt;BR /&gt;maxSize=500MB&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;maxMemMB:&lt;/STRONG&gt; minimizes the creation of tsidx files as much as possible, at the cost of higher memory usage by the main splunkd process.&lt;BR /&gt;&lt;STRONG&gt;maxConcurrentOptimizes:&lt;/STRONG&gt; on the indexing side this is internally 1 no matter what the setting is. On the replication target side, however, launching more splunk-optimize processes means pausing the receiver until each splunk-optimize process is launched, so reduce it to keep the receiver doing indexing work rather than launching splunk-optimize processes. With 9.4.0, both the source (index processor) and the target (replication-in thread) internally auto-adjust it to 1.&lt;BR /&gt;&lt;STRONG&gt;maxRunningProcessGroups:&lt;/STRONG&gt; allows more splunk-optimize processes to run concurrently. With 9.4.0, it's automatic.&lt;BR /&gt;&lt;STRONG&gt;processTrackerServiceInterval:&lt;/STRONG&gt; runs splunk-optimize processes as soon as possible. With 9.4.0, you don't have to change it.&lt;BR /&gt;&lt;STRONG&gt;parallelIngestionPipelines:&lt;/STRONG&gt; provides more receivers on the target side. With 9.4.0, you can enable auto-scaling of pipelines.&lt;BR /&gt;&lt;STRONG&gt;maxSize:&lt;/STRONG&gt; don't let huge batch ingestion by a HEC client block the queues and return 503s. With autoAdjustQueue=true in 9.4.0, these are no longer fixed-size queues. (A consolidated, commented version of the 9.1 to 9.3.x settings appears after this post.)&lt;/P&gt;</description>
      <pubDate>Sat, 20 Sep 2025 10:47:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/747285#M118752</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2025-09-20T10:47:39Z</dc:date>
    </item>
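    <!--
      A consolidated sketch of the 9.1 to 9.3.x settings from the post above, as they would
      appear on disk, with the poster's rationale attached as comments. The values are the ones
      suggested in the post, not tuned for any specific environment.

      # indexes.conf
      [default]
      maxMemMB=100                      # minimize tsidx file creation; costs main splunkd more memory
      maxConcurrentOptimizes=2          # keep the replication receiver indexing instead of launching splunk-optimize
      maxRunningProcessGroups=32        # allow more concurrent splunk-optimize processes
      processTrackerServiceInterval=0   # launch splunk-optimize processes as soon as possible

      # server.conf
      [general]
      parallelIngestionPipelines=4      # more receivers on the replication target side
      [queue=indexQueue]
      maxSize=500MB                     # absorb huge HEC batches instead of blocking and returning 503
      [queue=parsingQueue]
      maxSize=500MB
      [queue=httpInputQ]
      maxSize=500MB
    -->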
    <item>
      <title>Re: How to improve indexing throughput if replication queue is full?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/747291#M118753</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/118813"&gt;@hrawat&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;Two quick questions:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;How many CPUs do your indexers have?&lt;/LI&gt;&lt;LI&gt;What is the throughput of your indexers' storage? In other words, do you have iowait or delayed-search issues?&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;The problem is probably related to insufficient processing capacity, so the easiest solution is adding some CPUs.&lt;/P&gt;&lt;P&gt;If instead the problem is the second one, the only solution is replacing the storage with one that delivers sufficient IOPS: Splunk requires at least 800 IOPS.&lt;/P&gt;&lt;P&gt;Ciao.&lt;/P&gt;&lt;P&gt;Giuseppe&lt;/P&gt;</description>
      <pubDate>Sat, 31 May 2025 09:42:46 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/747291#M118753</guid>
      <dc:creator>gcusello</dc:creator>
      <dc:date>2025-05-31T09:42:46Z</dc:date>
    </item>
    <item>
      <title>Re: How to improve indexing throughput if replication queue is full?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/747295#M118755</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/118813"&gt;@hrawat&lt;/a&gt;&lt;/P&gt;&lt;P&gt;Further insights on the suggestion shared by &lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/161352"&gt;@gcusello&lt;/a&gt;:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;It is recommended that indexers be provisioned with 12 to 48 CPU cores, each running at 2 GHz or higher, to ensure optimal performance.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;The disk subsystem should support at least 800 IOPS, ideally using SSDs for hot and warm buckets, to handle the indexing workload efficiently.&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/Capacity/Referencehardware" target="_blank" rel="noopener"&gt;https://docs.splunk.com/Documentation/Splunk/latest/Capacity/Referencehardware&lt;/A&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;For environments still using traditional hard drives, prioritize models with higher rotational speeds and lower average latency and seek times to maximize IOPS.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;For further insights, refer to this guide on &lt;A href="http://www.cmdln.org/2010/04/22/analyzing-io-performance-in-linux" target="_new" rel="noopener"&gt;Analyzing I/O Performance in Linux&lt;/A&gt;.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Note that insufficient disk I/O is one of the most common performance bottlenecks in Splunk deployments. It is crucial to thoroughly review disk subsystem requirements during hardware planning.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;If the indexer's CPU resources exceed those of the standard reference architecture, it may be beneficial to tune &lt;EM&gt;parallelization settings&lt;/EM&gt; to further enhance performance for specific workloads (see the sketch after this post).&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Sat, 31 May 2025 10:21:24 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/747295#M118755</guid>
      <dc:creator>kiran_panchavat</dc:creator>
      <dc:date>2025-05-31T10:21:24Z</dc:date>
    </item>
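    <!--
      A minimal sketch of the parallelization tuning mentioned in the post above, assuming an
      indexer with more CPU cores than the reference hardware and no I/O bottleneck. The value 2
      is illustrative only; the original post in this thread suggests 4 for its scenario.

      # server.conf
      [general]
      parallelIngestionPipelines=2   # one extra ingestion pipeline set; raise only with spare CPU
    -->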
    <item>
      <title>Re: How to improve indexing throughput if replication queue is full?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/747296#M118756</link>
      <description>&lt;P&gt;Added a note to the original post that the indexers have no I/O issues and plenty of idle CPU.&lt;/P&gt;</description>
      <pubDate>Sat, 31 May 2025 11:01:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/747296#M118756</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2025-05-31T11:01:59Z</dc:date>
    </item>
    <item>
      <title>Re: How to improve indexing throughput if replication queue is full?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/753345#M119611</link>
      <description>&lt;P&gt;A quick clarification on the 9.4.0 settings for server.conf. You have mentioned:&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;P&gt;&lt;STRONG&gt;On-prem Splunk version 9.4.0 and above&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Indexes.conf&lt;BR /&gt;[default]&lt;BR /&gt;maxMemMB=100&lt;/P&gt;&lt;P&gt;Server.conf&lt;BR /&gt;[general]&lt;BR /&gt;autoAdjustQueue=true&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;The spec file for &lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf" target="_self"&gt;server.conf&lt;/A&gt; appears to show autoAdjustQueue under the [queue] stanza; should it be under [queue] rather than [general]?&lt;/P&gt;&lt;P&gt;With the indexes.conf setting, does that number multiply out based on the number of indexes configured?&lt;BR /&gt;Should I be more cautious with 1000 indexes configured vs 100 indexes configured?&lt;BR /&gt;I'm unsure when the "max memory" usage might occur from that setting...&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Sat, 20 Sep 2025 08:42:38 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/753345#M119611</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2025-09-20T08:42:38Z</dc:date>
    </item>
    <item>
      <title>Re: How to improve indexing throughput if replication queue is full?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/753350#M119615</link>
      <description>&lt;P&gt;Thanks for pointing out the mistake in the stanza. Yes, it has to be [queue].&lt;/P&gt;</description>
      <pubDate>Sat, 20 Sep 2025 10:49:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/753350#M119615</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2025-09-20T10:49:37Z</dc:date>
    </item>
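    <!--
      A minimal server.conf sketch (9.4.0 and above) with the stanza placement confirmed in the
      exchange above: autoAdjustQueue belongs under [queue], not [general].

      # server.conf
      [queue]
      autoAdjustQueue=true   # queues resize dynamically instead of staying fixed-size
    -->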
    <item>
      <title>Re: How to improve indexing throughput if replication queue is full?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/753351#M119616</link>
      <description>&lt;P&gt;Yes, `maxMemMB=100` will be applied to each index. You can set this config on high-volume indexes instead of globally, as shown in the sketch after this post.&lt;/P&gt;</description>
      <pubDate>Sat, 20 Sep 2025 10:52:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-improve-indexing-thruput-if-replication-queue-is-full/m-p/753351#M119616</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2025-09-20T10:52:50Z</dc:date>
    </item>
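    <!--
      A minimal indexes.conf sketch of the per-index approach described in the reply above,
      instead of setting maxMemMB globally under [default]. The index name "firewall" is
      hypothetical; substitute your own high-volume indexes.

      # indexes.conf
      [firewall]
      maxMemMB=100   # applies only to this high-volume index
    -->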
  </channel>
</rss>

