<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Resource issues with indexing and queues in Splunk Enterprise</title>
    <link>https://community.splunk.com/t5/Splunk-Enterprise/Resource-issues-with-indexing-and-queues/m-p/540795#M5011</link>
    <description>&lt;P&gt;Hi all, I am facing an issue where logs from the UFs appear to be delayed or are not received at all.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="d_lim_1-1613984409525.png" style="width: 400px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/13001i349B8190842FB37D/image-size/medium?v=v2&amp;amp;px=400" role="button" title="d_lim_1-1613984409525.png" alt="d_lim_1-1613984409525.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;This is the current queue state; we have increased the indexer's parallelIngestionPipelines setting from 1 to 2, and now to 3.&lt;/P&gt;&lt;P&gt;On the indexer, the indexing queue is always full, which is blocking the data flowing downstream from the 2 HFs.&lt;/P&gt;&lt;P&gt;There are about 16 intermediate forwarders sending to HF001, while HF002 mainly makes API calls to pull in data.&lt;/P&gt;&lt;P&gt;IOPS on the indexer is around 1600, CPU usage is 50%, and memory usage is 31%.&lt;/P&gt;&lt;P&gt;Any recommendations on what we can do to improve this, e.g. an additional indexer? Thanks.&lt;/P&gt;</description>
    <pubDate>Mon, 22 Feb 2021 09:02:05 GMT</pubDate>
    <dc:creator>d_lim</dc:creator>
    <dc:date>2021-02-22T09:02:05Z</dc:date>
    <item>
      <title>Resource issues with indexing and queues</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Resource-issues-with-indexing-and-queues/m-p/540795#M5011</link>
      <description>&lt;P&gt;Hi all, I am facing an issue where logs from the UFs appear to be delayed or are not received at all.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="d_lim_1-1613984409525.png" style="width: 400px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/13001i349B8190842FB37D/image-size/medium?v=v2&amp;amp;px=400" role="button" title="d_lim_1-1613984409525.png" alt="d_lim_1-1613984409525.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;This is the current queue state; we have increased the indexer's parallelIngestionPipelines setting from 1 to 2, and now to 3.&lt;/P&gt;&lt;P&gt;On the indexer, the indexing queue is always full, which is blocking the data flowing downstream from the 2 HFs.&lt;/P&gt;&lt;P&gt;There are about 16 intermediate forwarders sending to HF001, while HF002 mainly makes API calls to pull in data.&lt;/P&gt;&lt;P&gt;IOPS on the indexer is around 1600, CPU usage is 50%, and memory usage is 31%.&lt;/P&gt;&lt;P&gt;Any recommendations on what we can do to improve this, e.g. an additional indexer? Thanks.&lt;/P&gt;</description>
      <pubDate>Mon, 22 Feb 2021 09:02:05 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Resource-issues-with-indexing-and-queues/m-p/540795#M5011</guid>
      <dc:creator>d_lim</dc:creator>
      <dc:date>2021-02-22T09:02:05Z</dc:date>
    </item>
    <item>
      <title>Re: Resource issues with indexing and queues</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Resource-issues-with-indexing-and-queues/m-p/540839#M5013</link>
      <description>&lt;P&gt;A full indexing queue means the act of writing to disk is taking too long. Adding pipelines just makes that worse by creating more threads that try to write to disk. Something in the storage system is causing delays, and correcting that problem should alleviate the queue problem.&lt;/P&gt;</description>
      <pubDate>Mon, 22 Feb 2021 14:02:06 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Resource-issues-with-indexing-and-queues/m-p/540839#M5013</guid>
      <dc:creator>richgalloway</dc:creator>
      <dc:date>2021-02-22T14:02:06Z</dc:date>
    </item>
  </channel>
</rss>