<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Queue Constantly being full in Splunk Enterprise</title>
    <link>https://community.splunk.com/t5/Splunk-Enterprise/Queue-Constantly-being-full/m-p/752599#M23033</link>
<description>&lt;PRE&gt;* Default (Splunk Enterprise): 0 (unlimited)
* Default (Splunk Universal Forwarder): 256&lt;/PRE&gt;&lt;P&gt;So for a HF it's not a throughput cap issue (unless you reconfigured your limits). If your outputs are blocking, look downstream for either network problems or clogged indexers.&lt;/P&gt;</description>
    <pubDate>Tue, 02 Sep 2025 11:12:53 GMT</pubDate>
    <dc:creator>PickleRick</dc:creator>
    <dc:date>2025-09-02T11:12:53Z</dc:date>
    <item>
      <title>Queue Constantly being full</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Queue-Constantly-being-full/m-p/752271#M22975</link>
      <description>&lt;P&gt;Hello everyone! I am a new Splunk user and I am noticing that my Splunk HF constantly has a high p90 queue fill percentage.&lt;BR /&gt;&lt;BR /&gt;I ran the following search:&lt;/P&gt;&lt;PRE&gt;index=_internal host=&amp;lt;myhost&amp;gt; blocked=true&lt;/PRE&gt;&lt;P&gt;and I am seeing queues with max_size_kb of 500 - 10240 getting blocked. If I am not wrong, throughput for a HF is capped at 256KBps.&lt;BR /&gt;&lt;BR /&gt;I looked into the server that is running the HF, but it does not seem to be under any high CPU/IOPS load. Is there any way I can troubleshoot this?&lt;/P&gt;
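&lt;P&gt;A variant of that search grouped by queue name can show which queues are blocking; a minimal sketch, assuming the standard name field on splunkd queue metrics (&amp;lt;myhost&amp;gt; is a placeholder):&lt;/P&gt;&lt;PRE&gt;index=_internal source=*metrics.log sourcetype=splunkd group=queue blocked=true host=&amp;lt;myhost&amp;gt;
| stats count by name
| sort - count&lt;/PRE&gt;</description>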
      <pubDate>Wed, 27 Aug 2025 06:23:53 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Queue-Constantly-being-full/m-p/752271#M22975</guid>
      <dc:creator>KJL</dc:creator>
      <dc:date>2025-08-27T06:23:53Z</dc:date>
    </item>
    <item>
      <title>Re: Queue Constantly being full</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Queue-Constantly-being-full/m-p/752277#M22980</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/312655"&gt;@KJL&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You mentioned seeing blocked=true in the _internal logs with max_size_kb ranging from 500 to 10240. That’s a sign that Splunk is throttling because the queues are full.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;If your HF is capped at 256KBps, that can be a bottleneck if you're forwarding a lot of data.&lt;/P&gt;&lt;P&gt;To start with, try increasing that to 2048, or to 0 (no cap), depending on your system and network capacity.&lt;BR /&gt;&lt;BR /&gt;Also verify your connectivity towards the receiving end (intermediate HF/indexer). If there is network latency or slow performance at the receiving end, the queues will back up.&lt;/P&gt;
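&lt;P&gt;The cap lives in limits.conf under the [thruput] stanza. A minimal sketch (the 2048 value is only an example; pick one that fits your capacity, and restart Splunk afterwards):&lt;/P&gt;&lt;PRE&gt;# $SPLUNK_HOME/etc/system/local/limits.conf
[thruput]
# KBps this instance may process; 0 = unlimited
maxKBps = 2048&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Prewin&lt;BR /&gt;If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!&lt;/P&gt;</description>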
      <pubDate>Wed, 27 Aug 2025 07:19:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Queue-Constantly-being-full/m-p/752277#M22980</guid>
      <dc:creator>PrewinThomas</dc:creator>
      <dc:date>2025-08-27T07:19:50Z</dc:date>
    </item>
    <item>
      <title>Re: Queue Constantly being full</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Queue-Constantly-being-full/m-p/752288#M22986</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/312655"&gt;@KJL&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What is the name of the queue that is being blocked?&lt;/P&gt;&lt;P&gt;Do you know the amount of data being sent to this instance? (Is the load spread across other HFs?)&lt;/P&gt;&lt;P&gt;Have you recently installed or updated any apps, applied new config, or increased ingestion?&lt;/P&gt;
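&lt;P&gt;If you are unsure of the volume, the thruput metrics give a rough picture; a minimal sketch, assuming the standard group=thruput events in metrics.log (&amp;lt;your host&amp;gt; is a placeholder):&lt;/P&gt;&lt;PRE&gt;index=_internal source=*metrics.log sourcetype=splunkd group=thruput host=&amp;lt;your host&amp;gt;
| timechart span=1h avg(instantaneous_kbps) AS avg_kbps&lt;/PRE&gt;&lt;P&gt;&lt;span class="lia-unicode-emoji" title=":glowing_star:"&gt;🌟&lt;/span&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Did this answer help you?&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;If so, please consider:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Adding karma to show it was useful&lt;/LI&gt;&lt;LI&gt;Marking it as the solution if it resolved your issue&lt;/LI&gt;&lt;LI&gt;Commenting if you need any clarification&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Your feedback encourages the volunteers in this community to continue contributing.&lt;/P&gt;</description>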
      <pubDate>Wed, 27 Aug 2025 08:37:57 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Queue-Constantly-being-full/m-p/752288#M22986</guid>
      <dc:creator>livehybrid</dc:creator>
      <dc:date>2025-08-27T08:37:57Z</dc:date>
    </item>
    <item>
      <title>Re: Queue Constantly being full</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Queue-Constantly-being-full/m-p/752598#M23032</link>
      <description>&lt;P&gt;Thank you for your replies&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/170906"&gt;@livehybrid&lt;/a&gt;&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/28010"&gt;@PrewinThomas&lt;/a&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;PRE&gt;index=_internal source="*license_usage.log" type=Usage h="&amp;lt;forwarder name&amp;gt;"
| rename _time as Date
| eval Date=strftime(Date,"%b-%y")
| stats sum(b) as license by Date h
| eval licenseGB=round(license/1024/1024/1024,3)
| rename licenseGB as GB&lt;/PRE&gt;&lt;P&gt;&lt;A href="https://community.splunk.com/t5/Installation/How-to-calculate-data-ingestion-from-a-specific-Heavy-Forwarder/m-p/670752" target="_blank" rel="noopener"&gt;How to calculate data ingestion from a specific He... - Splunk Community&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Using this search from the community, it seems that my heavy forwarder with the throttling issue is forwarding about 16-28 GB daily, as opposed to another heavy forwarder forwarding about &amp;gt;2GB daily. Currently, in the limits.conf file, the throughput rate is configured at 0 (hence no limit). Is there any way I can still configure the heavy forwarder to take on such a load of 16GB daily?&lt;BR /&gt;&lt;BR /&gt;Increasing the hardware on the heavy forwarder did not seem to do the trick, so I am not sure if I can reconfigure the heavy forwarder limits. Additionally, is 16GB too high? I am not sure what the benchmark for this is, as I am rather new.&lt;BR /&gt;&lt;BR /&gt;Answering the question on which queues were getting choked up ("1 - Parsing Queue 2 - Aggregation Queue 3 - Typing Queue 4 - Indexing Queue 5 - TcpOut Queue"), all 5 queues are constantly at near 100%.&lt;BR /&gt;Query used:&lt;/P&gt;&lt;PRE&gt;index=_internal source=*metrics.log sourcetype=splunkd group=queue (name=parsingqueue OR name=aggqueue OR name=typingqueue OR name=indexqueue OR name=tcpout* OR name=tcpin_queue) host IN (&amp;lt;your host&amp;gt;)
| replace tcpout* with tcpoutqueue in name
| eval name=case(name=="tcpin_queue","0 - TcpIn Queue",name=="aggqueue","2 - Aggregation Queue",name=="indexqueue","4 - Indexing Queue",name=="parsingqueue","1 - Parsing Queue",name=="typingqueue","3 - Typing Queue",name=="tcpoutqueue","5 - TcpOut Queue")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| timechart span=30m p90(fill_perc) AS fill_perc by name&lt;/PRE&gt;
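&lt;P&gt;For reference: if the downstream side turns out to be healthy and the bottleneck is local parsing capacity, Splunk documents running more than one ingestion pipeline set on a single instance via server.conf. A minimal sketch (the value 2 is just an example, and it roughly doubles pipeline CPU and memory use):&lt;/P&gt;&lt;PRE&gt;# $SPLUNK_HOME/etc/system/local/server.conf
[general]
# Number of parallel ingestion pipeline sets; default is 1
parallelIngestionPipelines = 2&lt;/PRE&gt;</description>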
      <pubDate>Tue, 02 Sep 2025 07:56:51 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Queue-Constantly-being-full/m-p/752598#M23032</guid>
      <dc:creator>KJL</dc:creator>
      <dc:date>2025-09-02T07:56:51Z</dc:date>
    </item>
    <item>
      <title>Re: Queue Constantly being full</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Queue-Constantly-being-full/m-p/752599#M23033</link>
      <description>&lt;PRE&gt;* Default (Splunk Enterprise): 0 (unlimited)
* Default (Splunk Universal Forwarder): 256&lt;/PRE&gt;&lt;P&gt;So for a HF it's not a throughput cap issue (unless you reconfigured your limits). If your outputs are blocking, look downstream for either network problems or clogged indexers.&lt;/P&gt;
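&lt;P&gt;A quick way to check the downstream side is to see what the output processor is logging; a minimal sketch, assuming the standard component and log_level fields on splunkd events (&amp;lt;your HF&amp;gt; is a placeholder):&lt;/P&gt;&lt;PRE&gt;index=_internal sourcetype=splunkd host=&amp;lt;your HF&amp;gt; component=TcpOutputProc (log_level=WARN OR log_level=ERROR)
| stats count by log_level&lt;/PRE&gt;</description>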
      <pubDate>Tue, 02 Sep 2025 11:12:53 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Queue-Constantly-being-full/m-p/752599#M23033</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2025-09-02T11:12:53Z</dc:date>
    </item>
    <item>
      <title>Re: Queue Constantly being full</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Queue-Constantly-being-full/m-p/752621#M23034</link>
      <description>Here is an excellent .conf presentation on how to find out why a queue is full: &lt;A href="https://conf.splunk.com/files/2019/slides/FN1570.pdf" target="_blank"&gt;https://conf.splunk.com/files/2019/slides/FN1570.pdf&lt;/A&gt;</description>
      <pubDate>Wed, 03 Sep 2025 05:38:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Queue-Constantly-being-full/m-p/752621#M23034</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2025-09-03T05:38:47Z</dc:date>
    </item>
  </channel>
</rss>

