<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Slow indexer/receiver detection capability in Knowledge Management</title>
    <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/741150#M10384</link>
    <description>&lt;P&gt;Starting with 9.4, the logging includes the output group:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;03-06-2025 11:38:16.306 +1300 WARN AutoLoadBalancedConnectionStrategy [199656 TcpOutEloop] - Current dest host connection=10.231.218.59:9997, connid=1, oneTimeClient=0, _events.size()=53605, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Mar 6 11:35:48 2025 for group=myindexers is using 31391960 bytes. Total tcpout queue size is 31457280. Warningcount=0&lt;/LI-CODE&gt;</description>
    <pubDate>Fri, 07 Mar 2025 16:56:26 GMT</pubDate>
    <dc:creator>hrawat</dc:creator>
    <dc:date>2025-03-07T16:56:26Z</dc:date>
    <item>
      <title>Slow indexer/receiver detection capability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/683768#M9963</link>
      <description>&lt;P&gt;From 9.1.3/9.2.1 onwards, the slow indexer/receiver detection capability is fully functional (SPL-248188, SPL-248140).&lt;BR /&gt;&lt;A href="https://docs.splunk.com/Documentation/Splunk/9.2.1/ReleaseNotes/Fixedissues" target="_blank" rel="noopener"&gt;https://docs.splunk.com/Documentation/Splunk/9.2.1/ReleaseNotes/Fixedissues&lt;/A&gt;&lt;BR /&gt;You can enable it on the forwarding side in outputs.conf:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;PRE&gt;maxSendQSize = &amp;lt;integer&amp;gt;
* The size of the tcpout client send buffer, in bytes.
  If the tcpout client (indexer/receiver connection) send buffer is full,
  a new indexer is randomly selected from the list of indexers provided
  in the server setting of the target group stanza.
* This setting allows the forwarder to switch to a new indexer/receiver if
  the current indexer/receiver is slow.
* A non-zero value means that max send buffer size is set.
* 0 means no limit on max send buffer size.
* Default: 0&lt;/PRE&gt;&lt;P&gt;Additionally, 9.1.3/9.2.1 and above will correctly log the target IP address causing the tcpout blocking.&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;WARN AutoLoadBalancedConnectionStrategy [xxxx TcpOutEloop] - Current dest host connection nn.nn.nn.nnn:9997, oneTimeClient=0, _events.size()=20, _refCount=2, _waitingAckQ.size()=4, _supportsACK=1, _lastHBRecvTime=Thu Jan 20 11:07:43 2024 is using 20214400 bytes. Total tcpout queue size is 26214400. Warningcount=20&lt;/LI-CODE&gt;&lt;P&gt;Note: This config works correctly starting with 9.1.3/9.2.1. &lt;FONT color="#FF0000"&gt;Do not use it with 9.2.0/9.1.0/9.1.1/9.1.2; those versions have an incorrect calculation (&lt;A href="https://community.splunk.com/t5/Getting-Data-In/Current-dest-host-connection-is-using-18446603427033668018-bytes/m-p/678842#M113450" target="_blank" rel="noopener"&gt;https://community.splunk.com/t5/Getting-Data-In/Current-dest-host-connection-is-using-18446603427033668018-bytes/m-p/678842#M113450&lt;/A&gt;).&lt;/FONT&gt;&lt;/P&gt;
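&lt;P&gt;For versions with the fix (9.1.3/9.2.1 and above), a minimal outputs.conf sketch of enabling the setting (the group name and server addresses are hypothetical, and 2MB is an illustrative value, not a recommendation):&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;[tcpout]
defaultGroup = myindexers

[tcpout:myindexers]
server = 10.0.0.1:9997, 10.0.0.2:9997
# switch away from the current indexer/receiver once ~2MB of
# unsent data accumulates on its connection (illustrative value)
maxSendQSize = 2000000&lt;/LI-CODE&gt;</description>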
      <pubDate>Wed, 10 Apr 2024 01:51:09 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/683768#M9963</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2024-04-10T01:51:09Z</dc:date>
    </item>
    <item>
      <title>Re: Slow indexer/receiver detection capability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/686945#M10020</link>
      <description>&lt;P&gt;This setting definitely looks useful for slow receivers, but how would I determine when to use it, and an appropriate value?&lt;/P&gt;&lt;P&gt;For example, you have mentioned:&lt;/P&gt;&lt;PRE&gt;WARN AutoLoadBalancedConnectionStrategy [xxxx TcpOutEloop] - Current dest host connection nn.nn.nn.nnn:9997, oneTimeClient=0, _events.size()=20, _refCount=2, _waitingAckQ.size()=4, _supportsACK=1, _lastHBRecvTime=Thu Jan 20 11:07:43 2024 is using 20214400 bytes. Total tcpout queue size is 26214400. Warningcount=20&lt;/PRE&gt;&lt;P&gt;I note that you have Warningcount=20, while a quick check in my environment shows Warningcount=1. If I'm just seeing the occasional warning, I'm assuming tweaking this setting would be of minimal benefit?&lt;/P&gt;&lt;P&gt;Furthermore, how would I appropriately set the bytes value?&lt;/P&gt;&lt;P&gt;I'm assuming it's per-pipeline, and the variables involved might relate to volume per second per pipeline; any other variables?&lt;/P&gt;&lt;P&gt;Any example of how this would be tuned, and when?&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Thu, 09 May 2024 01:35:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/686945#M10020</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2024-05-09T01:35:39Z</dc:date>
    </item>
    <item>
      <title>Re: Slow indexer/receiver detection capability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/686946#M10021</link>
      <description>&lt;P&gt;If the warning count is 1, then it's not a big issue.&lt;BR /&gt;What it indicates is that, out of the maxQueueSize bytes of tcpout queue, one connection has occupied a large share; thus the TcpOutputProcessor will experience pauses. maxQueueSize is per pipeline and is shared by all target connections on that pipeline.&lt;BR /&gt;You may want to increase maxQueueSize (double the size).&lt;/P&gt;
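&lt;P&gt;A minimal sketch of that change (the group name is hypothetical, and a 25MB starting queue is assumed for illustration):&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;[tcpout:myindexers]
# double the previous per-pipeline queue size (assumed 25MB here)
maxQueueSize = 50MB&lt;/LI-CODE&gt;</description>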
      <pubDate>Thu, 09 May 2024 02:13:38 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/686946#M10021</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2024-05-09T02:13:38Z</dc:date>
    </item>
    <item>
      <title>Re: Slow indexer/receiver detection capability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/686954#M10022</link>
      <description>&lt;P&gt;Thanks, I'll review the maxQueueSize.&lt;/P&gt;&lt;P&gt;If the warning count were higher, such as the 20 in your example, what would be the best way to determine a good value (in bytes) for maxSendQSize to avoid the slow indexer scenario?&lt;/P&gt;</description>
      <pubDate>Thu, 09 May 2024 04:45:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/686954#M10022</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2024-05-09T04:45:39Z</dc:date>
    </item>
    <item>
      <title>Re: Slow indexer/receiver detection capability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/687015#M10024</link>
      <description>&lt;P&gt;If Warningcount is high, then I would check whether the target receiver/indexer is applying back-pressure. Check whether queues are blocked on the target. If the queues are not blocked, check on the target using netstat:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;netstat -an|grep &amp;lt;splunktcp port&amp;gt;&lt;/LI-CODE&gt;&lt;P&gt;and see whether the Recv-Q is high. If the receiver queues are not blocked but netstat shows the Recv-Q is full, then the receiver needs additional pipelines.&lt;BR /&gt;&lt;BR /&gt;If Warningcount is high because there was a rolling restart at the indexing tier, then set maxSendQSize to roughly 5% of maxQueueSize.&lt;BR /&gt;Example:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;maxSendQSize=2000000
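# ~4% of the 50MB (52428800-byte) queue below, in line with the "roughly 5%" guidance above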
maxQueueSize=50MB&lt;/LI-CODE&gt;&lt;P&gt;If using autoLBVolume, then ensure:&lt;BR /&gt;&lt;BR /&gt;maxQueueSize &amp;gt; 5 x autoLBVolume&lt;BR /&gt;autoLBVolume &amp;gt; maxSendQSize&lt;BR /&gt;Example:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;maxQueueSize=50MB
autoLBVolume=5000000
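# sanity check: 50MB (52428800 bytes) is more than 5 x autoLBVolume (25000000 bytes),
# and autoLBVolume (5000000) is more than maxSendQSize (2000000)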
maxSendQSize=2000000&lt;/LI-CODE&gt;&lt;P&gt;maxSendQSize is the total outstanding raw size of events/chunks in a connection queue that still needs to be sent to the TCP Send-Q; this generally builds up when the TCP Send-Q is already full.&lt;BR /&gt;&lt;BR /&gt;autoLBVolume is the minimum total raw size of events/chunks to be sent to a connection.&lt;/P&gt;</description>
      <pubDate>Thu, 09 May 2024 11:04:12 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/687015#M10024</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2024-05-09T11:04:12Z</dc:date>
    </item>
    <item>
      <title>Re: Slow indexer/receiver detection capability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/687117#M10026</link>
      <description>&lt;P&gt;Thank you very much for the detailed reply; that gives me enough to action now.&lt;/P&gt;&lt;P&gt;I appreciate the contributions to the community in this way.&lt;/P&gt;</description>
      <pubDate>Fri, 10 May 2024 00:46:42 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/687117#M10026</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2024-05-10T00:46:42Z</dc:date>
    </item>
    <item>
      <title>Re: Slow indexer/receiver detection capability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/687803#M10036</link>
      <description>&lt;P&gt;One minor request: if this logging is ever enhanced, can it please include the output group name.&lt;/P&gt;&lt;PRE&gt;05-16-2024 03:18:05.992 +0000 WARN AutoLoadBalancedConnectionStrategy [85268 TcpOutEloop] - Current dest host connection &amp;lt;ip address&amp;gt;:9997, oneTimeClient=0, _events.size()=56156, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Thu May 16 03:18:03 2024 is using 31477941 bytes. Total tcpout queue size is 31457280. Warningcount=1001&lt;/PRE&gt;&lt;P&gt;This is helpful; however, the destination IP happens to be istio (the K8s software load balancer), and I have 3 indexer clusters with different DNS names on the same IP/port (the incoming DNS name determines which backend gets used). So my only way to "guess" the outputs.conf stanza involved is to set a unique queue size for each one, so I can determine which indexer cluster / output stanza is having the high warning count.&lt;/P&gt;&lt;P&gt;If it had tcpout=&amp;lt;stanzaname&amp;gt; or similar in the warning, that would be very helpful for me.&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Fri, 17 May 2024 00:17:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/687803#M10036</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2024-05-17T00:17:20Z</dc:date>
    </item>
    <item>
      <title>Re: Slow indexer/receiver detection capability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/688151#M10058</link>
      <description>&lt;P&gt;That's great feedback. We will add the output group.&lt;/P&gt;</description>
      <pubDate>Tue, 21 May 2024 12:16:18 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/688151#M10058</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2024-05-21T12:16:18Z</dc:date>
    </item>
    <item>
      <title>Re: Slow indexer/receiver detection capability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/688155#M10059</link>
      <description>&lt;P&gt;Is this an actual WARN log message you found?&lt;BR /&gt;&lt;BR /&gt;If yes, what was the reason for the back-pressure?&lt;/P&gt;</description>
      <pubDate>Tue, 21 May 2024 12:25:24 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/688155#M10059</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2024-05-21T12:25:24Z</dc:date>
    </item>
    <item>
      <title>Re: Slow indexer/receiver detection capability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/688239#M10064</link>
      <description>&lt;P&gt;Yes, that's the actual WARN message. The worst I've seen is a warning count of 9001 with a 150MB queue; the forwarder itself forwards a peak of over 100MB/s.&lt;/P&gt;&lt;P&gt;05-21-2024 18:48:47.099 +1000 WARN AutoLoadBalancedConnectionStrategy [264180 TcpOutEloop] - Current dest host connection 10.x.x.x:9997, oneTimeClient=0, _events.size()=131822, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Tue May 21 18:48:36 2024 is using 157278423 bytes. Total tcpout queue size is 157286400. Warningcount=9001&lt;/P&gt;&lt;P&gt;That went from Warningcount=1 at 18:48:38.538 to Warningcount=1001 at 18:48:38.771&lt;BR /&gt;Then 18:48:38.90 has 2001&lt;BR /&gt;18:48:39.033 has 3001&lt;BR /&gt;18:48:39.134 has 4001&lt;BR /&gt;18:48:39.200 has 5001&lt;BR /&gt;18:48:39.336 has 6001&lt;BR /&gt;18:48:39.553 has 7001&lt;BR /&gt;18:48:46.500 has 8001, and finally:&lt;BR /&gt;18:48:47.099 has 9001&lt;/P&gt;&lt;P&gt;I suspect the back-pressure is caused by an istio pod failure in K8s. I haven't tracked down the cause, but I've seen some cases where the istio ingress gateway pods in K8s are in a "not ready" state; however, I suspect they were alive enough to take on traffic.&lt;/P&gt;&lt;P&gt;During this time period I will sometimes see higher-than-normal Warningcount= entries *and*, often around the same time, my website availability checks start failing for DNS names that are pointed at istio pods.&lt;/P&gt;&lt;P&gt;My current suspicion is that it's not just Splunk-level back-pressure, but I'll keep investigating (at the time, the indexing tier showed the most-utilised TCP input queues at 67%, using a max() measurement on their metrics.log).&lt;/P&gt;&lt;P&gt;The vast majority of my Warningcount= entries on this forwarder show a value of 1.&lt;/P&gt;&lt;P&gt;The configuration for this instance is:&lt;/P&gt;&lt;PRE&gt;maxQueueSize = 150MB&lt;BR /&gt;autoLBVolume = 10485760&lt;BR /&gt;autoLBFrequency = 1&lt;BR /&gt;&lt;BR /&gt;dnsResolutionInterval = 259200&lt;BR /&gt;# connectionsPerTarget = 2 * approx number of indexers&lt;BR /&gt;connectionsPerTarget = 96&lt;BR /&gt;# As per NLB tuning&lt;BR /&gt;heartbeatFrequency = 10&lt;BR /&gt;connectionTTL = 75&lt;BR /&gt;connectionTimeout = 10&lt;BR /&gt;&lt;BR /&gt;maxSendQSize = 400000&lt;BR /&gt;&lt;BR /&gt;# default 30 seconds; we can retry more quickly with istio, as we should move to a new instance if it goes down&lt;BR /&gt;backoffOnFailure = 5&lt;/PRE&gt;&lt;P&gt;The maxSendQSize was tuned for a much lower-volume forwarder and I forgot to update it for this instance, so I will increase that. This instance also appears to have grown from 30-50MB/s to closer to 100MB/s, so I'll increase the autoLBVolume setting as well.&lt;/P&gt;
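&lt;P&gt;A hypothetical sketch of the revision described above (values are illustrative only, chosen to satisfy the maxQueueSize &amp;gt; 5 x autoLBVolume and autoLBVolume &amp;gt; maxSendQSize rules from earlier in the thread):&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;maxQueueSize = 150MB
# raised for a ~100MB/s forwarder; 157286400 bytes is more than 5 x autoLBVolume
autoLBVolume = 20971520
# raised from 400000; still well below autoLBVolume (illustrative value)
maxSendQSize = 2000000&lt;/LI-CODE&gt;</description>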
      <pubDate>Wed, 22 May 2024 05:45:18 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/688239#M10064</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2024-05-22T05:45:18Z</dc:date>
    </item>
    <item>
      <title>Re: Slow indexer/receiver detection capability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/741150#M10384</link>
      <description>&lt;P&gt;Starting with 9.4, the logging includes the output group:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;03-06-2025 11:38:16.306 +1300 WARN AutoLoadBalancedConnectionStrategy [199656 TcpOutEloop] - Current dest host connection=10.231.218.59:9997, connid=1, oneTimeClient=0, _events.size()=53605, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Mar 6 11:35:48 2025 for group=myindexers is using 31391960 bytes. Total tcpout queue size is 31457280. Warningcount=0&lt;/LI-CODE&gt;</description>
      <pubDate>Fri, 07 Mar 2025 16:56:26 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/741150#M10384</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2025-03-07T16:56:26Z</dc:date>
    </item>
    <item>
      <title>Re: Slow indexer/receiver detection capability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/743998#M10393</link>
      <description>&lt;P class="lia-align-left"&gt;The AutoLoadBalancedConnectionStrategy message contains several fields&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="python"&gt;oneTimeClient=0
_events.size()=20
_refCount=2
_waitingAckQ.size()=4
Warningcount=20&lt;/LI-CODE&gt;&lt;P class="lia-align-left"&gt;What do these fields mean, and at what values should we be concerned?&lt;/P&gt;</description>
      <pubDate>Fri, 11 Apr 2025 01:08:08 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/743998#M10393</guid>
      <dc:creator>jstratton</dc:creator>
      <dc:date>2025-04-11T01:08:08Z</dc:date>
    </item>
    <item>
      <title>Re: Slow indexer/receiver detection capability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/744015#M10395</link>
      <description>&lt;LI-CODE lang="python"&gt;oneTimeClient=0 (regular connection to destination)
_events.size()=20 (outstanding events/chunks to be sent)
_refCount=2 (2 means useAck is enabled)
_waitingAckQ.size()=4 ( outstanding events/chunks still not acknowledged by target)
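# Also seen in the warnings above but not covered in the list; meanings
# inferred from the field names, so treat these as assumptions:
#   _supportsACK=1 (connection uses the indexer acknowledgment protocol)
#   _lastHBRecvTime (time the last heartbeat was received from the target)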
Warningcount=20 (how many times this log has been logged for this connection)&lt;/LI-CODE&gt;</description>
      <pubDate>Fri, 11 Apr 2025 13:23:19 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/744015#M10395</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2025-04-11T13:23:19Z</dc:date>
    </item>
  </channel>
</rss>

