Splunk Enterprise

Heavy forwarder - Could not send data to output queue (parsingQueue)

bsrikanthreddy5
Path Finder

Hi 

I started historical indexing by copying .gz files onto the HF. After that, I am seeing the following in splunkd.log:

01-05-2021 18:43:00.728 -0500 WARN  TailReader - Could not send data to output queue (parsingQueue), retrying...

01-05-2021 18:43:01.039 -0500 WARN  TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group p2s has been blocked for 10 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

01-05-2021 18:43:06.013 -0500 WARN  TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group p2s has been blocked for 10 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

01-05-2021 18:43:11.049 -0500 WARN  TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group p2s has been blocked for 20 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

01-05-2021 18:43:20.032 -0500 WARN  TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group p2s has been blocked for 10 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

==> In metrics.log on the HF

01-05-2021 18:47:08.734 -0500 INFO  Metrics - group=queue, ingest_pipe=1, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7457, largest_size=7703, smallest_size=6737

01-05-2021 18:47:08.735 -0500 INFO  Metrics - group=queue, ingest_pipe=2, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7443, largest_size=7482, smallest_size=6719

01-05-2021 18:47:08.735 -0500 INFO  Metrics - group=queue, ingest_pipe=2, name=typingqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7476, largest_size=7489, smallest_size=6735

01-05-2021 18:47:08.736 -0500 INFO  Metrics - group=queue, ingest_pipe=3, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=367, largest_size=415, smallest_size=0

01-05-2021 18:48:59.729 -0500 INFO  Metrics - group=queue, ingest_pipe=1, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7676, largest_size=7703, smallest_size=6666

01-05-2021 18:48:59.730 -0500 INFO  Metrics - group=queue, ingest_pipe=3, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=357, largest_size=368, smallest_size=0

01-05-2021 18:52:03.732 -0500 INFO  Metrics - group=queue, ingest_pipe=0, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7241, largest_size=7491, smallest_size=6542

01-05-2021 18:52:03.736 -0500 INFO  Metrics - group=queue, ingest_pipe=2, name=typingqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7468, largest_size=7478, smallest_size=6443

01-05-2021 18:52:03.737 -0500 INFO  Metrics - group=queue, ingest_pipe=3, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=360, largest_size=370, smallest_size=0

01-05-2021 18:55:01.732 -0500 INFO  Metrics - group=queue, ingest_pipe=0, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7243, largest_size=7316, smallest_size=6545

01-05-2021 18:55:01.732 -0500 INFO  Metrics - group=queue, ingest_pipe=0, name=parsingqueue, blocked=true, max_size_kb=10240, current_size_kb=10239, current_size=1266, largest_size=1272, smallest_size=1030

01-05-2021 18:55:01.733 -0500 INFO  Metrics - group=queue, ingest_pipe=0, name=typingqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7238, largest_size=7323, smallest_size=6578
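For reference, a search along these lines summarizes how full each queue is from those Metrics lines (a sketch; <hf-hostname> is a placeholder, and it assumes the HF's _internal index is searchable, e.g. from the monitoring console):

index=_internal host=<hf-hostname> source=*metrics.log* group=queue
| eval fill_pct=round(current_size_kb/max_size_kb*100,1)
| timechart span=1m max(fill_pct) by name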

------
I have the below settings on the HF:

limits.conf
[thruput]
maxKBps = 0

server.conf
[general]
parallelIngestionPipelines = 4

[queue]
maxSize = 20MB

[queue=parsingQueue]
maxSize = 10MB

My HF is an on-prem server and the Splunk indexer cluster is on AWS. Can you please let me know a way to speed up my indexing?
 


bsrikanthreddy5
Path Finder

After adding the below on the forwarder, the slow indexing issue was fixed:

outputs.conf

[tcpout:p2s]

maxQueueSize = 7MB
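For reference, the change can be verified and applied on the HF with something like the following (a sketch; btool shows the resolved configuration, and an outputs.conf change needs a restart to take effect):

# show the effective settings for the p2s output group
$SPLUNK_HOME/bin/splunk btool outputs list tcpout:p2s --debug

# restart the heavy forwarder so the new maxQueueSize is picked up
$SPLUNK_HOME/bin/splunk restart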


scelikok
SplunkTrust

Hi @bsrikanthreddy5,

It seems that either your indexers cannot index data fast enough, or there is a bandwidth/latency problem between your server and the AWS indexers.

If the problem is bandwidth, you can enable compression in your outputs.conf. This will make the outputs use less bandwidth.

compressed = <boolean>
* If set to "true", the receiver communicates with the forwarder in
  compressed format.
* If set to "true", you do not need to set the 'compressed' setting to "true"
  in the inputs.conf file on the receiver for compression
  of data to occur.
* This setting applies to non-SSL forwarding only. For SSL forwarding,
  Splunk software uses the 'useClientSSLCompression' setting.
* Default: false
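For example, on the forwarder the stanza could look like this (a sketch reusing the p2s group from your post; per the spec above, it applies to non-SSL forwarding only):

# outputs.conf on the heavy forwarder
[tcpout:p2s]
compressed = true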

If this reply helps you, an upvote and "Accept as Solution" is appreciated.

bsrikanthreddy5
Path Finder

@scelikok 
Thanks for replying. I have checked for bandwidth/latency issues and there are none; in a test I was able to send about 5 GB of data in 60 seconds:

[SUM] 0.00-60.00 sec 5.35 GBytes 766 Mbits/sec receiver
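Roughly, a test like that can be run with iperf3 (a sketch; <indexer-host> is a placeholder and -P 4 is illustrative):

# on an indexer, run the receiver side
iperf3 -s

# on the heavy forwarder, run a 60-second test with parallel streams
iperf3 -c <indexer-host> -t 60 -P 4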
 
As per the monitoring console, I don't see any indexing issues.