Hi @gcusello,

Thank you for confirming the approach. Yes, there is an issue: right after I configured _TCP_ROUTING for the specific log sources (log paths), groupB did not receive any events from those sources even though the log files on the Syslog Server were being updated. The events only arrived in groupB after some time. For example, I have 50 different hosts writing to one log path on the Syslog Server, which is routed to both groupA and groupB, yet groupB received logs from only 5 of the 50 hosts even though all 50 log files were being updated. (A simplified sketch of the routing configuration is at the end of this post.)

Summary of issues:
1. How can I identify the reason for the delay in logs being received in groupB (configured via _TCP_ROUTING)?
2. Is there a chance of logs being missed entirely? How can I identify if this is the case?

Steps taken to identify whether logs are being sent:
I read the metrics.log file on the UF to see whether any data is being sent to groupB. It showed that right after the configuration was applied no data was being sent, but after some time some data started flowing. A sample snippet is pasted below.

tail /opt/splunkforwarder/var/log/splunk/metrics.log | grep groupB
09-01-2022 13:59:32.194 +0500 INFO Metrics - group=queue, name=tcpout_groupB, max_size=7340032, current_size=2322132, largest_size=2322132, smallest_size=2322132
09-01-2022 13:59:32.194 +0500 INFO Metrics - group=tcpout_connections, name=groupB:x.x.x.x:9997:0, sourcePort=8089, destIp=x.x.x.x, destPort=9997, _tcp_Bps=3967.13, _tcp_KBps=3.87, _tcp_avg_thruput=3.87, _tcp_Kprocessed=116, _tcp_eps=0.10, kb=116.22, max_ackq_size=22020096, current_ackq_size=118360

I also found the entries below in health.log, which suggest the issue might be with load (full processing queues on the forwarder). I'm not sure about this, so I'd appreciate it if you could confirm.

tail /opt/splunkforwarder/var/log/splunk/health.log
09-01-2022 13:46:35.112 +0500 INFO HealthChangeReporter - feature="TailReader-0" indicator="data_out_rate" previous_color=green color=yellow due_to_threshold_value=1 measured_value=1 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."
09-01-2022 13:46:40.112 +0500 INFO HealthChangeReporter - feature="BatchReader-0" indicator="data_out_rate" previous_color=yellow color=red due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."
09-01-2022 13:56:34.898 +0500 INFO HealthChangeReporter - feature="TailReader-0" indicator="data_out_rate" previous_color=red color=green measured_value=0
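
For reference, here is a simplified sketch of how the routing is applied on the UF. The monitor path, index, and sourcetype are placeholders, not my actual values; only the output group names reflect the setup described above.

inputs.conf (on the UF):
# Monitored syslog files written by the remote hosts; route a copy to both output groups
[monitor:///var/log/remote/.../*.log]
index = main
sourcetype = syslog
_TCP_ROUTING = groupA,groupB

outputs.conf (on the UF):
# Existing output group
[tcpout:groupA]
server = x.x.x.x:9997

# Additional output group added for the second destination
# (indexer acknowledgment appears to be enabled, given the ackq fields in metrics.log)
[tcpout:groupB]
server = x.x.x.x:9997
useACK = true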
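For issue 1, would it be valid to check metrics.log on the UF for blocked output queues around the time of the delay? Something along these lines:

# Was the tcpout queue for groupB ever blocked (full) on the UF?
grep 'name=tcpout_groupB' /opt/splunkforwarder/var/log/splunk/metrics.log | grep 'blocked=true'

# Is the UF holding an open connection to the groupB indexer(s)?
grep 'group=tcpout_connections, name=groupB' /opt/splunkforwarder/var/log/splunk/metrics.log | tail -5

If I understand correctly, blocked=true on the tcpout_groupB queue lines would line up with the health.log messages about full processing queues, but please correct me if that is not the right place to look.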