Monitoring Splunk

I recently installed the Universal Forwarder on the local machine, but I cannot see the Windows logs arriving in the indexes.

MrBLeu
Loves-to-Learn

01-09-2025 17:01:37.725 -0500 WARN  TcpOutputProc [4940 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=sbdcrib.splunkcloud.com inside output group default-autolb-group from host_src=CRBCITDHCP-01 has been blocked for blocked_seconds=1800. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.


isoutamo
SplunkTrust
It seems that your target is a Splunk Cloud Platform (SCP) environment. Are you using the Universal Forwarder package provided by your SCP stack? Based on those server names, either you have something other than the AWS Victoria Experience in use, or you have the wrong outputs.conf in place.
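For reference, a forwarder's outputs.conf pointing at a Splunk Cloud stack looks roughly like the sketch below. The hostname and group name are illustrative only; the Universal Forwarder credentials app downloaded from your stack normally ships the correct settings (including certificates) for you:

```ini
# $SPLUNK_HOME/etc/system/local/outputs.conf -- illustrative sketch only
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
# Placeholder hostname; your stack's credentials app sets the real one
server = inputs.example.splunkcloud.com:9997
useSSL = true
```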

gcusello
SplunkTrust

Hi @MrBLeu ,

From your description I see that you configured your UF to send logs (using outputs.conf), and I suppose that you configured the Indexer to receive them.

If not, go to [Settings > Forwarding and Receiving > Receiving] on the Indexer and configure the receiving port that the UF will use in outputs.conf.
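For reference, enabling receiving through that menu writes an inputs.conf stanza on the indexer roughly like this (9997 is the conventional default receiving port):

```ini
# On the indexer: $SPLUNK_HOME/etc/system/local/inputs.conf
[splunktcp://9997]
disabled = 0
```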

Then, did your connection ever work, or has it never worked?

If it never worked, check the connection using telnet from the UF to the IDX on the receiving port (by default 9997):

telnet <ip_IDX> 9997
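If telnet is not installed on the Windows host, a short Python check does the same thing; the host argument below is a placeholder for your indexer's address, and 9997 is the default receiving port:

```python
# Minimal TCP reachability check, equivalent to the telnet test above.
import socket

def can_reach(host: str, port: int = 9997, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Replace <ip_IDX> with your indexer's address
    print(can_reach("<ip_IDX>", 9997))
```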

 Ciao.

Giuseppe


kiran_panchavat
SplunkTrust

@MrBLeu Hey, the servers configured in outputs.conf are not accepting data. There could be many reasons:

- From the forwarder host, make sure you can reach the receiving port on the indexer (telnet or similar).
- Review the splunkd logs on the Windows server, grepping for the indexer IP.
- On the indexer, make sure it is listening on 9997: ss -l | grep 9997
- Check the logs on the Universal Forwarder: $SPLUNK_HOME/var/log/splunk/splunkd.log
- Check for network issues between the Universal Forwarder and the Indexer.
- The indexers may be overwhelmed with incoming events or busy serving requests from the search head.
- Check that all servers (indexers) listed in the forwarder's outputs.conf are healthy (CPU and memory utilization).
- Check whether you deployed outputs.conf to the indexers by mistake; indexers generally don't have an outputs.conf.
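When grepping splunkd.log, the TcpOutputProc WARN line quoted in the original post carries the key facts as key=value pairs, so a quick script can pull them out. The regex here is an illustrative sketch, not a Splunk tool:

```python
# Extract key=value fields (host_dest, blocked_seconds, ...) from a
# TcpOutputProc WARN line in splunkd.log.
import re

FIELD_RE = re.compile(r"(\w+)=([^\s,]+)")

def parse_warn(line: str) -> dict:
    """Return the key=value pairs found in one splunkd.log line."""
    # Strip a trailing sentence period from values like "1800."
    return {k: v.rstrip(".") for k, v in FIELD_RE.findall(line)}

line = ("01-09-2025 17:01:37.725 -0500 WARN  TcpOutputProc [4940 parsing] - "
        "The TCP output processor has paused the data flow. Forwarding to "
        "host_dest=sbdcrib.splunkcloud.com inside output group "
        "default-autolb-group from host_src=CRBCITDHCP-01 has been blocked "
        "for blocked_seconds=1800.")
fields = parse_warn(line)
print(fields["host_dest"], fields["blocked_seconds"])
# -> sbdcrib.splunkcloud.com 1800
```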

I hope this helps; if any reply helps you, you can add your upvote/karma points to that reply. Thanks!
