Getting Data In

Warnings

yash_eng
New Member

Hey mates, I'm new to Splunk, and while ingesting data from my local machine into Splunk this message shows up.

"The TCP output processor has paused the data flow. Forwarding to host_dest=192.XXX.X.XX inside output group default-auto lb-group from host_src=MRNOOXX has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data."

Kindly help me.

Thank you 


LAME-Creations
Path Finder

The two previous posts are both good answers, but since you said you're new to Splunk, here's a more thorough write-up explaining how to check each of the areas they called out and see which one is causing your error.

The warning you’re seeing (The TCP output processor has paused the data flow) means your forwarder (at MRNOOXX) is unable to send data to the receiving Splunk instance (at 192.XXX.X.XX), likely because the receiver is not accepting data or the connection is blocked. This can stall data indexing, so let’s troubleshoot it step-by-step.
Here’s a comprehensive checklist to resolve the issue (consolidated forwarder-side and receiver-side check scripts follow further down):
  1. Verify Receiver is Running:
    • Ensure the Splunk instance at 192.XXX.X.XX (likely an indexer) is active. On the receiver, run $SPLUNK_HOME/bin/splunk status to confirm splunkd is running.
    • If it’s stopped, restart it with $SPLUNK_HOME/bin/splunk restart.
  2. Confirm Receiving Port is Open:
    • The default port for Splunk-to-Splunk forwarding is 9997. On the receiver, check if port 9997 is listening: netstat -an | grep 9997 (Linux) or netstat -an | findstr 9997 (Windows).
    • Verify the receiver’s inputs.conf has a [splunktcp://9997] stanza. Run $SPLUNK_HOME/bin/splunk cmd btool inputs list splunktcp --debug to check. Ensure disabled = 0.
  3. Test Network Connectivity:
    • From the forwarder, test connectivity to the receiver’s port 9997: nc -vz -w1 192.XXX.X.XX 9997 (Linux) or telnet 192.XXX.X.XX 9997 (Windows). If it fails, check for firewalls or network issues.
    • Confirm no firewalls are blocking port 9997 on the receiver or network path.
  4. Check Forwarder Configuration:
    • On the forwarder, verify outputs.conf points to the correct receiver IP and port. Check $SPLUNK_HOME/etc/system/local/outputs.conf or app-specific configs (e.g., $SPLUNK_HOME/etc/apps/<app>/local/outputs.conf). Example:
      [tcpout:default-autolb-group]
      server = 192.XXX.X.XX:9997
      disabled = 0
    • Ensure no conflicting outputs.conf files exist (run $SPLUNK_HOME/bin/splunk cmd btool outputs list --debug).
  5. Inspect Receiver Health:
    • The error suggests the indexer may be overwhelmed, causing backpressure. Use the Splunk Monitoring Console (on a Search Head or standalone instance) to check:
      • Go to Monitoring Console > Indexing > Queue Throughput to see if queues (e.g., parsing, indexing) are full (100% fill ratio).
      • Check Resource Usage > Machine for CPU, memory, and disk I/O (IOPS) on the indexer. High usage may indicate bottlenecks.
    • Run this search on the Search Head to check queue status: | rest /services/server/introspection/queues splunk_server=192.XXX.X.XX | table title, current_size, max_size, fill_percentage.
    • Ensure the indexer has sufficient disk space (df -h on Linux or dir on Windows) and isn’t exceeding license limits (check Monitoring Console > Licensing).
  6. Check for SSL Mismatches:
    • If SSL is enabled (e.g., useSSL = true in outputs.conf on the forwarder), ensure the receiver’s inputs.conf has ssl = true. Verify certificates match in $SPLUNK_HOME/etc/auth/ on both systems.
    • Check splunkd.log on the receiver for SSL errors: grep -i ssl $SPLUNK_HOME/var/log/splunk/splunkd.log.
  7. Review Logs for Clues:
    • On the forwarder, check $SPLUNK_HOME/var/log/splunk/splunkd.log for errors around the TCP warning (search for “TcpOutputProc” or “blocked”). Look for queue or connection errors.
    • On the receiver, search splunkd.log for errors about queue fullness, indexing delays, or connection refusals (e.g., grep -i "192.XXX.X.XX" $SPLUNK_HOME/var/log/splunk/splunkd.log).
    • Share any relevant errors to help narrow it down.
  8. Proactive Mitigation:
    • If the issue is intermittent (e.g., due to temporary indexer overload), consider increasing the output queue size on the forwarder so it can buffer more data while the receiver is blocked. In outputs.conf:
      [tcpout]
      maxQueueSize = 100MB
    • Note that persistent (on-disk) queues are configured per input in inputs.conf via persistentQueueSize, not in outputs.conf.
    • Restart the forwarder after changes.
  9. Architecture and Version Details:
    • Could you share:
      • Your Splunk version (e.g., 9.3.1)? Run $SPLUNK_HOME/bin/splunk version.
      • Your setup (e.g., Universal Forwarder to single indexer, or Heavy Forwarder to indexer cluster)?
      • Is the receiver a standalone indexer, Splunk Cloud, or part of a cluster?
    • This will help tailor the solution, as queue behaviors vary by version and architecture.
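To pull the forwarder-side checks from steps 2, 3, 4, 6, and 7 together, here is a minimal shell sketch you could run on the forwarder. Linux, the /opt/splunkforwarder install path, and port 9997 are assumptions on my part - adjust them, and replace the redacted 192.XXX.X.XX placeholder with your real indexer IP:

  #!/bin/bash
  # Forwarder-side checks - a rough sketch, not an official Splunk tool
  SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunkforwarder}   # assumed install path
  RECEIVER="192.XXX.X.XX"                            # replace with your indexer's real IP
  PORT=9997                                          # default Splunk-to-Splunk port

  # 1. Is splunkd running locally on the forwarder?
  "$SPLUNK_HOME/bin/splunk" status

  # 2. Can we reach the receiving port at all?
  nc -vz -w1 "$RECEIVER" "$PORT" || echo "Cannot reach $RECEIVER:$PORT - check firewall/routing"

  # 3. What is the effective (merged) outputs.conf?
  "$SPLUNK_HOME/bin/splunk" cmd btool outputs list --debug

  # 4. If SSL is enabled, check whether the receiver presents a certificate
  # openssl s_client -connect "$RECEIVER:$PORT" </dev/null

  # 5. Recent TcpOutputProc / blocked messages in splunkd.log
  grep -Ei "TcpOutputProc|blocked" "$SPLUNK_HOME/var/log/splunk/splunkd.log" | tail -20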
Quick Fixes to Try:
  • Restart both the forwarder and receiver to clear temporary issues: $SPLUNK_HOME/bin/splunk restart.
  • Simplify outputs.conf on the forwarder to point to one indexer (e.g., server = 192.XXX.X.XX:9997) and test.
  • Check indexer disk space and license usage immediately, as these are common culprits.
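And a matching receiver-side sketch for the indexer at 192.XXX.X.XX, covering steps 1, 2, 5, and 7 above. Again, Linux and the /opt/splunk path are assumptions - adjust the path and port for your deployment:

  #!/bin/bash
  # Receiver-side checks - a rough sketch, run on the indexer itself
  SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}   # assumed install path
  PORT=9997                                 # default receiving port

  # 1. Is splunkd up?
  "$SPLUNK_HOME/bin/splunk" status

  # 2. Is the receiving port actually listening?
  netstat -an | grep ":$PORT" | grep -i listen

  # 3. Is a splunktcp input configured (and not disabled)?
  "$SPLUNK_HOME/bin/splunk" cmd btool inputs list splunktcp --debug

  # 4. Is there enough free disk space for the indexes?
  df -h

  # 5. Any queue, refusal, or blocked errors in splunkd.log?
  grep -iE "queue|refused|blocked" "$SPLUNK_HOME/var/log/splunk/splunkd.log" | tail -20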
Next Steps:
  • Share the output of the network test (nc or telnet), any splunkd.log errors, and your architecture details.
  • If you have access to the Monitoring Console, let us know the queue fill percentages or resource usage metrics.
 

richgalloway
SplunkTrust

For some reason, Splunk has stopped receiving data.  It could be any of several things.  Check the logs on the indexer for possible explanations.  Also, the Monitoring Console may offer clues - look for blocked indexer queues.
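One way to look for blocked queues without the Monitoring Console is a search over the indexer's own metrics.log. A minimal sketch via the CLI, assuming shell access to the indexer (the /opt/splunk path is an assumption, and the CLI will prompt for credentials if needed):

  # Count blocked=true queue events from metrics.log over the last hour, by queue name
  SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}   # assumed install path
  "$SPLUNK_HOME/bin/splunk" search \
    'index=_internal source=*metrics.log* group=queue blocked=true earliest=-60m | stats count by name'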

---
If this reply helps you, Karma would be appreciated.

livehybrid
Super Champion

Hi @yash_eng 

This warning indicates your forwarder cannot send data to the receiving Splunk instance at 192.XXX.X.XX because the connection is blocked or the receiver is not accepting data.

I'd recommend checking the following:

  1. Verify the receiver is running - Ensure the Splunk instance at 192.XXX.X.XX is active and accessible
  2. Confirm receiving port is open - Default is 9997 for Splunk-to-Splunk forwarding - can you confirm this is listening on the receiving system?
  3. Check network connectivity - Test if you can reach the destination IP from your forwarder machine - Can you perform a netcat check (e.g. nc -vz -w1 192.x.x.x 9997) to prove you can connect from source to destination?
  4. Verify receiver configuration - Ensure the receiving Splunk instance has inputs configured to accept data on the expected port. You can use btool with "$SPLUNK_HOME/bin/splunk cmd btool inputs list splunktcp"
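If that btool check shows no splunktcp stanza at all, the receiver likely isn't configured to listen yet. A minimal sketch of enabling it on the receiving instance, assuming the default 9997 port:

  # On the receiver: create a splunktcp listener on 9997 (writes to inputs.conf)
  $SPLUNK_HOME/bin/splunk enable listen 9997
  # Restart if the port does not start listening right away
  $SPLUNK_HOME/bin/splunk restart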

Can you give some more information on your architecture / deployment setup? This might help pinpoint the possible issue. Some common causes include: the receiver's Splunk service being down, a firewall blocking the connection, an incorrect receiving port configuration, network connectivity issues, the receiver running out of disk space or other resources, or an SSL misconfiguration. If you're able to share additional logs around the other errors, that might also help.

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
