And why would you go for 9.0, which is itself out of support? I'd strongly advise against that. Unless you have a very good reason for doing so (and a very, very good support contract, unlike us mere mortals), it's unwise to keep your environment on an unsupported version (which applies to your current 8.2 as well).
The two previous posts are both good answers, but since you stated you are new to Splunk, here is a thorough write-up explaining how to check each of the areas that have been called out, to see if they are causing your error.

The warning you're seeing ("The TCP output processor has paused the data flow") means your forwarder (at MRNOOXX) is unable to send data to the receiving Splunk instance (at 192.XXX.X.XX), most likely because the receiver is not accepting data or the connection is blocked. This can stall data indexing, so let's troubleshoot it step by step.

1. Verify the receiver is running. Ensure the Splunk instance at 192.XXX.X.XX (likely an indexer) is active. On the receiver, run $SPLUNK_HOME/bin/splunk status to confirm splunkd is running. If it's stopped, restart it with $SPLUNK_HOME/bin/splunk restart.

2. Confirm the receiving port is open. The default port for Splunk-to-Splunk forwarding is 9997. On the receiver, check whether port 9997 is listening: netstat -an | grep 9997 (Linux) or netstat -an | findstr 9997 (Windows). Verify the receiver's inputs.conf has a [splunktcp://9997] stanza: run $SPLUNK_HOME/bin/splunk cmd btool inputs list splunktcp --debug and ensure disabled = 0.

3. Test network connectivity. From the forwarder, test connectivity to the receiver's port 9997: nc -vz -w1 192.XXX.X.XX 9997 (Linux) or telnet 192.XXX.X.XX 9997 (Windows). If it fails, check for firewalls or other network issues blocking port 9997 on the receiver or anywhere along the path.

4. Check the forwarder configuration. On the forwarder, verify outputs.conf points to the correct receiver IP and port. Check $SPLUNK_HOME/etc/system/local/outputs.conf or app-specific configs (e.g. $SPLUNK_HOME/etc/apps/<app>/local/outputs.conf). Example:

    [tcpout:default-autolb-group]
    server = 192.XXX.X.XX:9997
    disabled = 0

Ensure no conflicting outputs.conf files exist (run $SPLUNK_HOME/bin/splunk cmd btool outputs list --debug).

5. Inspect receiver health. The error suggests the indexer may be overwhelmed, causing backpressure. Use the Splunk Monitoring Console (on a search head or standalone instance) to check:
   - Monitoring Console > Indexing > Queue Throughput, to see if queues (e.g. parsing, indexing) are full (100% fill ratio).
   - Resource Usage > Machine, for CPU, memory, and disk I/O (IOPS) on the indexer; high usage may indicate a bottleneck.
   - Queue status via a search on the search head:

         | rest /services/server/introspection/queues splunk_server=192.XXX.X.XX
         | table title, current_size, max_size, fill_percentage

   Also ensure the indexer has sufficient disk space (df -h on Linux, dir on Windows) and isn't exceeding license limits (Monitoring Console > Licensing).

6. Check for SSL mismatches. If SSL is enabled (e.g. useSSL = true under your tcpout group in outputs.conf on the forwarder), ensure the receiver accepts SSL on that port (a [splunktcp-ssl:9997] stanza in inputs.conf rather than a plain [splunktcp://9997] one), and verify the certificates under $SPLUNK_HOME/etc/auth/ on both systems match. Check splunkd.log on the receiver for SSL errors: grep -i ssl $SPLUNK_HOME/var/log/splunk/splunkd.log.

7. Review logs for clues. On the forwarder, check $SPLUNK_HOME/var/log/splunk/splunkd.log for errors around the TCP warning (search for "TcpOutputProc" or "blocked") and look for queue or connection errors. On the receiver, search splunkd.log for queue fullness, indexing delays, or connection refusals (e.g. grep -i "192.XXX.X.XX" $SPLUNK_HOME/var/log/splunk/splunkd.log). Share any relevant errors to help narrow it down.

8. Proactive mitigation. If the issue is intermittent (e.g. due to temporary indexer overload), consider buffering more data on the forwarder during blockages. In outputs.conf you can enlarge the in-memory output queue:

    [tcpout]
    maxQueueSize = 100MB

(Note: persistent, on-disk queues are configured per input in inputs.conf via persistentQueueSize, not in outputs.conf; there is no usePersistentQueue setting.) Restart the forwarder after changes.

9. Architecture and version details. Could you share: your Splunk version (run $SPLUNK_HOME/bin/splunk version); your setup (e.g. Universal Forwarder to a single indexer, or Heavy Forwarder to an indexer cluster); and whether the receiver is a standalone indexer, Splunk Cloud, or part of a cluster? Queue behaviour varies by version and architecture, so this will help tailor the solution.

Quick fixes to try: restart both the forwarder and the receiver to clear temporary issues ($SPLUNK_HOME/bin/splunk restart); simplify outputs.conf on the forwarder to point to one indexer (e.g. server = 192.XXX.X.XX:9997) and test; and check indexer disk space and license usage immediately, as these are common culprits.

Next steps: share the output of the network test (nc or telnet), any splunkd.log errors, and your architecture details. If you have access to the Monitoring Console, let us know the queue fill percentages or resource-usage metrics.
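The nc/telnet connectivity test above can be mimicked in a few lines of Python, which is handy on hosts where netcat isn't installed. This is a minimal sketch, not a Splunk tool: the helper just attempts a plain TCP connect, and the demo listener at the end stands in for your indexer's splunktcp port.

```python
import socket

def can_connect(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a plain TCP connect, like `nc -vz -w1 host port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo: listen on an ephemeral local port, then probe it.
# In practice you would call can_connect("<indexer-ip>", 9997) instead.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
demo_host, demo_port = listener.getsockname()
print(can_connect(demo_host, demo_port))  # → True
listener.close()
```

A True result only proves TCP reachability; the receiver can still reject data at the Splunk layer (e.g. an SSL mismatch), so treat this as step one, not a full diagnosis.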
For some reason, Splunk has stopped receiving data. It could be for any of several reasons. Check the logs on the indexer for possible explanations. The Monitoring Console may also offer clues - look for blocked indexer queues.
Hi @yash_eng

This warning indicates your forwarder cannot send data to the receiving Splunk instance at 192.XXX.X.XX because the connection is blocked or the receiver is not accepting data. I'd recommend checking the following:

- Verify the receiver is running - ensure the Splunk instance at 192.XXX.X.XX is active and accessible.
- Confirm the receiving port is open - the default is 9997 for Splunk-to-Splunk forwarding; can you confirm this is listening on the receiving system?
- Check network connectivity - test whether you can reach the destination IP from your forwarder machine. Can you perform a netcat check (e.g. nc -vz -w1 192.x.x.x 9997) to prove you can connect from source to destination?
- Verify the receiver configuration - ensure the receiving Splunk instance has inputs configured to accept data on the expected port. You can use btool: "$SPLUNK_HOME/bin/splunk cmd btool inputs list splunktcp".

Can you give some more information on your architecture / deployment setup? This might help pinpoint the issue. Common causes include: the receiver's Splunk service being down, a firewall blocking the connection, an incorrect receiving port configuration, network connectivity issues, the receiver's disk being full (or other resource constraints), and SSL misconfiguration. If you're able to share additional logs around the other errors, that might also help.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
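If you end up reviewing splunkd.log for these pause warnings, a few lines of script can tally them per destination. This is an illustrative sketch only: the regex is based on the warning message quoted in this thread (field names host_dest and blocked_seconds appear there), and the sample line below is a placeholder, not real log output.

```python
import re

# Matches the warning quoted in this thread, e.g.
# "... host_dest=1.2.3.4 ... has been blocked for blocked_seconds=10."
WARNING_RE = re.compile(
    r"host_dest=(?P<dest>\S+).*?blocked for blocked_seconds=(?P<secs>\d+)"
)

def parse_pause_warning(line: str):
    """Return (destination, blocked_seconds) or None if the line doesn't match."""
    m = WARNING_RE.search(line)
    if not m:
        return None
    return m.group("dest"), int(m.group("secs"))

sample = ("Forwarding to host_dest=192.0.2.10 inside output group "
          "default-autolb-group from host_src=myhost has been blocked "
          "for blocked_seconds=10.")
print(parse_pause_warning(sample))  # → ('192.0.2.10', 10)
```

Feeding every "TcpOutputProc" line from splunkd.log through this and grouping by destination quickly shows whether one receiver, or all of them, is applying backpressure.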
Hi, I am having an issue trying to make a version of the Search app's filtering timeline work in my Dashboard Studio dashboard alongside other visualizations. I have set a token interaction on click to update the global_time.earliest value to the time that is clicked on the chart. However, I am running into an issue where I cannot set the global_time.latest value by clicking again on the timechart. If I set up a second token interaction to capture the latest time, it just gets set to the same value as the earliest, all on the first click. I'm trying to filter down to each bar's span on the timechart, which is 2 hours (| timechart span=2h ...). Like the Search app's version, this timechart is meant to be a filtering tool that only narrows the search times of the other visualizations once it is set. Setting the earliest token works perfectly fine; the problem is only with latest. I just need to know how, or whether, it is possible. Thank you!!
Contact Splunk Support for versions not available on the web site.
Hey mates, I'm new to Splunk, and while ingesting data from my local machine to Splunk this message shows up: "The TCP output processor has paused the data flow. Forwarding to host_dest=192.XXX.X.XX inside output group default-autolb-group from host_src=MRNOOXX has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data." Kindly help me. Thank you
Hi there, We have a standalone Splunk instance v8.2.2.1 deployed on a RHEL server which is EOL; we wish to migrate to a newer OS, Amazon Linux (AL) 2023, rather than performing an in-place upgrade. Instead of using the most recent version of Splunk Enterprise, we wish to take a more conservative approach and choose 9.0.x (we have UFs on older versions 7.x and 8.x). Please let me know where I can download the 9.0.x version of Splunk Enterprise, as it's not here: https://www.splunk.com/en_us/download/previous-releases.html Thanks!
Hi @tscroggins, I have appended the intermediate and root certs to cacert.pem. After this, the error is no longer observed.
As many mentioned in this post, even if I were able to get Splunk to read the log file, it would end up with duplicate logs, or I might lose events if the UF reads too slowly. The solution is to write a custom script that can handle the log's behaviour of overwriting the oldest events once it is "full". This custom script allows Splunk to ingest events and helps handle the duplicate logs. As for loss of events through overwriting, I don't have a bulletproof solution beyond ensuring the events are ingested into Splunk faster than they are written. If necessary, you could also use the script to tail the log and write out a new log file to aid with this. Many thanks for the insights on UF behaviour with this weird log.
Hi _olivier_, Yes, of course: on your server, go to the Monitoring Console, then under the Settings menu select "General Setup", and there you can set the server roles. Kind regards.
Hello @Satyams14, If you plan to stream WAF logs to Event Hubs and wish to use a Splunk-supported add-on, you can also consider the Splunk Add-on for Microsoft Cloud Services (#3110 - https://splunkbase.splunk.com/app/3110). It is a supported add-on and can fetch logs directly from the Event Hub. Thanks, Tejas. --- If the above solution helps, an upvote is appreciated!
Hi @_olivier_, don't attach a new question to an old one, even if it's on the same topic: open a new request, so you'll be more likely to receive an answer. Ciao. Giuseppe
Hi @Satyams14 This app was created by Splunk (but is not a Splunk-supported app) - it was not created by Microsoft. Having said that, I believe it IS the "go-to" app for Azure feeds/onboarding. For a good overview of getting data in (GDI) for Azure, check out https://docs.splunk.com/Documentation/SVA/current/Architectures/AzureGDI (which lists this app).
Hi @Satyams14, as you can read at https://splunkbase.splunk.com/app/3757, this isn't an official app by Splunk or Microsoft:
- it was created by "Splunk Works",
- it isn't supported, even though it has 64,900 downloads, and
- you can find it on GitHub.
Ciao. Giuseppe
Hi @hendriks, this is an old post, but can you remember the steps to add the indexserver role? Thanks.
Hello, Can someone confirm whether this is an official app by Microsoft or a third-party app? I want to integrate Azure WAF logs into my Splunk indexer. Thanks and regards, Satyam
Hi @tanjil As a Splunk Cloud customer you are entitled to a "0-byte" license, which allows you to use non-indexing components without restriction (e.g. auth/KV store/forwarding/accessing previously indexed data, etc.) - check out https://splunk.my.site.com/customer/s/article/0-byte-license-for-Deployment-Server-or-Heavy-Forwarder for more information. Basically, this is a perpetual 0-byte license, so you can perform your usual HF/DS work. Just open a case via https://www.splunk.com/support and they should issue it pretty quickly.
Well... if I remove the table I see the entire event with its real structure, but I want to see only the testlogs.log part. How can I do that? Using | fields does not help.
1. OK. You're searching by full JSON paths, which probably means you're using indexed extractions. This is generally Not Good (tm).
2. You're using the table command at the end. It creates a summary table which does not do any additional formatting. Instead of the table command, you might try

    | fields logs | fields - _raw _time | rename logs as _raw

and use the event list widget instead of a table, but I'm not sure it will look good.
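To see what that rename-to-_raw trick accomplishes, here is the same idea mimicked outside Splunk in plain Python. The event shape below is hypothetical (not the poster's actual data); the point is that the displayed "raw" text becomes just the chosen field's value instead of the whole JSON event.

```python
# Hypothetical event: the full JSON alongside the one field we want shown.
event = {
    "_time": "2024-01-01T00:00:00",
    "_raw": '{"logs": "ERROR something failed", "meta": {"noise": true}}',
    "logs": "ERROR something failed",
}

def promote_to_raw(evt: dict, field: str) -> dict:
    """Mimic `| fields <field> | rename <field> as _raw`: replace the raw
    text with just the chosen field's value."""
    out = dict(evt)
    out["_raw"] = out.pop(field)
    return out

print(promote_to_raw(event, "logs")["_raw"])  # → ERROR something failed
```

In Splunk itself the event list renders _raw, which is why renaming the extracted field onto _raw changes what the widget displays.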