Hi Everyone,
I've installed and configured a Splunk Heavy Forwarder (HF) on an EC2 instance in AWS, and configured two Splunk indexers on EC2 instances in AWS. I created a test.log file on my HF with sample log events to forward to my indexers. I'm trying to route logs/events containing the keyword "success" to Indexer_1 and logs/events containing the keyword "error" to Indexer_2.
However, the logs/events from the HF are not visible on either indexer. For context, I have also installed and configured a UF on another EC2 instance in AWS that sends data to Indexer_1, and I can see that data forwarded successfully with no issues.
Below are the .conf files and setup on my HF and two indexers.
HF:
inputs.conf:
[monitor:///opt/splunk/var/log/splunk/test.log]
disabled = false
sourcetype = test
outputs.conf:
[tcpout:errorGroup]
server = indexr_1_ip_addr:9997
[tcpout:successGroup]
server = indexer_2_ip_addr:9997
props.conf:
[test]
TRANSFORMS-routing=errorRouting,successRouting
transforms.conf:
[errorRouting]
REGEX=error
DEST_KEY=_TCP_ROUTING
FORMAT=errorGroup
[successRouting]
REGEX=success
DEST_KEY=_TCP_ROUTING
FORMAT=successGroup
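For comparison, the routing examples in Splunk's documentation typically also define a top-level [tcpout] stanza with a defaultGroup so that events matching neither transform still have a destination. A minimal sketch, reusing the group names and placeholder addresses above (which group to treat as the default is only an assumption here):
[tcpout]
# group that receives events matching neither transform
defaultGroup = successGroup

[tcpout:errorGroup]
server = indexr_1_ip_addr:9997

[tcpout:successGroup]
server = indexer_2_ip_addr:9997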
Indexer_1 & Indexer_2:
Configured port 9997 as the receiving port on both indexers.
Note: I tried the steps below to troubleshoot and identify the issue, but no luck so far:
1. Checked through the CLI whether the forwarder has any inactive forwards or receivers:
Active forwards:
indexr_1_ip_addr:9997
indexr_2_ip_addr:9997
Configured but inactive forwards:
None
2. Checked splunkd.log on the forwarder for errors related to data forwarding: no errors.
3. Checked the Security Group rules (Inbound and Outbound) in AWS console: Port 9997 is enabled for both Inbound and Outbound traffic.
4. All EC2 Instances running Splunk are on the same Security Group in AWS.
5. Tried to ping both indexers from the HF, but got no response (see the TCP check below).
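Since security groups only pass ICMP if it is explicitly allowed, a failed ping does not by itself mean port 9997 is unreachable; a TCP-level check from the HF is more telling. A sketch, assuming nc/netcat is available and using the same placeholder addresses:
# from the HF, test the TCP path to each indexer's receiving port
nc -vz indexr_1_ip_addr 9997
nc -vz indexer_2_ip_addr 9997

# fallback without netcat, using bash's /dev/tcp
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/indexr_1_ip_addr/9997' && echo "9997 reachable" || echo "9997 unreachable"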
Can someone please help me with this? I'm stuck and unable to figure out the root cause. Also, I'm using the same security group for both the HF and the UF, with the same inbound and outbound rules, but I can only see the logs sent from the UF, not the logs/events from my HF. I'm not sure what I'm missing to resolve this and see the HF's logs/events on my indexers.
Thank you!
@kiran_panchavat Thank you for those steps and suggestions. I tried them, and below are the details:
netstat -tulnp | grep 9997 OR ss -tulnp | grep 9997
Ran the above command on my HF:
1. Running the line exactly as written passed the literal "OR ss -tulnp" on to grep, so it first said:
grep: invalid option -- 't'
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
2. When I then ran just netstat -tulnp | grep 9997, I did not see any output.
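Worth noting: with the -l flag (the "l" in -tulnp), netstat/ss only list listening sockets, so empty output on a forwarder that only sends data outbound is expected. A more informative sketch, run on each side (same placeholder hosts as above):
# on the HF: established outbound connections to port 9997 (no -l flag)
ss -tnp | grep 9997

# on each indexer: confirm splunkd is listening on the receiving port
ss -tlnp | grep 9997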
tail -f /opt/splunk/var/log/splunk/metrics.log | grep -i "blocked=true"
--> The empty netstat/ss result was taken on the HF, which only sends data, so by itself it proves little. The important question is whether anything is listening on port 9997 on the indexers; if the same check there also returns nothing, the indexers are not configured to receive incoming data, or something is preventing splunkd from binding to that port.
--> Your outputs.conf looks correct:
[tcpout:errorGroup]
server = indexr_1_ip_addr:9997
[tcpout:successGroup]
server = indexer_2_ip_addr:9997
--> The file permissions for /opt/splunk/var/log/splunk/test.log seem correct. However, ensure that the Splunk process has the necessary permissions to read the file. You can check the Splunk user running the HF and adjust permissions accordingly.
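A quick way to compare the file's ownership with the user splunkd runs as (paths as given above):
# owner and permissions of the monitored file
ls -l /opt/splunk/var/log/splunk/test.log

# which user the Splunk processes run as on the HF
ps -ef | grep '[s]plunkd'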
tail -n 100 /opt/splunk/var/log/splunk/splunkd.log | grep -i "ERROR"
tail -n 100 /opt/splunk/var/log/splunk/splunkd.log | grep -i "WARN"
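If those generic greps come back clean, a narrower filter on the forwarding component can help; this just assumes the default log location:
# messages from the HF's TCP output processor (connection attempts and failures)
grep -i "TcpOutputProc" /opt/splunk/var/log/splunk/splunkd.log | tail -n 20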
On each indexer, enable the receiving input:
cd /opt/splunk/etc/system/local
vi inputs.conf
[splunktcp://9997]
disabled = 0
Then restart Splunk.
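If editing inputs.conf by hand is inconvenient, the same receiving input can usually be enabled from the CLI on each indexer (assuming a default /opt/splunk install):
# enable the splunktcp receiving input on 9997, then restart
/opt/splunk/bin/splunk enable listen 9997
/opt/splunk/bin/splunk restart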
Hi @kiran_panchavat, actually I accidentally terminated my EC2 instances in AWS and had to re-launch them and re-install Splunk from scratch on all of them. Once I set them up and configured the event routing from my Heavy Forwarder to the different Splunk receivers, I could see a specific group of logs/events being sent to one of my Splunk receivers, which is expected. I still could not see the data on my other Splunk receiver, but I guess I just need to double-check my configuration, since it is working fine with one of the servers.
Also, thank you for your time in guiding me through those steps to troubleshoot the issue.
Thanks for the update. It’s great that one of your Splunk receivers is now getting the logs as expected. Since the other receiver still isn’t showing data, I’d recommend a quick review of its configuration to see if there’s a missing or misconfigured detail. If the steps were helpful and you resolve the issue, feel free to accept the solution. Thanks again for your update!