I'm setting up forwarders on 4 servers collecting log files. 3 are running fine, sending the correct log files to the correct index. But one is sending only /opt/splunkforwarder/var/log/splunk/splunkd.log to the _internal index and nothing to the index I created. I have no idea what's going on and could use a little help.
All 4 servers are set up with the same basic information:
inputs.conf file
[default]
host = servername-location
# Requested log file
[monitor:///var/log/file.log]
disabled=false
index=myindex
_tzhint=America/Denver
outputs.conf file
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = nn.nn.nn.nn:9997, nn.nn.nn.n1:9997
These are the only 2 files that I changed from the default install on all of the forwarders.
I installed splunkforwarder-5.0.5-179365-Linux-x86_64 on these servers.
Any and all help would be greatly appreciated.
Sorry about the fire drill, but here's what happened. Everything is working fine, and always was. The problem is that the vendor named all the appliances the same: different IPs, but the same host name. The data was being sent to Splunk, but everything ended up under the one host name. I have another question open on how to change the reporting host name in the Splunk conf files. Thanks for everyone's help on this.
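In case it helps anyone who hits the same duplicate-hostname situation, here's a minimal sketch of the kind of per-forwarder override I'm looking at; the value unique-appliance-01 is only a placeholder and the path assumes the default /opt/splunkforwarder install:
# /opt/splunkforwarder/etc/system/local/inputs.conf
# Give each appliance its own host value so events stop piling up
# under a single host name (placeholder value below)
[default]
host = unique-appliance-01
A forwarder restart afterwards (/opt/splunkforwarder/bin/splunk restart) makes newly indexed events show up under the new host name.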
With an input stanza this simple, and the confirmation of "Adding watch on path", we can be pretty sure that this isn't a problem of not attempting to monitor your desired file.
To narrow this down further, there are some things you can do.
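For instance, it may be worth comparing the configuration the problem forwarder actually loaded against one of the working hosts. A sketch with btool, assuming the default /opt/splunkforwarder install path:
# Show the effective monitor stanzas and which file each setting comes from
/opt/splunkforwarder/bin/splunk btool inputs list monitor --debug
# Show the effective forwarding targets
/opt/splunkforwarder/bin/splunk btool outputs list --debug
If the index=myindex line is missing or overridden on the bad host, it should show up here.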
Have you checked splunkd.log on your forwarders? It might indicate where the issue lies; by default it is at /opt/splunkforwarder/var/log/splunk/splunkd.log. It could be as simple as a permission issue.
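For example, something along these lines (a sketch, assuming the default log location and the file name from your inputs.conf) will surface any complaints about the monitored file or the tailing processor:
# Any messages that mention the file we expect to be monitored?
grep file.log /opt/splunkforwarder/var/log/splunk/splunkd.log
# Any warnings or errors from the tailing side of the forwarder?
grep -E "WARN|ERROR" /opt/splunkforwarder/var/log/splunk/splunkd.log | grep -i tail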
Yes, and there is nothing there that points to the issue. I see the following:
02-24-2015 13:37:12.705 -0700 INFO TailingProcessor - Adding watch on path: /var/log/file.log.
02-24-2015 13:37:12.836 -0700 INFO TcpOutputProc - Connected to idx=nn.nn.nn.nn:9997
02-24-2015 13:37:43.612 -0700 INFO TcpOutputProc - Connected to idx=nn.nn.nn.n1:9997
The only thing that looks questionable is that when I stop/start the process it complains about the "_tzhint" attribute I have in the inputs.conf file, but I get that on all the other servers also.
Try bumping the forwarder thruput limit up to the max and see.
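If I understand the suggestion, that's the maxKBps setting under [thruput] in limits.conf; a minimal sketch of lifting the forwarder's default cap (0 means no limit), assuming the default install path:
# /opt/splunkforwarder/etc/system/local/limits.conf
# Raise the forwarder's output thruput cap; 0 removes the limit entirely
[thruput]
maxKBps = 0
A restart of the forwarder is needed for the change to take effect.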