Deployment Architecture

Why is 1 of 4 forwarders not sending the data configured in inputs.conf, but only splunkd.log to the _internal index?

Builder

I'm setting up forwarders on 4 servers to collect log files. 3 are running fine, sending the correct log files to the correct index, but one is sending only /opt/splunkforwarder/var/log/splunk/splunkd.log to the _internal index and nothing to the index I created. I have no idea what's going on and could use some help.

All 4 servers are set up with the same basic configuration:

inputs.conf file

[default]
host = servername-location

# Requested log file
[monitor:///var/log/file.log]
disabled=false
index=myindex
_tzhint=America/Denver

outputs.conf file

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = nn.nn.nn.nn:9997, nn.nn.nn.n1:9997

These are the only 2 files I changed from the default install on all of the forwarders.

I installed splunkforwarder-5.0.5-179365-Linux-x86_64 on these servers.
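One way to confirm that the odd forwarder actually picked up the same settings is to dump the merged configuration with btool (a diagnostic sketch; the paths assume a default /opt/splunkforwarder install):

```
# Show the effective, merged monitor stanzas and which file each setting came from
/opt/splunkforwarder/bin/splunk btool inputs list monitor --debug

# Show where the forwarder thinks it should send data
/opt/splunkforwarder/bin/splunk btool outputs list tcpout --debug
```

If the output on the misbehaving forwarder differs from the other three, that difference is the place to start.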

Any and all help would be greatly appreciated.

1 Solution

Builder

Sorry about the fire drill, but here's what happened. Everything is working fine, and always was. The problem is that the vendor decided to name all the appliances the same: different IPs, but the same hostname. The data was being sent to Splunk, but everything ended up under the one hostname. I have another question open on how to change the reporting hostname information in the Splunk conf files. Thanks for everyone's help on this.


Splunk Employee

With an input stanza this simple, and the "Adding watch on path" confirmation in the log, we can be fairly sure the forwarder is at least attempting to monitor your desired file.

To narrow this down further, there are some things you can do.

  • You can look for any other messages regarding your /var/log/file.log in the _internal index for this host. When tailing opens a file for reading, it logs a message stating the byte offset, so if this never appears, it was probably never opened.
  • You can look at the metrics data in metrics.log (directly or via search) to see if per_source_thruput ever mentions this file. You could have, for example, a problem where we forward the data to the indexer but it doesn't know about "myindex".
  • You can review the tailing status endpoint to see what information exists about the file: http://blogs.splunk.com/2011/01/02/did-i-miss-christmas-2/
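The first two checks above can be run as searches against the _internal index (a sketch; "problem-forwarder" is a placeholder for the affected host, and the file path is the one from this thread):

```
index=_internal host=problem-forwarder source=*splunkd.log "/var/log/file.log"

index=_internal host=problem-forwarder source=*metrics.log group=per_source_thruput series="/var/log/file.log"
```

If the second search returns events, the forwarder is reading and sending the file, and the problem is more likely on the indexer side (for example, "myindex" not existing there).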

Builder

Have you checked splunkd.log on your forwarders? It might indicate where the issue lies. By default it's at /opt/splunkforwarder/var/log/splunk/splunkd.log. It could be as simple as a permissions issue.
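A quick way to rule out the permissions angle (a sketch; run it as the account that owns the splunkd process, and adjust the path if your monitored file differs):

```shell
# Report whether the current user can read a monitored file.
check_readable() {
    if [ -r "$1" ]; then
        echo "readable"
    else
        echo "not readable"
    fi
}

# Path taken from the inputs.conf stanza in this thread.
check_readable /var/log/file.log
```

If it prints "not readable", fix the file's ownership or mode (or add an ACL) so the splunkd user can read it, then restart the forwarder.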


Builder

Yes, and there is nothing unusual there. I see the following:

02-24-2015 13:37:12.705 -0700 INFO TailingProcessor - Adding watch on path: /var/log/file.log.
02-24-2015 13:37:12.836 -0700 INFO TcpOutputProc - Connected to idx=nn.nn.nn.nn:9997
02-24-2015 13:37:43.612 -0700 INFO TcpOutputProc - Connected to idx=nn.nn.nn.n1:9997

The only questionable thing I see is that when I stop/start the process it complains about the "_tzhint" attribute in the inputs.conf file. But I get that on all the other servers too.


Motivator

Try bumping the forwarder's thruput limit up to max and see if the data comes through.
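For context, a universal forwarder ships with a fairly low per-pipeline throughput cap, which can make a busy file look like it's not being forwarded at all. Raising it is done in limits.conf (a sketch; 0 removes the cap entirely):

```
# $SPLUNK_HOME/etc/system/local/limits.conf
[thruput]
# 0 = unlimited; restart splunkd after changing this
maxKBps = 0
```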
