Getting Data In

Windows Universal Forwarder unable to read log files

Explorer

Hi all,

In our environment, we have several Windows UFs managed by a deployment server. We haven't applied any changes to the forwarders, yet some of them have stopped sending part of their data to the indexers. The data we are not receiving is the data that comes from files (internal logs, monitored files, etc.). TCP/UDP inputs are working fine.

We have checked the permissions, and the splunk user has full control over the Splunk folders and the log file folders. We also reset the fishbucket to rule out any issue with it. No errors appear in the splunkd log on the UF.

Does anybody know how to troubleshoot this?

Thanks in advance.


Legend

Hi @pbalbasdtt,
When you say "Actually, logs which don't come from files are being indexed", are you speaking of logs from other hosts (possibly syslogs) or of other logs from that host?

If you don't receive internal logs, it means that there's an error in the indexer addressing; please check the outputs.conf file and verify that the addressing is correct.

Did you enable receiving on the indexers, on the same port configured on the forwarders?

Ciao.
Giuseppe


Explorer

Hi Giuseppe,

I mean other logs from that host.

The outputs.conf we are using is the same as on the other UFs, and we are not having any issues with them.

Best regards.


Legend

Hi @pbalbasdtt,
which logs are you receiving from that host?
Please check again with index=_internal host=your_hostname — it isn't possible that you receive logs from that forwarder but don't receive internal logs!

could you share your inputs.conf?

Ciao.
Giuseppe


Explorer

Hi,

We have not been receiving internal logs for 4 days now. We were receiving Palo Alto logs via this input:

[monitor://F:\logs\paloalto_logs\paloalto*.txt]
index = paloalto_index
sourcetype = pan_log
disabled = 0
crcSalt = <SOURCE>

As this started failing, we created a TCP input on the UF, which is actually working:

[tcp://10702]
connection_host = dns
index = paloalto_index
sourcetype = pan_log

It looks like, for some reason, the UF is not able to read files even though it has permission on the folders.
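
A minimal way to double-check the effective NTFS permissions from a command prompt (using the path from the stanza above; icacls ships with Windows, and the exact service account name will vary per environment):

```shell
REM Dump the ACL of the monitored folder; the account the SplunkForwarder
REM service runs as needs at least read (R) and list (RX on the folder).
icacls "F:\logs\paloalto_logs"
```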

Best.


Legend

Hi @pbalbasdtt,
For 4 days you haven't been receiving any logs from the Universal Forwarder, is that correct? And you're only receiving syslogs (TCP input) from that server into another one, is that correct?

By which host is the TCP input received? Not by the same Universal Forwarder!

Ciao.
Giuseppe


Legend

Hi @pbalbasdtt,
first of all, check whether you're receiving internal logs:

index=_internal host=your_host

If not, the problem is in the route, and you can check it using telnet from the forwarder:

telnet ip_indexer 9997
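
If the telnet client isn't installed on the Windows UF, PowerShell's Test-NetConnection does the same check (ip_indexer is the same placeholder as above):

```shell
# PowerShell equivalent of the telnet test:
# TcpTestSucceeded should come back True if the route and port are open
Test-NetConnection -ComputerName ip_indexer -Port 9997
```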

If yes, first check whether the correct TAs are deployed to the forwarder from the deployment server, and especially whether there is any overlap:
you can do this by looking at the apps and/or by using the command:

splunk cmd btool inputs list --debug 

Then check if all the inputs are enabled.
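
As a sketch of what that looks like on a Windows UF (the install path below is an assumption; adjust to your environment), piping btool's output through findstr makes it easy to spot a monitor stanza that is disabled or defined in two apps:

```shell
cd /d "C:\Program Files\SplunkUniversalForwarder\bin"
REM List the effective monitor stanzas and, with --debug,
REM which app/file each setting comes from
splunk.exe btool inputs list monitor --debug | findstr /i "monitor disabled"
```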

Ciao.
Giuseppe


Explorer

Hi Giuseppe,

Thanks for your response! We have checked all of that previously.

  • We are not receiving any _internal logs from that host.
  • We can telnet from the UF to port 9997 on the indexer. Actually, logs which don't come from files are being indexed.
  • The TAs are the correct ones and all the necessary inputs are enabled.

Best regards.


Motivator

Hello @pbalbasdtt,

try directly on the UF:

./splunk _internal call /services/admin/inputstatus/TailingProcessor:FileStatus

Do you see anything in the output related to the monitored files?

Additionally, check $SPLUNK_HOME/var/log/splunk/splunkd.log for any WARN and ERROR messages.
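
For example, from PowerShell on the UF (default install path assumed), filtering on the components typically involved in file monitoring keeps the noise down:

```shell
# Last 20 WARN/ERROR lines mentioning the file-tailing components
Select-String -Path "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log" `
  -Pattern "(WARN|ERROR).+(TailReader|TailingProcessor|WatchedFile)" |
  Select-Object -Last 20
```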

If the internal logs are being sent from the UF to the indexing layer and the REST API is accessible, then you can check both from the SH.
