Splunk universal forwarder
Hi,
We are forwarding data from two forwarder servers to the indexer. From one forwarder the data arrives on the indexer and is also visible on the search head, but we are not receiving any data on the indexer from the other forwarder.
The second forwarder's logs even show it is connected to the indexer, yet its events are not being forwarded.
We can see the following entries in splunkd.log:
12-07-2020 12:38:07.059 +0100 INFO TcpOutputProc - Connected to idx=xx.xx.xx.xx:9998, pset=0, reuse=0.
12-07-2020 12:38:07.091 +0100 INFO WatchedFile - Will begin reading at offset=20396720 for file='E:\Apps\SplunkUniversalForwarder\var\log\splunk\metrics.log'.
12-07-2020 12:38:07.106 +0100 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='E:\Apps\SplunkUniversalForwarder\var\log\splunk\license_usage.log'.
12-07-2020 12:38:07.153 +0100 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='E:\Apps\SplunkUniversalForwarder\var\log\splunk\remote_searches.log'.
The following entry appears in health.log:
TCPOutAutoLB-0 - More than 70% of forwarding destinations have failed
outputs.conf file:
[tcpout]
defaultGroup = lb
[tcpout:lb]
server = xxx.xxx.com:9998
autoLB = true
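Given the "forwarding destinations have failed" warning, it may be worth first confirming the forwarder can reach the indexer's receiving port at all. A minimal sketch, assuming the forwarder is a Windows host (the E:\ paths above suggest it is), using the host and port from the outputs.conf above:

# Test TCP connectivity from the forwarder to the indexer's receiving port
Test-NetConnection -ComputerName xxx.xxx.com -Port 9998

If TcpTestSucceeded comes back False, the problem is network or firewall rather than Splunk configuration.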
Thanks @96nick.
Actually, we had another outputs.conf in E:\APPS\SplunkUniversalForwarder\etc\system\local, in addition to the one in E:\APPS\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local. Because settings in etc\system\local take precedence, the forwarder was still sending its data to the old indexer server.
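For anyone hitting the same thing: btool shows which file each effective setting comes from, which makes a stray outputs.conf in system\local easy to spot. A sketch, assuming the install path from the post above:

# List the effective tcpout settings and the file each one comes from
& 'E:\APPS\SplunkUniversalForwarder\bin\splunk.exe' btool outputs list --debug

Since etc\system\local sits at the top of Splunk's configuration precedence, anything there wins over the same setting in an app's local directory.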
A few things I'd look at first:
- Diff the forwarding-related config files (especially outputs.conf) between the working and non-working forwarder to spot any differences; a sketch follows this list
- Check the _internal logs on your indexer
- Check the networking between the forwarder and the indexer
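A minimal sketch of the diff step, assuming you've copied outputs.conf from the working forwarder to C:\temp on the broken one (both paths are illustrative):

# Compare the broken forwarder's outputs.conf against the working forwarder's copy
Compare-Object (Get-Content 'E:\APPS\SplunkUniversalForwarder\etc\system\local\outputs.conf') (Get-Content 'C:\temp\outputs.conf.working')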
You didn't mention whether the two forwarders are on the same network; if they're on different networks, a firewall could be blocking the events from reaching your indexer. Run a search on your indexer like "index=_internal source=*splunkd.log host={your-host-that-doesn't-work}" and see what shows up.
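For example (the hostname is a placeholder), along with an extra check, not part of the original suggestion, against the indexer's incoming-connection metrics; hostname is, as far as I recall, the field tcpin_connections uses for the sending forwarder:

index=_internal source=*splunkd.log host=<broken-forwarder> (ERROR OR WARN)
index=_internal source=*metrics.log group=tcpin_connections hostname=<broken-forwarder>

If the second search returns nothing, the forwarder's connection never reaches the indexer's splunktcp input.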
Also run a search for that specific host on your indexer over 'All Time'. Sounds crazy, but the timestamps might be off. And make sure your user account can view the index you're sending the data to (especially if the account was recently created). I know I've made that mistake before :).
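One quick way to do the all-time check, again with a placeholder hostname; tstats stays fast even over 'All Time' and shows which indexes the host's events actually landed in:

| tstats count where index=* host=<broken-forwarder> by index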
Official docs regarding your issue:
Hope that helped!
