Getting Data In

Where do I start to look for Forwarder issues?

justin_deutsch
Explorer

I have a number of lightweight forwarders pointing to a single heavy forwarder, which in turn points to a single search head/indexer. The heavy forwarder has DB Connect installed.

Versions
Light Weight Forwarders: 6.1.0
Heavy Forwarder: 6.0.1
Search Head/Indexer: 6.0.1
(yes, I know we should upgrade the heavy forwarder and the search head/indexer)

Up until 31 Oct everything was working fine; since then, some events have stopped being forwarded to the indexer. For example, I can see that the database I am monitoring is generating the batch files on the forwarder and that they are being processed according to the logs on the server, but I can't see them when I search the index.

What I would like to know is where I should start looking to see why the events aren't being forwarded. Any help would be appreciated.

1 Solution

fabiocaldas
Contributor

You can start by reading the splunkd.log files on your forwarders and indexer; they can be found in the $SPLUNK_HOME/var/log/splunk folder. This log is usually very helpful.
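
Since forwarders send their own internal logs to the indexer by default, you can often read splunkd.log centrally from the search head instead of opening the file on each box. A minimal sketch, assuming the default _internal index is populated; the host value is a placeholder for one of your forwarders:

    index=_internal source=*splunkd.log* host=<forwarder_host> (log_level=ERROR OR log_level=WARN)
    | stats count BY component, log_level
    | sort - count

Any component that suddenly starts throwing errors around 31 Oct (TcpOutputProc, BatchReader, etc.) is a good place to dig further.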

In my case I use a pair of applications to help me identify problems. The first is S.o.S (Splunk on Splunk), installed on my search head, with the S.o.S add-on installed on all my Splunk instances to provide an overall view of every machine. I also use another app, called Forwarder Health, to better understand my forwarders.
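
If you can't install those apps straight away, a rough manual version of the same checks can be done with a couple of searches from the search head. These are only sketches; the index name below is a placeholder for wherever your DB Connect inputs are written.

Which forwarders are still connecting to the indexer:

    index=_internal source=*metrics.log* group=tcpin_connections
    | stats latest(_time) AS last_seen BY hostname

When the DB sourcetypes last received data:

    | metadata type=sourcetypes index=<your_db_index>
    | eval lastSeen=strftime(lastTime, "%Y-%m-%d %H:%M:%S")
    | table sourcetype lastSeen totalCount

If the heavy forwarder still shows up in tcpin_connections but the DB sourcetypes stopped updating around 31 Oct, the problem is most likely on the DB Connect / heavy forwarder side rather than the network path to the indexer.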

