I have a Splunk server that receives data from 2 normal (not light) forwarders.
On the forwarders, I had to create a local index with the same name as on the server, and I'm using local file monitors.
Everything is OK and data is getting to the server as it should.
The problem is, I can't seem to monitor the activity on the forwarders:
1. There are no events in the _internal indexes.
2. The default activity searches return no data.
3. On the main server, I also can't find any _internal index data related to the forwarders' activity.
A few days ago, splunkd on one of the forwarders was using more than 10% of the CPU, and I had no way of diagnosing the problem.
So, how should I monitor the forwarders' activity?
Forwarders don't forward their _internal activity by default. Add the following to your inputs.conf:
[monitor://$SPLUNK_HOME/var/log/splunk/splunkd.log]
_TCP_ROUTING = *
index = _internal
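Once that stanza is in place and the forwarder has been restarted, you can check on the indexer that the forwarder's internal events are arriving. A sketch of such a search, assuming a hypothetical forwarder host name of forwarder01 (substitute your own):
index=_internal host=forwarder01 source=*splunkd.log | stats count by sourcetype
If that returns events, the forwarder's own logs are being indexed and the default activity searches should start populating.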
You can use the settings from the LightForwarder app that send the internal logs on to the indexer. Note that this only changes where the logs end up, not what gets logged. To do this, edit inputs.conf on your forwarder to include the following:
[monitor://$SPLUNK_HOME/var/log/splunk/splunkd.log]
_TCP_ROUTING = *
index = _internal
[monitor://$SPLUNK_HOME/var/log/splunk/metrics.log]
_TCP_ROUTING = *
index = _internal
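After editing inputs.conf, restart the forwarder so the new monitor stanzas take effect, for example:
$SPLUNK_HOME/bin/splunk restart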
10% utilization is not uncommon for a full forwarder if it is doing some heavy processing. Without more detail about your inputs, this will be hard to diagnose.
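Once metrics.log is being forwarded, you can get a rough idea of where splunkd is spending its time. A sketch of such a search, assuming the standard pipeline metrics fields and a hypothetical forwarder host name of forwarder01:
index=_internal source=*metrics.log group=pipeline host=forwarder01 | stats sum(cpu_seconds) as cpu by processor | sort - cpu
That should show which processors on the forwarder are consuming the CPU; a similar search with group=per_sourcetype_thruput will show which inputs are sending the most data.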