I'm seeing some curious behavior out of two of my heavy forwarders. They aren't reporting data into _internal, but I am seeing app data from things I have installed on them. I checked the logs on the heavy forwarders and the only errors I see are about a missing Django secret key ("The SECRET_KEY setting must not be empty") in web_service.log.
Any idea what would cause this behavior?
A Splunk heavy forwarder will not forward _internal data on its own. You can add the following to outputs.conf to forward it to the indexers.
[tcpout]
forwardedindex.3.whitelist = (_internal)
If you have two or more indexes, separate them with a pipe:
forwardedindex.3.whitelist = (_internal|_audit)
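For reference, a minimal outputs.conf sketch on the heavy forwarder might look like the following. The group name "primary_indexers" and the server addresses are placeholders for your own environment:

[tcpout]
defaultGroup = primary_indexers
forwardedindex.3.whitelist = (_internal|_audit)

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

After editing outputs.conf, restart the forwarder so the change takes effect.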
If I understand correctly, the forwarder is sending data, but not from select indexes (in this case _internal). That typically means the forwarder is following its default behavior and still needs to be configured to send the _* data to the indexers.
The documentation at "Best practice: Forward search head data to the indexer layer" captures the best way to set this up. As with other global config, you'll want your Deployment Server to distribute this configuration to all your endpoints that forward data (not indexers).
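From memory, the outputs.conf that page recommends is along these lines (the group name and server list are placeholders for your environment):

[indexAndForward]
index = false

[tcpout]
defaultGroup = my_search_peers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:my_search_peers]
server = idx1.example.com:9997, idx2.example.com:9997

The forwardedindex.filter.disable = true setting turns off the default index filters so the _* indexes are forwarded along with everything else.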
Run the btool command on your heavy forwarder to see the status and configuration of your internal log inputs.
/opt/splunk/bin/splunk btool inputs list --debug
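You can check the effective output configuration the same way, for example:

/opt/splunk/bin/splunk btool outputs list tcpout --debug

The --debug flag shows which file each setting comes from, which makes it easy to spot a forwardedindex filter overriding your intended configuration.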
Wild guess: check how much free disk space you have on the drive the HF is installed on.
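On Linux, something like this will show free space on that volume:

df -h /opt/splunk

Splunk pauses indexing when free space falls below the minFreeSpace threshold set in server.conf (typically 5000 MB by default), which can also show up as gaps in internal data.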