Getting Data In

Why is my heavy forwarder running out of disk space so quickly?

Contributor

Hi,

I have an HF with 250 GB dedicated to the /opt directory, and this space is filling up quickly, even though I set indexAndForward = false in outputs.conf.
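For context, the relevant stanza in my outputs.conf looks roughly like this (the group name and indexer addresses here are placeholders, not my real values):

```ini
[tcpout]
defaultGroup = my_indexers
# Forward only; do not also index locally on this HF
indexAndForward = false

[tcpout:my_indexers]
server = 10.0.0.1:9997,10.0.0.2:9997
```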

Does anyone know why this happens?

Thanks,

1 Solution

Contributor

Great follow-up document that gave me a better understanding of outputs.conf.

Thanks,


Influencer

Do you see indexed data in var/lib/splunk on the HF?
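You can check which index directories are eating the space with something like this (a sketch; the path is the Splunk default under /opt/splunk, so override SPLUNK_DB if your install differs):

```shell
# List the ten largest index directories on the HF.
# SPLUNK_DB below is the default Splunk data path; adjust if needed.
SPLUNK_DB="${SPLUNK_DB:-/opt/splunk/var/lib/splunk}"
du -sh "$SPLUNK_DB"/* 2>/dev/null | sort -rh | head -n 10
```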


Contributor

Vijeta - yes, I see indexed data in that directory, which I was aware of.
I have 14 indexers in a cluster. Do you think I can remove the indexed data, since I only need the HF to parse and forward data to the indexers, not store data there?

Thanks,


Revered Legend

Check which indexes are filling up the space. If it's the _internal index (var/lib/splunk/_internaldb), then you might not be forwarding your internal logs from the HF to the indexers. You would need to set up outputs.conf for that; the link below shows the steps. It is written for Search Heads, but the configuration is the same for an HF (or any other Splunk instance).

https://docs.splunk.com/Documentation/Splunk/8.0.1/DistSearch/Forwardsearchheaddata
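In short, outputs.conf on the HF ends up looking something like this (a sketch along the lines of that doc; the group name and indexer addresses are placeholders for your own indexer list):

```ini
# Do not keep a local copy of indexed data on this instance
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
# Forward the internal indexes (_internal, _audit, ...) too,
# instead of writing them to local disk
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = 10.0.0.1:9997,10.0.0.2:9997
```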


Contributor

Great explanation and Splunk doc.

Thank you,


Influencer

If the data is present on the indexers, then there is no need to keep it on the heavy forwarder. Please check that your indexers have the same data; after that, you can delete these files on the heavy forwarder. Also make sure indexAndForward is not set to true anywhere on the HF. Maybe start by deleting a few of the least important files to free up space.
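One quick way to check for a stray setting (a sketch; assumes Splunk is installed under the default /opt/splunk path) is to grep every config file on the HF:

```shell
# Find every place indexAndForward is set on this HF
# (path assumes a default /opt/splunk install).
grep -ri "indexAndForward" /opt/splunk/etc/ 2>/dev/null || true
```

If any hit shows indexAndForward = true, that file is what keeps local indexing on; `splunk btool outputs list --debug` shows the merged effective result if you prefer.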
