When setting up a heavy forwarder, do I need to create the index locally as I do in my indexer cluster? I am setting up Splunk DB Connect and McAfee and have configured the Splunk server as a heavy forwarder (HF). I am testing writing to an index called bitbucket. For this to work, do I need a local index called bitbucket, as I have in my indexer cluster? I have configured the forwarder to not keep a local copy of the data.
Thanks!
As long as the index exists on the indexers, you don't have to create it on the heavy forwarder. However, there is a small quirk: if you are setting up data inputs using Splunk Web on the heavy forwarder, it doesn't have access to the list of indexes on the indexers. In that scenario, you may need to create the index locally just so you can select it in Splunk Web. In my opinion, though, that is an ugly hack.
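For reference, "not keeping a local copy" is controlled by the `indexAndForward` stanza in outputs.conf on the heavy forwarder. A minimal sketch, assuming two indexers on the default receiving port (the group name and server hostnames below are placeholders):

```
# outputs.conf on the heavy forwarder

# Do not index data locally; only forward it
[indexAndForward]
index = false

[tcpout]
defaultGroup = my_indexers

# Hypothetical indexer cluster peers listening on port 9997
[tcpout:my_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

With `index = false`, events pass through the heavy forwarder's parsing pipeline but are written only on the indexers, so no local bitbucket index is needed for the data path itself.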
Agree. If you're going to set up your data input directly in conf files (inputs.conf), then you don't need local indexes (indexes.conf) on the HF. For any other method of creating a data input, via the Splunk CLI or Splunk Web, you'd need an indexes.conf on the HF matching what you have on the indexer cluster. (With the CLI, it will warn about a non-existent index but may still work; I haven't tried it.)
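To illustrate the conf-file route: an input only needs an index assignment in inputs.conf, so nothing local is required. If you do want Splunk Web to list the index, a stub indexes.conf stanza is enough. Both sketches below are assumptions for illustration; the monitor path and sourcetype are hypothetical, not from DB Connect or the McAfee app:

```
# inputs.conf on the heavy forwarder -- no local index needed for this
[monitor:///var/log/bitbucket]
index = bitbucket
sourcetype = bitbucket:log
```

```
# indexes.conf stub on the heavy forwarder -- only needed so Splunk Web
# can show "bitbucket" in its index dropdown (the "ugly hack" above)
[bitbucket]
homePath   = $SPLUNK_DB/bitbucket/db
coldPath   = $SPLUNK_DB/bitbucket/colddb
thawedPath = $SPLUNK_DB/bitbucket/thaweddb
```

Since the forwarder is configured not to index locally, the stub index stays empty; it exists purely to satisfy the UI.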
Thanks for your answer.
Thank you all for the help... What you guys are saying makes total sense.