Hi All,
I would like to confirm whether a persistent queue can be used on an Intermediate Heavy Forwarder that receives logs from Universal Forwarders (UFs) and forwards them to Splunk Cloud.
After reviewing the documentation, I noticed there are certain restrictions on where a persistent queue can be enabled.
Use case:
We want to ensure that logs are stored locally on the Heavy Forwarder for at least 5–6 hours in case of a major outage where the Heavy Forwarder is unable to communicate with Splunk Cloud. Once the connection is restored, all logs stored in the queue or on disk should be forwarded to Splunk Cloud without any data loss. Our daily ingestion is around 300 GB.
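For sizing context: 300 GB/day works out to roughly 12.5 GB/hour, so buffering 5–6 hours would need somewhere in the region of 65–75 GB of local storage on the Heavy Forwarder (assuming the forwarded volume is broadly similar to the indexed volume).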
I would appreciate your recommendations on the best steps to implement this setup.
This is the current inputs.conf:
[splunktcp://9997]
disabled = 0
connection_host = ip
Hi @tech_g706
You can create a persistent queue for your input using the following config:
persistentQueueSize = <integer>[KB|MB|GB|TB]
However, I would strongly recommend reading https://help.splunk.com/en/splunk-enterprise/get-started/get-data-in/9.4/improve-the-data-input-proc... for more information on how it works, gotchas, etc. before implementing.
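Applied to your existing receiving stanza, it could look something like this (the 75GB value is only an illustration based on your stated 300GB/day; size it to what the HF's disk can actually hold):

[splunktcp://9997]
disabled = 0
connection_host = ip
# Illustrative sizing for roughly 6 hours of buffered data at your stated volume - adjust to available disk
persistentQueueSize = 75GB

Also make sure the filesystem the queue writes to actually has that much free space before an outage occurs.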
Remember that there is still risk of data loss if the HF fails whilst queuing data.