Hello everyone,
We are using a Universal Forwarder (UF) as an intermediate forwarder to send logs from other UFs in our on-premises environment to Splunk Cloud.
Whenever Splunk performs maintenance on our Splunk Cloud instance, the intermediate forwarder and all downstream UFs are unable to connect to Splunk Cloud. As a result, the forwarding queues on the forwarders become full.
After the maintenance is completed and the Splunk Cloud instance is back online, the queued logs do not resume forwarding automatically. The data remains stuck in the queue until we manually restart the forwarders.
We have tried increasing the [queue] maxSize setting in server.conf on the intermediate forwarder, but this did not resolve the issue.
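For reference, this is roughly the change we made on the intermediate forwarder (the value below is only an example, not our exact setting):

[queue]
maxSize = 10MB

placed in $SPLUNK_HOME/etc/system/local/server.conf on the IF.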
Has anyone else experienced this behavior, or have suggestions on how to handle it properly?
Thank you very much.
Hi @ThuLe ,
I have never experienced this behaviour, also because I usually use two HFs (Heavy Forwarders) as concentrators to Splunk Cloud.
Anyway, first of all: did you configure maxKBps = 0 on your concentrator to avoid queuing caused by throughput throttling?
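For example, something like this in limits.conf on the intermediate forwarder (just a sketch, put it in whatever app/local directory you use for forwarder configs):

[thruput]
maxKBps = 0

This removes the default forwarder throughput limit, so the output queues don't fill up simply because of bandwidth throttling.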
If yes, and the forwarders still don't resume sending after maintenance, open a case with Splunk Support.
Ciao.
Giuseppe