Monitoring Splunk

Use of persistent queue in UF


In which situations would the persistent queue be used on a UF — only when the indexer is slow to write, or is down for a long time?

In other words, would a UF ever have to drop events in any case, for example if the UF itself crashes?

If the UF crashes and is restarted, will it resume reading files from where it left off? And is the in-memory queue written to the persistent queue, or does the UF re-read the events from the source?


1 Solution



Hi @hectorvp .. if you use the indexer acknowledgement feature, then as per my understanding there will be no in-flight data loss (dropped data), because it uses a "handshake" logic: the forwarder keeps events until the indexer acknowledges them. With this handshake, if the UF crashes and later comes back up again, it knows which events were never acknowledged, so the events still in its queues are re-read and re-sent.
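As a sketch, indexer acknowledgement is enabled on the forwarder in outputs.conf; the stanza name and server addresses below are illustrative, not from the original post:

```ini
# outputs.conf on the Universal Forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# With useACK enabled, the UF keeps events in a wait queue until the
# indexer acknowledges they were written. After a crash/restart the
# unacknowledged events are re-sent instead of being lost.
useACK = true
```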

I have worked on a few financial projects where the client managers asked these questions, and I explained it to them as well.
In practical scenarios:
1) Clients usually will not agree to send the most critical data to Splunk at all, which is good.

2) So the less-critical data is what gets sent to Splunk.

3) Data loss while the UF or the indexer is down is a very rare case — let's say around 5 out of 100 cases — and when it happens, in those 5 cases the logs can be re-read.

4) The value of those logs is not that critical, so we need not worry much about these "worst case" scenarios.


(PS - I have given around 350+ karma points so far and received a badge for it. If an answer helped you, a karma point would be nice! We should all "Learn, Give Back, Have Fun".)


Hi @hectorvp ... regarding persistent queues, Hurricane Labs has written a nice blog:
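For reference, a persistent queue is configured per input in inputs.conf. A minimal sketch (the port and sizes below are illustrative); note that persistent queues apply to network, scripted, and FIFO inputs, not to file monitor inputs, which instead resume from the forwarder's checkpoint of how far each file was read:

```ini
# inputs.conf on the forwarder
[udp://514]
# in-memory queue for this input
queueSize = 1MB
# spill to disk once the in-memory queue is full
persistentQueueSize = 100MB
```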


Thanks @inventsekar 

I'm just trying to understand the worst case:

as mentioned in the blog linked in your response, if the UF crashes then the in-memory data is lost;

so if the UF crashes while its queues are completely full, then

6 MB (parsing queue default size) + 500 KB (output queue default size) = 6.5 MB

would be the maximum data loss that could ever happen, right?

The blog, however, only mentions a loss of 500 KB in that situation.
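For what it's worth, those two defaults come from different files, and both in-memory sizes can be tuned if that worst-case window matters. A hedged sketch of the relevant settings (the 10MB values are illustrative, not recommendations):

```ini
# server.conf on the forwarder -- parsing queue (default ~6MB on a UF)
[queue=parsingQueue]
maxSize = 10MB
```

```ini
# outputs.conf on the forwarder -- output queue (default 500KB)
[tcpout]
maxQueueSize = 10MB
```

Larger in-memory queues buffer more during indexer slowdowns, but they also increase the amount of un-persisted data at risk if the UF process itself crashes, so they complement rather than replace indexer acknowledgement.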

