Monitoring Splunk

Use of persistent queue in UF

hectorvp
Communicator

In which situations would the persistent queue be used on a UF? Only when the indexer is slow to write, or is down for a long time?

I mean, the UF shouldn't ever have to drop events just because the UF itself crashes, right?

If the UF crashes and is restarted, will it resume reading files from where it left off, with the in-memory queue written to the persistent queue, or will it re-read events from the source?

 

1 Solution

inventsekar
SplunkTrust

Hi @hectorvp .. if you use the indexer acknowledgement feature, then as I understand it there is no in-flight data loss (dropped data), because it uses a "handshake": the forwarder retains events until the indexer acknowledges them, so if the UF crashes and comes back up, it knows which events were never acknowledged and re-sends them for processing.
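For reference, indexer acknowledgement is enabled on the forwarder in outputs.conf. A minimal sketch, where the group name and server address are placeholders:

```ini
# outputs.conf on the forwarder
[tcpout:primary_indexers]        # group name is an example
server = idx1.example.com:9997   # placeholder indexer address
useACK = true                    # retain events until the indexer acknowledges receipt
```

With useACK enabled, the forwarder keeps a wait queue of sent-but-unacknowledged events, so a crash on either side leads to re-sending rather than silent loss (at the cost of possible duplicates).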

I have worked on a few financial projects where the client managers asked these same questions, and here is how I explained it. In practical scenarios:

1) The most critical data, the clients won't agree to send to Splunk at all, which is fine.

2) The less critical data can be sent to Splunk.

3) Data loss from a UF or indexer being down is a very rare case, say around 5 out of 100 cases, and when it happens, those logs can be re-read.

4) The value of those logs is not that critical, so we need not worry much about these "worst case" scenarios.

 

(PS - I have given around 350+ karma points so far and received the badge for it. If an answer helped you, a karma point would be nice! We should all "Learn, Give Back, Have Fun".)

inventsekar
SplunkTrust

Hi @hectorvp ... regarding persistent queues, Hurricane Labs wrote a nice blog post:

https://www.hurricanelabs.com/blog/mind-your-ps-and-queues-splunks-universal-forwarder-and-the-impac...
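For context, a persistent queue is enabled per input in inputs.conf by giving it a persistentQueueSize. A sketch, assuming a TCP input (the port and sizes are examples):

```ini
# inputs.conf on the forwarder
[tcp://:9999]                 # example network input; persistent queues do not apply to file monitors
queueSize = 1MB               # in-memory buffer in front of the persistent queue
persistentQueueSize = 100MB   # overflow spills to disk instead of blocking or dropping
```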

hectorvp
Communicator

Thanks @inventsekar 

I'm just trying to understand the worst case.

The blog post linked in your response says that if the UF crashes, the in-memory data is lost.

So if the UF crashes while its queues are completely full, then

6 MB (parsing queue default size) + 500 KB (output queue default size) = 6.5 MB

would be the maximum data lost, right? Is that the worst that can ever happen?

The blog, though, only mentions a loss of 500 KB in that situation.
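The arithmetic above can be checked quickly; a small sketch using the default queue sizes quoted in this thread (the actual defaults for your UF version may differ):

```python
# Rough worst-case in-memory loss if a UF crashes with full queues.
# Sizes are the defaults quoted in this thread; verify them against
# your UF version's configuration defaults.
parsing_queue_bytes = 6 * 1024 * 1024   # 6 MB parsing queue
output_queue_bytes = 500 * 1024         # 500 KB output queue

total_mb = (parsing_queue_bytes + output_queue_bytes) / (1024 * 1024)
print(f"{total_mb:.2f} MB")  # prints "6.49 MB", i.e. roughly 6.5 MB
```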

 


hrawat
Splunk Employee

Persistent queues are not available for file-monitoring inputs; they apply to network, scripted, and similar inputs.
