Monitoring Splunk

Use of persistent queue in UF

hectorvp
Communicator

In which situations would the persistent queue be used on a UF? Only if the indexer is slow at writing, or is down for a long time?

I mean, would the UF ever have to drop events, for example if the UF itself crashes?

If the UF crashes and we restart it, will it resume sending events from where it left off reading the files (with the in-memory queue written to the persistent queue), or will it read the events again from the source?

 

1 Solution

inventsekar
SplunkTrust

Hi @hectorvp .. if you use the indexer acknowledgement feature, as per my understanding, there will be no in-flight data loss (dropped data), because it uses a "handshake" logic. With this handshake, if the UF crashes and comes back up again, the forwarder knows which events the indexer has not yet acknowledged, so the events still sitting in its queues are re-read and re-processed.
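
For reference, indexer acknowledgement is turned on in outputs.conf on the forwarder; here is a minimal sketch, assuming a hypothetical target group name and indexer addresses:

# outputs.conf on the universal forwarder (group name and servers are examples)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# ask the indexer to acknowledge data; the forwarder keeps a copy of
# unacknowledged data in its wait queue and re-sends it if needed
useACK = true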

I have worked on a few financial projects where the client managers asked these questions, and I explained it to them as well.
In practical scenarios:
1) Clients usually will not agree to send truly critical data to Splunk at all, which is good.

2) So only the less-critical data is sent to Splunk.

3) Data loss while the UF or indexer is down is a very rare case, say around 5 out of 100 cases, and when it happens those logs can be re-read.

4) The value of those logs is not that critical, so we need not worry much about these "worst case" scenarios.

 

(PS - I have given around 350+ karma points so far and received a badge for it. If an answer helped you, a karma point would be nice! We should all "Learn, Give Back, Have Fun".)

inventsekar
SplunkTrust

Hi @hectorvp ... regarding persistent queues, Hurricane Labs has written a nice blog:

https://www.hurricanelabs.com/blog/mind-your-ps-and-queues-splunks-universal-forwarder-and-the-impac...
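
As background on how a persistent queue is actually enabled: it is configured per input in inputs.conf, and it applies to network, FIFO, and scripted inputs rather than [monitor://] file inputs. A rough sketch with example values (the port and sizes are placeholders):

# inputs.conf on the universal forwarder (port and sizes are examples)
[tcp://9514]
# in-memory queue for this input
queueSize = 1MB
# spill events to disk once the in-memory queue is full
persistentQueueSize = 100MB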

hectorvp
Communicator

Thanks @inventsekar 

I'm just trying to understand the worst case.

As mentioned in the blog linked in your response, if the UF crashes, the in-memory data is lost.

So if the UF crashes while the queues are completely full, then

6 MB (parsing queue default size) + 500 KB (output queue default size) = 6.5 MB

would be the maximum data loss that could ever happen, right?

The blog, however, only mentions a loss of 500 KB in that situation.
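
If that in-memory exposure is a concern, both queue sizes can be raised; a hedged sketch (the values are examples, and the exact defaults can vary by version):

# server.conf on the forwarder - in-memory parsing queue size (example value)
[queue=parsingQueue]
maxSize = 6MB

# outputs.conf on the forwarder - in-memory output queue size (example value)
[tcpout]
maxQueueSize = 1MB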

 
