I know this is a bit of a "how long is a piece of string" question, but I am looking for practical values from your experience in large clustered environments. We are estimating how fast Splunk can respond in near real time, but when analysing the difference between _time and _indextime, the values are much higher than I expected: around 300 seconds at the 90th percentile.
I just wanted to check how your systems compare. Is 300 seconds too much, or good enough for most data?
The normal average for file-based forwarding of events is roughly 100 seconds for _indextime - _time (syslog should be even smaller). Anything bigger than 300 seconds should be investigated, IMHO.
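To see where that latency is coming from, a search along these lines (a sketch; the index filter and field names other than the built-in _time and _indextime are illustrative) breaks the lag down per sourcetype:

```
index=* earliest=-1h
| eval lag=_indextime-_time
| stats avg(lag) AS avg_lag perc90(lag) AS p90_lag max(lag) AS max_lag BY sourcetype
| sort - p90_lag
```

Sourcetypes whose p90_lag sits well above ~100 seconds are good candidates for investigation (forwarder queues, thruput limits, or timestamp/timezone misconfiguration).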
Thank you, mate.