For one Splunk performance issue, we were told the following:
"We have a wait time of 57ms on the Search Head and a wait time of 25ms on the indexer. A wait time of maximum 10ms is good for proper processing in Splunk. This could be the reason for the storage performance issue."
Could someone please let us know whether there is a parameter that denotes wait time on the search head and indexers? If so, please give us some insight into what that parameter is and details about it.
We had a similar finding from Splunk: high I/O wait time on the Search Heads. I have used the following search to monitor it:
index=_introspection sourcetype=splunk_resource_usage component=IOStats data.mount_point="/apps/splunk"
| eval sla=10
| timechart limit=30 minspan=60s partial=f avg(data.avg_total_ms) AS avg_wait_ms max(sla) AS sla by host
Use a trellis-format timechart (split by host) to display the results. The sla=10 field is there to show the 10ms limit Splunk recommends.
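If you just want a quick list of offending hosts rather than a chart, a variation of the same search should work (this is a sketch using the same mount point and the 10ms threshold quoted above; adjust both to your environment):

index=_introspection sourcetype=splunk_resource_usage component=IOStats data.mount_point="/apps/splunk"
| stats avg(data.avg_total_ms) AS avg_wait_ms max(data.avg_total_ms) AS max_wait_ms by host
| where avg_wait_ms > 10

Any host returned here is averaging above the recommended limit over the search time range; max_wait_ms helps distinguish a sustained problem from occasional spikes.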
I haven't been able to work out why we have high I/O wait on the Search Heads, though; the indexer cluster seems to perform OK. The Search Head Captain has notably higher I/O wait than the other members. There have also been issues with the KV Store, so I am wondering whether that is related.
Note: I/O wait time is not a configuration that can be set; it is the result of the operations being carried out on the disk.
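One way to see what is driving the wait on a particular host is to chart the wait time alongside disk throughput from the same IOStats events. This is a sketch: I believe the introspection data also carries data.reads_ps and data.writes_ps fields, but check your raw events to confirm, and replace <search_head> with the host you are investigating:

index=_introspection sourcetype=splunk_resource_usage component=IOStats data.mount_point="/apps/splunk" host=<search_head>
| timechart span=5m avg(data.avg_total_ms) AS avg_wait_ms avg(data.reads_ps) AS reads_ps avg(data.writes_ps) AS writes_ps

If the wait spikes line up with write bursts (e.g. scheduled searches writing dispatch artifacts, or KV Store activity on the captain), that points at the workload; if the wait is high even at low throughput, the storage itself is the more likely suspect.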