Hi folks,
I have an issue with an HF: I'm getting spikes that reach 100% when sending data to Splunk Cloud. This happens approximately every 30 seconds.
I think this is because of the amount of data we are sending. It is also causing all data to reach Splunk Cloud with a delay, I mean the _time and the index time are different for all events because of this.
So I have some questions:
1- How can I check whether I'm sending a large amount of data at similar times during the day? Do you have a query or a dashboard I can use?
2- What are your recommendations for distributing that data so it is sent at different times?
I really appreciate your help on this. Thanks in advance!
Hi @splunk_luis12,
you have to analyze the queues on your HF, using a search like this:
index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue",
name=="splunktcpin", "0 - TCP In Queue",
name=="tcpin_cooked_pqueue", "0 - TCP In Queue")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| bin _time span=1m
| stats median(fill_perc) AS fill_percentage by host, _time, name
| where fill_percentage>70
| sort -_time
This way you can see when your queues fill up, which tells you at what times you are sending the most data.
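If you also want to measure how much data you send over time (your first question), a rough sketch based on the thruput metrics in metrics.log could look like this (the group and field names are the standard ones, adjust the span to what you need):
index=_internal source=*metrics.log sourcetype=splunkd group=per_sourcetype_thruput
| bin _time span=30m
| stats sum(kb) AS kb_sent by _time, series
| eval mb_sent=round(kb_sent/1024,2)
| sort -_time
And to quantify the delay you mentioned between _time and index time, a quick check could be something like this (replace your_index with one of your indexes):
index=your_index earliest=-4h
| eval latency_sec=_indextime-_time
| timechart span=5m median(latency_sec) AS median_latency_sec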
Ciao.
Giuseppe