Hello,
I have a use case where a few servers stopped ingesting for 3-4 hours while the user was running performance tests on those servers, and then the servers started ingesting again automatically. I am not sure what caused the ingestion to stop.
During the time the ingestion was stopped, the logs were still available on the server.
Please help me troubleshoot what might have caused this issue and how I can remediate it.
Thanks in advance
Hi @Roy_9,
probably the performance tests overloaded the system, so your hardware didn't have the resources left to read and forward logs.
You can check the queue fill levels using this search:
index=_internal source=*metrics.log sourcetype=splunkd group=queue host=<your_host>
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue",
name=="splunktcpin", "0 - TCP In Queue",
name=="tcpin_cooked_pqueue", "0 - TCP In Queue")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| bin _time span=1m
| stats Median(fill_perc) AS "fill_percentage" max(max) AS max max(curr) AS curr by host, _time, name
| where fill_percentage>70
| sort -_time
If you find queues at 100%, you have found the reason for the stop.
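As a quicker cross-check, the same metrics.log queue events also carry a blocked flag. This is only a sketch (replace <your_host> with the affected host), but it shows how often each queue reported itself as blocked during the test window:
index=_internal source=*metrics.log sourcetype=splunkd group=queue host=<your_host> blocked=true
| timechart span=1m count by name
A sustained count for a queue across the 3-4 hour gap points to the same bottleneck as a 100% fill percentage.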
Ciao.
Giuseppe
Hello @gcusello
The parsing queue fill percentages were less than 70% while the testing was running, so I am not sure what other factors are causing the issue. Do you have any thoughts?
Thanks
Hi @Roy_9,
I can only suppose that the testing activities had a higher priority than the Splunk activities, so Splunk had to wait for those activities to finish.
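If the queues were not full, it is worth checking whether the tests starved the host of CPU or memory while they ran. Assuming the affected host runs a full Splunk instance (or has introspection data forwarded; the _introspection index is not populated by a universal forwarder by default), a sketch like this shows host-wide resource usage around the gap:
index=_introspection sourcetype=splunk_resource_usage component=Hostwide host=<your_host>
| eval cpu_pct='data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart span=1m avg(cpu_pct) AS avg_cpu_pct avg(data.mem_used) AS avg_mem_used_mb
If CPU stays near 100% or memory is exhausted exactly during the 3-4 hour window, the ingestion process was most likely starved by the test load rather than blocked inside Splunk.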
Ciao.
Giuseppe