
Splunk HEC Token Log ingestion slowness

ram254481493
Explorer

I am using HEC tokens to collect logs from our servers. Sometimes we fire events but they never arrive in Splunk. We have one indexer, and all of our tokens are managed on that indexer. The moment we restart the indexer, all the logs come through. If we don't restart, we don't receive the logs sent via HEC tokens.

What is the issue, and how can I fix it? Why does it usually take a restart to pull the logs in?
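
For reference, I plan to check the indexer's internal logs for HEC errors with something like this (assuming the _internal index is searchable; as far as I know, HttpInputDataHandler is the splunkd component that logs HEC ingestion problems):

    index=_internal sourcetype=splunkd component=HttpInputDataHandler (log_level=ERROR OR log_level=WARN)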

1 Solution

tiagofbmm
Influencer

I'd check the Monitoring Console for queues filling up on your indexer. When you restart, they get cleared, so that may be why a restart appears to fix it.

Check the queue fill ratio under Indexing Performance in the Monitoring Console.
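
If you prefer a search over the dashboards, something like this against the internal metrics should show the fill ratio over time (assuming the default metrics.log settings):

    index=_internal source=*metrics.log* group=queue
    | eval fill_pct = round(current_size_kb / max_size_kb * 100, 2)
    | timechart max(fill_pct) by name

Sustained values near 100 for a queue mean everything upstream of it backs up too.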


ram254481493
Explorer

I checked just now and everything there is at 0. Do I need to check when the issue happens again? And if the queues are the cause, how can I clear them?


tiagofbmm
Influencer

You can't clear the queues. They are there to avoid a total Splunk crash, and they serve as a buffer that fills and empties according to the data flow rate and processing capacity. If you see that the queues are full when you stop receiving events, or when you receive too few events, then it's time for an evaluation.

Maybe you don't have adequate machines to ingest that amount of data, but I'm purely speculating. Check the queue indicators, and resources in general on your indexer layer, to see whether it is overloaded.
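
A quick way to see whether any queue was actually blocking around the time of an incident (same assumptions about metrics.log as above):

    index=_internal source=*metrics.log* group=queue blocked=true
    | timechart count by name

A queue only logs blocked=true when it was full and refusing data during the sampling interval, so any hits here would line up with your gaps.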


ram254481493
Explorer

OK, got it. Apart from that, are there any other areas where we could pinpoint the delay?


tiagofbmm
Influencer

I'd say it has to be resource consumption: either queues filling up, RAM or CPU, or even your network not coping with the volume. All of that can be analyzed in the Monitoring Console.
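
For CPU and memory on the indexer, the introspection data is handy; something along these lines should work (field names taken from the Hostwide resource-usage logs, if I remember them correctly):

    index=_introspection sourcetype=splunk_resource_usage component=Hostwide
    | eval mem_used_pct = round('data.mem_used' / 'data.mem' * 100, 2)
    | timechart avg(data.cpu_system_pct) AS cpu_system avg(data.cpu_user_pct) AS cpu_user avg(mem_used_pct) AS mem_used_pct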


ram254481493
Explorer

Hi Tiago, I checked again when the issue came up. The queue fill ratio and everything else looks good, but I don't know why logging sometimes stops; after a restart of the indexer, all the stuck logs came through. Could it be due to low logging volume?
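
Next time it happens I'll also try to check whether events are reaching the indexer at all during the outage, with something like this (the index name is just a placeholder for ours):

    index=_internal source=*metrics.log* group=per_index_thruput series=your_index_here
    | timechart span=1m sum(ev) AS events_indexed

If events_indexed drops to zero while the servers are still firing, the problem is on the receiving side.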


ram254481493
Explorer

OK, thanks Tiago. I will monitor it the next time this issue comes up.
