For the last two days I have not been receiving data in my Splunk internal index. Please help me understand this issue.
Hi @uagraw01,
if, in the same period, you're still receiving logs in your other (non-internal) indexes, it means you have data congestion: internal logs, having lower priority, are skipped. Check the queues on your forwarders.
Ciao.
Giuseppe
@gcusello Yes, we are receiving data in the other indexes in Splunk. We are not using any Universal Forwarder; we use Kafka, which sends data to the different indexes.
Doesn't the lack of data in the internal index cause any issue? I want to see internal logs for the last 24 hours, but nothing is registered there.
Hi @uagraw01,
I haven't used Kafka, but when only the internal indexes stop arriving, it's usually a queue issue.
Check your queues.
Ciao.
Giuseppe
@gcusello A queue issue on the Splunk side??
Hi @uagraw01 ,
yes, as I said, I have experienced this issue in some Splunk installations where there was queue congestion in the Splunk data flow from the forwarders to the indexers.
In these cases, the _internal logs have lower priority than the other logs, so they arrive late or don't arrive at all.
You can check the queue on your forwarders using a simple search:
index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue",
name=="splunktcpin", "0 - TCP In Queue",
name=="tcpin_cooked_pqueue", "0 - TCP In Queue")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| bin _time span=1m
| stats median(fill_perc) AS "fill_percentage" perc90(fill_perc) AS "90_perc" max(max) AS max max(curr) AS curr by host, _time, name
| where fill_percentage>70
| sort -_time
Ciao.
Giuseppe
@gcusello How can I execute your suggested search? It starts with index=_internal, and there is no data coming from that index.
@gcusello The last event from the _internal index arrived on February 27, 2024. Below is the result of your search; I am pasting it here. Could you please help me understand the issue with the queue?
Hi @uagraw01 ,
the results mean that you have some queuing, but it's not critical (you don't reach 100%).
Add the host with the missing logs to your search and see if there is queue congestion on that host.
Ciao.
Giuseppe
Hi @uagraw01 ,
yes: I have found many times that a stop in internal log forwarding is usually caused by a queue issue on the Splunk side.
Ciao.
Giuseppe
@isoutamo Until the 27th we received all the internal index logs.
We are using a standalone Splunk server, and there is no Monitoring Console set up. Internal index logs are still not visible to me, and without them I am not able to troubleshoot further. Please tell me what other workarounds are available to get data flowing into the internal indexes again.
@isoutamo As per the PDF you shared, the navigation data below belongs to the _internal index, and we are currently not getting any events from it. Is there any approach by which I can revive the _internal index data in Splunk?
You could look at that same data at the OS level in the log files under $SPLUNK_HOME/var/log/splunk/.
There are at least splunkd.log, metrics.log, etc. They contain the same data you have in _internal. Of course, you must have shell-level access to all the source hosts to see this.
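For example, a quick way to confirm splunkd is still writing its own log files is to list them by modification time and tail splunkd.log. This is only a sketch: the /opt/splunk default for SPLUNK_HOME is an assumption, so adjust it for your installation.

```shell
# Sketch: confirm splunkd is still writing its own logs at the OS level.
# SPLUNK_HOME=/opt/splunk is an assumed default; adjust for your installation.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
LOGDIR="$SPLUNK_HOME/var/log/splunk"

if [ -d "$LOGDIR" ]; then
  # Newest files first: recent modification times mean logging has not stopped.
  ls -lt "$LOGDIR" | head -10
  # The last lines of splunkd.log often show why forwarding/indexing stalled.
  tail -20 "$LOGDIR/splunkd.log"
else
  echo "No Splunk log directory at $LOGDIR; set SPLUNK_HOME and retry."
fi
```

If the timestamps on these files are current but nothing reaches the _internal index, the problem is on the ingestion side rather than in splunkd's own logging.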
You should just look a couple of pages later, where it says "Using the grep CLI command". That section and the pages after it show how you can do the same thing on the command line with those log files, like metrics.log.
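As a sketch of what that grep might look like: the command below pulls the queue metrics lines out of metrics.log. The /opt/splunk path and the sample log lines are assumptions for illustration (real metrics.log entries carry more fields), so adjust both for your environment.

```shell
# Sketch: grep queue fill metrics straight out of metrics.log.
# SPLUNK_HOME=/opt/splunk is an assumed default; adjust for your installation.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
METRICS="$SPLUNK_HOME/var/log/splunk/metrics.log"

# Fall back to a small demo file (illustrative sample lines) so the
# commands can be tried on a machine without Splunk installed.
if [ ! -f "$METRICS" ]; then
  METRICS=$(mktemp)
  printf '%s\n' \
    '02-27-2024 10:00:01 INFO Metrics - group=queue, name=indexqueue, max_size_kb=500, current_size_kb=480' \
    '02-27-2024 10:00:01 INFO Metrics - group=queue, name=parsingqueue, max_size_kb=512, current_size_kb=10' \
    > "$METRICS"
fi

# Show recent fill levels for the main pipeline queues; current_size_kb
# close to max_size_kb indicates a congested queue.
grep 'group=queue' "$METRICS" \
  | grep -E 'name=(parsingqueue|aggqueue|typingqueue|indexqueue)' \
  | tail -20
```

A queue whose current_size_kb sits near max_size_kb for many consecutive samples is the congestion point to investigate.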