Hello everyone,
We have noticed a sudden and unexpected increase in daily license usage in our Splunk environment over the past few days, causing the license threshold to be exceeded.
While investigating the source of this increase, we identified an index that had previously generated a high volume of data. However, since the license usage started exceeding the limit, that same index has been showing 0 events ingested.
This behavior seems contradictory: the index that appears to be responsible for the spike is no longer ingesting any data at all.
Has anyone encountered a similar issue before?
Thank you in advance.
OK. If the index is _now_ showing no ingestion, it doesn't mean that those events aren't still there, right? Or do you have such a short retention period that they have already rolled out to frozen?
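A quick way to check is to look at what is still physically in the buckets for that index - for example (my_index is just a placeholder for your index name):

| dbinspect index=my_index
| stats min(startEpoch) AS oldest_event, max(endEpoch) AS newest_event
| fieldformat oldest_event=strftime(oldest_event, "%Y-%m-%d %H:%M:%S")
| fieldformat newest_event=strftime(newest_event, "%Y-%m-%d %H:%M:%S")

If the oldest event is still well within your retention period, the data from the spike should still be searchable.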
Can you just look into what those events are? What caused them?
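In practice, the quickest way to see what drove the usage is the license usage log itself, and then a breakdown of what is sitting in the index. Something along these lines (my_index is a placeholder, you need the _internal data from your license manager to be searchable, and adjust the time range to cover the spike):

index=_internal source=*license_usage.log type=Usage idx=my_index
| eval GB=round(b/1024/1024/1024, 2)
| timechart span=1d sum(GB) AS GB_indexed

Then, over the days of the spike, check what the events actually were:

index=my_index earliest=-7d@d
| stats count BY host, source, sourcetype
| sort - count

That usually points straight at the host or source that suddenly got noisy.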
There can be several reasons for this. One is that the source (or one of the sources) simply malfunctioned and started producing a lot of logs you wouldn't normally expect to ingest (debug logs and the like).
Another typical cause is configuring a new source (or a new input) that has pre-existing data to ingest. I did this once at home - I told Splunk to ingest my exim logs, forgetting that it had some 3 to 5 years of backlog to index.
Of course, there is also the possibility that someone misconfigured something or was a bit trigger-happy with the collect command.
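If you suspect the backlog scenario, comparing event time with index time in the affected index makes it obvious, because freshly indexed old data shows a large lag (my_index is a placeholder; run it over the period of the spike):

index=my_index
| eval lag_days=round((_indextime - _time)/86400, 1)
| stats count BY lag_days
| sort - lag_days

If you see lots of events with a lag of hundreds of days, something fed Splunk its old backlog.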
Hi @BRFZ
How are you sending the data to your indexers? Did you make any changes to the ingestion to try to reduce it once you started seeing the spike?
My gut feeling would be that something higher up the chain has crashed, i.e. the forwarder or an intermediate forwarder. Are you able to see any _internal logs from the forwarder sending the data (assuming the data is sent via a forwarder)?
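If the forwarder's internal logs are being forwarded (they are by default), a couple of quick checks - the host name my_forwarder is a placeholder:

| metadata type=hosts index=_internal
| search host=my_forwarder
| eval last_seen=strftime(recentTime, "%Y-%m-%d %H:%M:%S")

index=_internal host=my_forwarder sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| stats count BY component
| sort - count

The first tells you when the forwarder last sent anything at all; the second shows whether splunkd on it is logging errors or warnings.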