Hello there,
I have an issue with the internal indexes on my indexers (_audit, _introspection, _metrics): for an unknown reason, the data doesn't roll properly and always exceeds the max size defined for the internal indexes.
For example, for _metrics we defined a 500 MB max size, but we actually have 6.41 GB of data...
How can I force the internal indexes to respect the defined max size?
Thanks for your help
Cheers
There is a lot going on "behind the scenes" when it comes to managing index space and retention. There is also another layer of complexity regarding configuration file processing.
1. There is a layering mechanism that combines config files into an effective configuration. See https://help.splunk.com/en/data-management/splunk-enterprise-admin-manual/9.4/administer-splunk-ente... for details. With clustered indexers, the configuration pushed from the CM via etc/peer-apps should have the highest precedence. Use btool to verify. For example:
splunk btool indexes list _internal --debug
2. The limits are only approximate here and there (as the config spec pages say explicitly in several places).
3. Most importantly, Splunk manages space in terms of whole buckets, and it does so using a housekeeping thread that wakes up periodically. So it's not a synchronous operation; quite the opposite. Every now and then the housekeeping thread checks which indexes need to be "tidied up" and then initiates the appropriate actions (rolling buckets to the next tier, possibly initiating additional replication, and so on).
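To make point 3 tangible, the size caps live in indexes.conf. A minimal sketch of the relevant settings (the values below are placeholders, not recommendations):
[_metrics]
# total size cap for this index on this peer, across hot/warm/cold tiers (MB)
maxTotalDataSizeMB = 500
# age-based retention; buckets whose newest event is older than this get frozen (seconds)
frozenTimePeriodInSecs = 1209600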
Hi,
The configuration wasn't at fault here; it was simply the fact that the _internal index contained the _internal indexes of the other instances that led me astray.
But your answer still helped me, so thanks ^^
I'm not sure what you mean by "contained _internal indexes of other instances".
In a well-engineered Splunk environment, all events from your whole Splunk infrastructure should be sent to your indexers. Your SHs, HFs and so on should in general _not_ store their data locally, so they do not have "their own" _internal indexes; they send their events destined for the _internal index to the indexers.
So the indexers should store in their _internal index events coming from all components across your whole Splunk infrastructure (including forwarders).
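For illustration, this forwarding is set up in outputs.conf. A minimal sketch for a SH or HF (the group name and hosts are placeholders), which also disables local indexing on that full instance:
[indexAndForward]
# do not keep a local copy on this instance
index = false
[tcpout]
defaultGroup = primary_indexers
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997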
Hi @spoonmaniac
Can you confirm which setting you're using to specify the max size?
If you're using maxTotalDataSizeMB, it's important to note that this applies per index per peer node, not at a global level. The replication factor etc. also factors into this, so it's not a simple calculation.
How many indexers do you have? Is it maxTotalDataSizeMB that you are setting? Also, I think that if the hot buckets are allowed to grow larger than this, the limit may only take effect once they roll; however, I need to double-check this.
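One way to compare the actual size against the cap per peer is a search along these lines (field names as exposed by the data/indexes REST endpoint; run it from a search head that has the peers attached as search peers):
| rest /services/data/indexes splunk_server=*
| search title=_metrics
| table splunk_server title currentDBSizeMB maxTotalDataSizeMB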
Hi,
thanks for your answer.
In fact, I have 4 indexers + a cluster of 3 search heads + 3 search heads for specific needs.
So the indexers receive the _internal data of all these instances, and every UF and HF also sends a little bit of its _internal data.
When I add up all these sources, I get the total shown in the interface.
Now I know it's normal behaviour; it's even logical in the end, but it doesn't look like it at first glance.
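For the record, the per-instance breakdown can be checked with a quick search along these lines:
| tstats count where index=_internal by host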
Thanks for your answer.
Are you talking about clustered or individual indexers? It's easier to help if you tell us more about your case.
If the first, then you must create your own app under the CM's $SPLUNK_HOME/etc/{master, manager}-apps or use _cluster/local/indexes.conf (under the previous directory) to manage those indexes in the cluster.
If those are individual indexers, then you should manage them with the GUI or with an app that you install on those nodes.
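For the clustered case, after editing e.g. $SPLUNK_HOME/etc/manager-apps/_cluster/local/indexes.conf on the CM, the bundle is pushed to the peers with:
splunk apply cluster-bundle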
I'm talking about clustered indexers.
The _cluster/local/indexes.conf is correct; the size limits defined in this file are correctly applied as the max size on the indexers.
I checked whether there was a configuration conflict (local configuration on the indexers or another indexes.conf in etc/{master, manager}-apps) but I didn't find any.
I don't know what to check next.
Have you tried btool on those indexers to see which options are actually in use, and whether they are the same on all nodes? Another option is to use the admin's little helper app (https://splunkbase.splunk.com/app/6368); with it you could see those e.g. from the CM or MC.
I would say that it is a better tool for doing checks across several instances.
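For example, on each indexer (adjust the index name to whichever one you're checking):
splunk btool indexes list _metrics --debug | grep -i maxtotaldatasizemb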
Can you paste those values here?