Hello everyone:
I have a deployment with one search head and 2 indexers. On one of my indexers, the _internaldb index (index=_internal) has grown exponentially and is about to saturate my disk. Do you recommend erasing it? Or what should I do to keep this from happening?
fer_tlaloc,
Check whether you have somehow enabled DEBUG logging. Search "index=_internal DEBUG"; if you spot many of these events, you may be running in DEBUG. Logs grow much faster in debug, and it is not recommended in prod unless you are troubleshooting something for a short period.
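As a sketch, a couple of standard SPL searches can confirm this and show which component is noisy (log_level and component are fields extracted from the splunkd sourcetype; the time range is an example):

```
index=_internal sourcetype=splunkd log_level=DEBUG earliest=-24h
| stats count by component
| sort - count
```

If a component shows up with a large count, its logging level was likely raised to DEBUG (typically in $SPLUNK_HOME/etc/log.cfg) and should be set back to INFO or WARN.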
Next, check your retention settings in indexes.conf - every customer wants different retention policies for the internal logs. The _internal index has a default retention of 30 days, unless you changed it. If you find it has a longer retention, reduce it to a period that works better for you and restart Splunk; older buckets will then be archived (if you have frozen settings) or deleted.
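A minimal indexes.conf sketch of what that looks like (the values here are examples, not recommendations; the stanza path assumes a standalone indexer):

```
# $SPLUNK_HOME/etc/system/local/indexes.conf
[_internal]
# Retention: buckets older than this many seconds are frozen
# (archived or deleted). 30 days * 86400 s/day = 2592000 s.
frozenTimePeriodInSecs = 2592000
# Optional hard cap on total index size, in MB; oldest buckets
# are frozen first when the cap is reached.
maxTotalDataSizeMB = 51200
```

Restart Splunk after editing for the settings to take effect.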
Also check out other answers about the internal logs:
https://answers.splunk.com/answers/26834/audit-and-internal-index-data-retention.html
hello there,
can you elaborate a little?
what is the outcome you are seeking?
why did it grow so much? did you have warnings / errors causing splunk to write tons of messages per minute or second?
what is the desired retention period on your splunk internal data?
Hi, thanks for the help.
Specifically, I have 1 TB on this server, ingesting about 30 GB a day, and I have high stick and Kaspersky there. This week I noticed that my _internaldb was at 239 GB and my disk is 98% full. I would rather not delete it, I will back it up. But why does it grow so much? What do you advise?
if you want, you can move it to frozen storage at a location of your choice; read here:
https://docs.splunk.com/Documentation/Splunk/6.6.2/Indexer/Setaretirementandarchivingpolicy
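As a sketch of the archiving approach from that doc page, pointing coldToFrozenDir at a directory of your choice makes Splunk move expired buckets there instead of deleting them (the path below is a placeholder):

```
# indexes.conf on the indexer
[_internal]
# When a bucket ages past frozenTimePeriodInSecs, copy its raw data
# to this directory instead of deleting it.
coldToFrozenDir = /opt/splunk_frozen/_internal
```

Frozen buckets are not searchable; you would need to thaw them back into the index to search them again.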
as to why it grew so much: did you check whether errors / warnings were swamping your internal index? did you enable debug as suggested in the other answer? you can see where the spike in data happened by searching with timechart count, or better:
| tstats count where index = _internal by _time
and try and think what changes at this particular time
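To take that one step further, a sketch of follow-up searches to find what is producing the volume (standard SPL; span and time range are examples):

```
| tstats count where index=_internal by _time span=1d

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| stats count by component
| sort - count
```

The first shows the day the growth started; the second shows which splunkd component was flooding the index with errors or warnings around that time.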
Adonio, thanks for your help!! My index is OK 🙂