Splunk Dev

Problems with the _internaldb index

fer_tlaloc
New Member

Hello everyone:

I have a deployment with one search head and 2 indexers. On one of my indexers the _internaldb index has grown exponentially and is about to saturate my disk. Do you recommend deleting it? Or what should I do to keep this from happening?

bheemireddi
Communicator

fer_tlaloc,

Check if you somehow enabled "DEBUG". You can run "index=_internal DEBUG"; if you spot these events, you are probably running in DEBUG mode. Logs grow much faster in debug, and it is not recommended to enable it in prod unless you are troubleshooting something for a short period.
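
For example, a quick sketch to see which components are producing DEBUG events (the field names apply to the splunkd logs, and the 24-hour window is just an example, adjust as needed):

index=_internal log_level=DEBUG earliest=-24h
| stats count by component
| sort - count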

Next, check your retention settings in indexes.conf - every customer wants different retention policies for the internal logs. Internal logs have a default retention of 30 days unless you changed it. If you find they have a longer retention, reduce it to a period that works better for you and restart Splunk; the older buckets will then be archived (if you have frozen settings) or deleted.
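
For example, a minimal indexes.conf sketch that shortens _internal retention to 14 days (the value is only an example, set whatever fits your policy):

[_internal]
# 14 days in seconds; buckets older than this roll to frozen (archived or deleted)
frozenTimePeriodInSecs = 1209600
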
Also check out other answers on the internal logs

https://answers.splunk.com/answers/26834/audit-and-internal-index-data-retention.html

adonio
Ultra Champion

hello there,
can you elaborate a little?
what is the outcome you are seeking?
why did it grow so much? did you have some warnings / errors where splunk wrote tons of messages per minute or second?
what is the desired retention period on your splunk internal data?

fer_tlaloc
New Member

Hi, thanks for the help.

Specifically, I have 1 TB on this server, indexing about 30 GB a day, and I also have high stick and Kaspersky data there. This week I noticed that my _internaldb had reached 239 GB and the disk is at 98%. I would rather not delete it, I will back it up. But why does it grow so much? What do you advise?

adonio
Ultra Champion

if you want, you can move it to frozen at a location of your choice, read here:
https://docs.splunk.com/Documentation/Splunk/6.6.2/Indexer/Setaretirementandarchivingpolicy
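
for example, a minimal indexes.conf sketch that archives frozen _internal buckets instead of deleting them (the path is only an example, pick a location with enough space):

[_internal]
# buckets that age out are copied here instead of being deleted
coldToFrozenDir = /opt/splunk_frozen/_internal
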
as to why it grew so much, did you check if you had errors / warnings swamping your internal index? did you enable debug as suggested in the other answer? you can review where the spike in data happened by searching with timechart count, or better:
| tstats count where index = _internal by _time
and try to think about what changed at that particular time
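
for example, a sketch that breaks the daily volume down by sourcetype so you can see which internal log is growing (run it over whatever time window covers the spike):

| tstats count where index=_internal by _time span=1d sourcetype
| timechart span=1d sum(count) by sourcetype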

fer_tlaloc
New Member

Adonio, thanks for your help!! My index is OK 🙂
