Splunk Dev

Problems with the _internaldb index

fer_tlaloc
New Member

Hello everyone:

I have a deployment with one search head and 2 indexers. On one of my indexers the _internaldb index has grown exponentially and is about to saturate my disk. Do you recommend deleting it? Or what should I do to keep this from happening?


bheemireddi
Communicator

fer_tlaloc,

Check if you somehow enabled DEBUG logging. You can search "index=_internal DEBUG"; if you spot these events, you may be running in DEBUG. Logs grow much faster in DEBUG, and it is not recommended to enable it in prod unless you are troubleshooting something for a short period.
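For example, something like this (using the standard splunkd fields log_level and component, and a 24-hour window just as an example) should show whether DEBUG messages make up a big share of your internal traffic and which component is producing them:

index=_internal log_level=DEBUG earliest=-24h | stats count by component | sort - count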

Next, check your retention settings in indexes.conf - every customer wants different retention policies for the internal logs. The internal logs have a default retention of 30 days unless you changed it. If you find they have a longer retention, reduce it to a value that works better for you and restart Splunk, so the old buckets will be archived (if you have the frozen settings) or deleted.
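Just as an illustration (the size cap and the archive path below are made-up example values, adjust or drop them for your environment), the relevant stanza in indexes.conf on the indexer would look something like:

[_internal]
# 30 days in seconds; buckets older than this are frozen (deleted or archived)
frozenTimePeriodInSecs = 2592000
# optional cap on the total size of the index, in MB (example value)
maxTotalDataSizeMB = 50000
# only set this if you want frozen buckets archived instead of deleted (example path)
coldToFrozenDir = /opt/splunk/frozen/_internal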
Also check out other answers about the internal logs:

https://answers.splunk.com/answers/26834/audit-and-internal-index-data-retention.html


adonio
Ultra Champion

hello there,
can you elaborate a little?
what is the outcome you are seeking?
why did it grow so much? did you have some warnings / errors and splunk wrote tons of messages per minute or second?
what is the desired retention period on your splunk internal data?


fer_tlaloc
New Member

Hi, thanks for the help.

Specifically, I have 1 TB on this server for about 30 GB a day, and I have high stick and Kaspersky there. This week I noticed that my _internaldb was at 239 GB and the disk is at 98%. I would rather not delete it, I will back it up, but why does it grow so much? Or what do you advise me?


adonio
Ultra Champion

if you want, you can move it to frozen storage in a location of your choice, read here:
https://docs.splunk.com/Documentation/Splunk/6.6.2/Indexer/Setaretirementandarchivingpolicy
as to why it grew so much, did you check if you had errors / warnings swamping your internal index? did you enable debug as suggested in the answer below? you can see where the spike in data happened by searching with timechart count, or better:
| tstats count where index = _internal by _time
and try to think about what changed at that particular time
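for example, something like this (the 1d span and the sourcetype split are just suggestions, adjust to your needs) will show both when the spike happened and which sourcetype drove it:

| tstats count where index=_internal by _time span=1d, sourcetype

and if you want to confirm how much of that space is sitting in hot/warm vs cold buckets before you archive anything, | dbinspect index=_internal | stats sum(sizeOnDiskMB) by state will break it down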


fer_tlaloc
New Member

Adonio, thanks for your help! My index is OK 🙂
