Getting Data In

retention policy

jbates58
Observer

Hi All,

I have tried looking over the documentation for this, but I am super confused. And really struggling to wrap my head around this.

I have an environment where Splunk is ingesting syslog from 2 firewalls. The logs are only audit / management related, and these need to be sent to a separate server for compliance (hence Splunk).

I  want to configure a retention policy where this data is deleted after 1 year, as that is the specific requirement.

From what I can tell, I just need to add the "frozenTimePeriodInSecs" setting to the indexes.conf stanza for the "main" index (as this is where the events are going).

Current ingestion is ~150,000 events per day, and daily ingestion is ~30-35MB. However, this is subject to change in the future as more firewalls come online etc.

There is plenty of storage available. However the requirement is just 1 year of searchable data.

But I keep seeing things about hot/warm/cold/frozen etc., and I just don't get it. All that's needed is 1 year of searchable data; anything older than (time.now() - 365 days) can be deleted.

 

Can someone please assist me with what i need to do to make this work 🙂


isoutamo
SplunkTrust

Hi

Getting an exact retention time (e.g. exactly 1 year) in Splunk can be close to mission impossible 😞

There are several parameters that control when Splunk removes a bucket, and a bucket is only removed once all events in it are older than your defined retention time! You must understand that when Splunk calculates retention, it is really applied to all events in a bucket together. It is not event-based, because the smallest storage unit is a bucket, not an event. Practically this means that Splunk can only remove a bucket when every event in that bucket is older than your defined retention.

In your case you have quite a low event volume, which means that you could have one bucket containing events from several months at most (roughly the bucket size limit of 15GB divided by 30MB/day, divided again by the number of hot buckets for that index). Usually there are several (default is 3) active hot buckets (per search peer) at the same time, where Splunk can write new events. By default a bucket stays hot for up to 90 days, until it becomes full, or until you restart Splunk. There are also some other parameters which could affect this!
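To bring bucket roll-off closer to a one-year target, the time span a single hot bucket may cover can be capped. A minimal indexes.conf sketch along those lines (the values are illustrative, not from this thread; adjust for your environment):

```ini
# indexes.conf -- illustrative values only
[main]
# Roll buckets to frozen (deleted, since no coldToFrozenDir is set) once
# every event in the bucket is older than 1 year (365 * 86400 seconds)
frozenTimePeriodInSecs = 31536000

# Cap the time span a single hot bucket may cover (here ~30 days), so a
# low-volume index does not accumulate many months of events in one bucket
maxHotSpanSecs = 2592000
```

A shorter `maxHotSpanSecs` means buckets age out in smaller chunks, so the oldest searchable data overshoots the 1-year mark by weeks rather than months.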

Here are some links where you can learn more about how this actually works:

r. Ismo


gcusello
SplunkTrust

Hi @jbates58 ,

First, on the server containing the indexes, don't use the main index; create a custom index instead (e.g. firewalls).

Then define the retention you want (one year) for this new index.

Then assign the new index name to the inputs that you should have on your Forwarders.

Finally, when you ingest more logs, you should monitor your index to understand whether the size you configured is correct or whether you need to enlarge it.
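The steps above might look roughly like this in the conf files (the index name "firewalls" and the UDP port are examples, not taken from this environment):

```ini
# indexes.conf on the indexer -- "firewalls" is an example index name
[firewalls]
homePath   = $SPLUNK_DB/firewalls/db
coldPath   = $SPLUNK_DB/firewalls/colddb
thawedPath = $SPLUNK_DB/firewalls/thaweddb
# 1 year = 31536000 seconds; buckets roll to frozen (deleted by default)
frozenTimePeriodInSecs = 31536000

# inputs.conf on the Forwarder -- example syslog input on UDP 514
[udp://514]
index = firewalls
sourcetype = syslog
```

Keeping this data out of main also means the 1-year retention only applies to the firewall logs, not to everything else landing in the default index.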

Ciao.

Giuseppe


inventsekar
SplunkTrust

Hi @jbates58 Yes, at times the retention policy can be tricky.

On the DMC server, please check this: Settings > Monitoring Console > Indexing > Indexes and Volumes > Index Detail: Instance

[screenshot: Splunk-retention-policy.jpg]

EDIT - Please check the docs at https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Indexesconf

One thing to remember: frozenTimePeriodInSecs vs maxTotalDataSizeMB can cause confusion as well (whichever limit is reached first takes precedence over the other).
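A quick back-of-envelope check (using the ~35 MB/day estimate from the question and the documented indexes.conf default for maxTotalDataSizeMB) suggests the time limit, not the size cap, would be the trigger here:

```python
# Rough sizing check -- daily volume is the upper estimate from the question
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # value to use for frozenTimePeriodInSecs
daily_ingest_mb = 35                    # ~30-35 MB/day per the question
yearly_raw_mb = daily_ingest_mb * 365   # raw data accumulated over one year
default_max_total_mb = 500_000          # indexes.conf default maxTotalDataSizeMB

print(SECONDS_PER_YEAR)                       # 31536000
print(yearly_raw_mb)                          # 12775 (~12.5 GB)
print(yearly_raw_mb < default_max_total_mb)   # True: time limit fires first
```

So at current volumes the index stays far below the default size cap, and frozenTimePeriodInSecs alone governs when data is frozen.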

 


jbates58
Observer

Here are the contents of that page. I have redacted a little bit of info relating to the environment.

 

[screenshot: jbates58_0-1705284047904.png]

[screenshot: jbates58_1-1705284129840.png]

 
