Splunk Enterprise

Splunk shows only 9 months (270 days) of data - how do I increase the retention period?

spodda01da
Path Finder

Hi Everyone,

I've got a strange issue and I'm unable to find a fix.

All the indexes are configured with a longer retention period, but the oldest data is limited to 270 days. I checked the indexer cluster but did not find anything that could be causing this issue. Here is the configuration used by all indexes:

[example1]
coldPath = volume:primary/example1/colddb
homePath = volume:primary/example1/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/example1/thaweddb
frozenTimePeriodInSecs = 39420043
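For reference, 39420043 seconds / 86400 ≈ 456 days, i.e. roughly 15 months - well beyond the 270 days of data we actually see.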

I checked the index and indexer disk space, and there is still space left for more data.

Please let me know if anyone has had a similar experience or has suggestions for increasing the retention period.

Thanks,


spodda01da
Path Finder

Hello Everyone,

Regrettably, the oldest available data across all indexes has been reduced to approximately 7 months.

I have already conducted the following checks:

Current index size: less than 200GB (configured for 500GB)
Indexer disk space (cluster): all indexers currently have 30-35% free space
frozenTimePeriodInSecs = 39420043 (approximately 15 months)

Any assistance with troubleshooting would be greatly appreciated.

Thank you.

isoutamo
SplunkTrust

Hi

You could search the _internal index for the reason a bucket was frozen. You can start with:

index=_internal *cold* *<your index or bucket Id>*

You will get a list of buckets. Select one that has been frozen, then use that bucket ID to trace how and why it was frozen.
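
For example, something like this should narrow the results down to actual freeze events (a sketch - the candidate field assumes the standard BucketMover log format, so adjust if your version logs differently):

index=_internal sourcetype=splunkd component=BucketMover "will attempt to freeze"
| table _time host candidate _raw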

r. Ismo


spodda01da
Path Finder

Thank you,

I ran it for a few random buckets and observed the following message:

Moving bucket='rb_1681312487_1677890027_1731_FBA51F26-2043-4798-B18D-2D637A7347B9', initiating warm_to_cold: from='/Data/splunkdb/o365/db' to='/Data/splunkdb/o365/colddb', caller='chillIfNeeded', reason='maximum number of warm buckets exceeded'.

I'm not sure whether this could be affecting the data retention period. Initially I had "maxHotBuckets = 10" defined, but it's no longer defined and I've left it at the default value.

[test]
coldPath = volume:primary/test/colddb
homePath = volume:primary/test/db
thawedPath = $SPLUNK_DB/test/thaweddb
maxTotalDataSizeMB = 512000
frozenTimePeriodInSecs = 39420043
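
If I understand the docs correctly, the warm_to_cold move in that message only relocates buckets from homePath to coldPath and doesn't delete anything, and the number of warm buckets is governed by maxWarmDBCount rather than maxHotBuckets. Something like this should show the effective value (default install path assumed; "test" is just my index name):

/opt/splunk/bin/splunk btool indexes list test --debug | grep -i maxWarmDBCount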


spodda01da
Path Finder

To add to the above details, the "thaweddb" folder is empty and doesn't contain any buckets.

For now, I have increased the "frozenTimePeriodInSecs" by a few more months, but I'm not sure if it will work. Any other advice would be very helpful.


spodda01da
Path Finder

I still can't find anything that leads to a solution. Any suggestion would be of great help!


PickleRick
SplunkTrust

The data can be frozen (in your case - deleted if not configured otherwise) in one of three cases:

1) The buckets in the index get too old (most recent event in a bucket is older than the retention period for the index) or

2) The index exceeds the size limit

3) The volume hits the size limit.

So you have to verify whether any of those three conditions is met.
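
Since almost all of your indexes lose data at the same age, case 3 deserves a close look. As a sketch, the relevant setting lives on the volume stanza in indexes.conf (the path matches your logs; the 500000 value is purely illustrative):

[volume:primary]
path = /Data/splunkdb
maxVolumeDataSizeMB = 500000

When a volume exceeds its limit, Splunk freezes the oldest buckets across all indexes on that volume, which would produce exactly this "same oldest age everywhere" pattern.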

Additionally, check your effective configuration with btool. Maybe you're looking in the wrong file for the settings.
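
For example (assuming a default /opt/splunk install; adjust the path and index name to match yours):

/opt/splunk/bin/splunk btool indexes list example1 --debug
/opt/splunk/bin/splunk btool indexes list volume:primary --debug

The --debug flag shows which configuration file each effective setting comes from.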


GaetanVP
Contributor

Hello @spodda01da,

Did you check that your index size does not exceed your maxTotalDataSizeMB value (here 512000 MB)?

Based on the docs:

https://docs.splunk.com/Documentation/Splunk/9.0.2/admin/Indexesconf

maxTotalDataSizeMB = <nonnegative integer>
* The maximum size of an index, in megabytes.
* CAUTION: The 'maxTotalDataSizeMB' size limit can be reached before the time
  limit defined in 'frozenTimePeriodInSecs'.

To check the size of your index, you can use:

du -sch /opt/splunk/var/lib/splunk/<index_name>
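
Alternatively, you can check from within Splunk itself - a sketch using dbinspect (replace <index_name> as above):

| dbinspect index=<index_name>
| stats sum(sizeOnDiskMB) as total_size_mb, min(startEpoch) as oldest_epoch
| eval oldest_bucket=strftime(oldest_epoch, "%Y-%m-%d")

This returns the total size on disk plus the earliest event time still held in any bucket.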

spodda01da
Path Finder

I did; the size defined is 500GB and most of the indexes are around 300-400GB.


SanjayReddy
SplunkTrust

Hi @spodda01da 

Run the following search to check whether buckets are being frozen before their actual retention period:

index=_internal sourcetype=splunkd bucketmover "*will attempt to freeze*"
| eval "Index Last Event"=strftime(now,"%d-%m-%y %H:%M:%S")
| eval "Index First Event"=strftime(latest,"%d-%m-%y %H:%M:%S")
| eval "Actual Data Stored"=round((now-latest)/86400,0)
| eval "Index Retention Days"=frozenTimePeriodInSecs/86400
| table candidate "Index Retention Days" "Actual Data Stored" "Index First Event" "Index Last Event"

(Here now, latest, frozenTimePeriodInSecs and candidate are fields extracted from the bucketmover log events themselves, not eval functions.)

[screenshot: sample results]
If "Actual Data Stored" is less than "Index Retention Days", then the index is ingesting more data than its size limit can hold for the full retention window, and as mentioned by @GaetanVP, maxTotalDataSizeMB takes precedence over frozenTimePeriodInSecs.

In that case you may need to increase the disk space.

----
Regards,
Sanjay Reddy

----
If this reply helps you, Karma would be appreciated.


spodda01da
Path Finder

Thanks @SanjayReddy, I ran the search and see the following details:

[screenshot: search results]

This issue is not specific to one index; almost all of them have their oldest data at around 270 days.
