Splunk Enterprise

Frozen Time Period in Splunk Indexer

Chirag812
Loves-to-Learn

I am using the query below to get the index sizes, consumed space, and frozenTimePeriodInSecs details.

| rest /services/data/indexes splunk_server="ABC"
| stats min(minTime) as MINUTC max(maxTime) as MAXUTC max(totalEventCount) as MaxEvents max(currentDBSizeMB) as CurrentMB max(maxTotalDataSizeMB) as MaxMB max(frozenTimePeriodInSecs) as frozenTimePeriodInSecs by title
| eval MBDiff=MaxMB-CurrentMB
| eval MINTIME=strptime(MINUTC,"%FT%T%z")
| eval MAXTIME=strptime(MAXUTC,"%FT%T%z")
| eval MINUTC=strftime(MINTIME,"%F %T")
| eval MAXUTC=strftime(MAXTIME,"%F %T")
| eval DAYS_AGO=round((MAXTIME-MINTIME)/86400,2)
| eval YRS_AGO=round(DAYS_AGO/365.2425,2)
| eval frozenTimePeriodInDAYS=round(frozenTimePeriodInSecs/86400,2)
| eval DAYS_LEFT=frozenTimePeriodInDAYS-DAYS_AGO
| rename frozenTimePeriodInDAYS as frznTimeDAYS
| table title MINUTC MAXUTC frznTimeDAYS DAYS_LEFT DAYS_AGO YRS_AGO MaxEvents CurrentMB MaxMB MBDiff
Output for index 'XYZ':

title:        XYZ
MINUTC:       24-06-2018 01:24
MAXUTC:       10-02-2024 21:11
frznTimeDAYS: 62
DAYS_LEFT:    -1995.87
DAYS_AGO:     2057.87
YRS_AGO:      5.63
MaxEvents:    13115066
CurrentMB:    6463
MaxMB:        8192
MBDiff:       1729

 

For index 'XYZ' the frozenTimePeriod shows 62 days, so per that retention setting it should only keep roughly the last 2 months of data, but my MINTIME still shows a very old date, '24-06-2018 01:24'. When I check the event counts for data older than 62 days, they are very small compared to the counts for the last 62 days (the current event counts are very high).

So why are these older events still showing in Splunk, and why only a few of them (not all)? I want to understand this scenario before increasing the frozen time period.


scelikok
SplunkTrust

Hi @Chirag812,

Splunk manages retention on a per-bucket basis. This means that for a bucket to be frozen, the newest data in that bucket must be older than frozenTimePeriodInSecs.

Normally all events in a bucket have close timestamps. But if some of your sources send data with old timestamps, those events land in the same bucket as recent events, which makes the bucket's oldest timestamp much older than its newest. That is why you see the situation above. Unfortunately, there is no way to fix this other than waiting until even the newest data in that bucket is older than frozenTimePeriodInSecs.
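If you want to confirm this per-bucket behavior yourself, the dbinspect command shows each bucket's oldest and newest event times, so you can spot buckets whose newest event is still inside the retention window even though their oldest event dates back to 2018. A minimal sketch, assuming the index name XYZ from your table (replace it with your real index):

| dbinspect index=XYZ
| eval oldest=strftime(startEpoch,"%F %T"), newest=strftime(endEpoch,"%F %T")
| eval spanDays=round((endEpoch-startEpoch)/86400,2)
| sort startEpoch
| table bucketId state oldest newest spanDays eventCount sizeOnDiskMB

Buckets with a large spanDays are the ones holding a mix of old and recent timestamps, and they will not freeze until their "newest" time passes the retention limit.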

To prevent this behavior in the future, check your data sources for the problems below.

- Always use healthy NTP servers for all your data sources to be sure they have correct timestamps.

- Check for timestamp extraction problems and use the TIME_PREFIX and TIME_FORMAT settings to prevent Splunk from taking the wrong part of the log as the timestamp. If there are epoch-like number patterns in your data, Splunk could pick one of them up as the timestamp; see the props.conf sketch after this list.
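For example, a minimal props.conf sketch for anchoring timestamp extraction. The sourcetype name my_app_log and the assumed event layout ("ts=2024-02-10 21:11:05 ...") are illustrative only; adjust them to your actual data:

# props.conf on the indexer or heavy forwarder that parses this sourcetype
[my_app_log]
# Assumed example: events start with "ts=2024-02-10 21:11:05 ..."
TIME_PREFIX = ^ts=
TIME_FORMAT = %Y-%m-%d %H:%M:%S
# Limit how many characters after TIME_PREFIX Splunk scans for the timestamp,
# so it does not latch onto other numbers later in the event
MAX_TIMESTAMP_LOOKAHEAD = 30
# Optionally cap how far in the past a parsed timestamp may be (in days);
# anything older falls back to a recent timestamp instead
MAX_DAYS_AGO = 10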

You can use the query below to see the wrongly timestamped events you need to fix.

index=ABC earliest=1 latest=-63d
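If it helps, you can also group those wrongly timestamped events by source to find which inputs need fixing. A small variation of the same search, where index=ABC stands in for your real index name:

index=ABC earliest=1 latest=-63d
| stats count min(_time) as oldest max(_time) as newest by index sourcetype source
| convert ctime(oldest) ctime(newest)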

 

If this reply helps you, an upvote and "Accept as Solution" would be appreciated.