Splunk Search

Why are older events not shown anymore?

floko
Explorer

Dear Community!

The situation is as follows:
I have a couple of indexes which are gathering log events from several heavy forwarders. The forwarders do the parsing and assignment to the different indexes.
Each index on the indexer is configured to grow up to 750 MB and keep the data for 30 days.

However, when I search over the last 30 days, the oldest events are missing. The oldest ten days or so are completely empty, and the few days after that contain only a portion of the original data.

This window travels with time. For example, if I search over the last 7 days now, I find every event. If I repeat the same search in three weeks, I won't find any of those events (or at least most of them will be missing).

I have checked the status of the indexes and they are not full; each has at least 50 MB free, usually far more.

Can anybody please explain why the data is being removed before the 30-day retention period has passed?

Thanks in advance!

1 Solution

harsmarvania57
SplunkTrust

It looks like the problem is that you are running with the default setting of the maxDataSize parameter, which is auto (750 MB on a 64-bit OS). This means every bucket can grow to a maximum of 750 MB before it rolls from hot to warm.

At the same time you have configured maxTotalDataSizeMB = 750, so the total index size is also capped at 750 MB. A single bucket can therefore fill the entire index: as soon as another hot bucket is created, you hit maxTotalDataSizeMB and Splunk removes the oldest bucket, which contains roughly 750 MB of data. That is why your old data disappears from Splunk.

So you need to configure maxDataSize properly so that buckets roll properly and you don't end up in this situation.
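
As an illustration only (the exact numbers are placeholders, not from your actual setup), a stanza along these lines keeps individual buckets much smaller than the overall index cap, so hitting the cap freezes only a thin slice of the oldest data instead of most of the index:

[frq-sip-utg]
homePath = $SPLUNK_DB/frq-sip-utg/db
coldPath = $SPLUNK_DB/frq-sip-utg/colddb
thawedPath = $SPLUNK_DB/frq-sip-utg/thaweddb
frozenTimePeriodInSecs = 2592000
# Total index cap in MB (placeholder value).
maxTotalDataSizeMB = 750
# Roll buckets from hot to warm at ~100 MB instead of the 750 MB "auto"
# default, so freezing the oldest bucket removes far less data at once.
maxDataSize = 100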


Yunagi
Communicator

Could you post your indexes.conf configuration?
What about disk space? Does the partition (where the indexes are located) have enough free space?


floko
Explorer

Hi Yunagi, thank you for the answer!

Disk space is not a problem, only about 18% of the partition is used.
The indexes.conf file contains several stanzas, but except for the name, they are all exactly the same:

[frq-sip-utg]
coldPath = $SPLUNK_DB/frq-sip-utg/colddb
homePath = $SPLUNK_DB/frq-sip-utg/db
maxTotalDataSizeMB = 750
thawedPath = $SPLUNK_DB/frq-sip-utg/thaweddb
frozenTimePeriodInSecs = 2592000

somesoni2
SplunkTrust

Data retention is governed by two parameters: event age (frozenTimePeriodInSecs) and index size (maxTotalDataSizeMB). Since your total index size is quite small, that may be causing older cold buckets to be rolled and removed/frozen. Check whether you see any bucket-rolling activity for your index with a query like this:

index=_internal sourcetype=splunkd component=BucketMover "freeze succeeded" "YourIndexNameHere"
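
If you want a rough timeline of how often buckets are being frozen, the same internal events can be summarized per day (the index name is a placeholder):

index=_internal sourcetype=splunkd component=BucketMover "freeze succeeded" "YourIndexNameHere"
| timechart span=1d count AS frozenBuckets

You can also check how much of the 750 MB cap the index is currently using, for example with:

| dbinspect index=YourIndexNameHere
| stats sum(sizeOnDiskMB) AS totalSizeMB count AS buckets by state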

floko
Explorer

@harsmarvania57 Thank you! That was my missing link!
I have now increased maxTotalDataSizeMB to 5 GB, while leaving maxDataSize at its default of 750 MB. We'll see how it works out in a few days...
It seems I can't flag your answer as an answer to my original question, though.
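
For reference, the revised stanza now looks roughly like this (5 GB written here as 5120 MB; maxDataSize is left unset, so it stays at its auto default of 750 MB):

[frq-sip-utg]
coldPath = $SPLUNK_DB/frq-sip-utg/colddb
homePath = $SPLUNK_DB/frq-sip-utg/db
maxTotalDataSizeMB = 5120
thawedPath = $SPLUNK_DB/frq-sip-utg/thaweddb
frozenTimePeriodInSecs = 2592000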

@somesoni2 Thank you for this valuable Splunk life hack! 🙂
It helped me find another issue I may need to fix... it looks like the frequency of the roll-overs increased significantly a few days ago.
