Why are older events not shown anymore?

floko
Explorer

Dear Community!

My situation is as follows:
I have a couple of indexes that gather log events from several heavy forwarders. The forwarders do the parsing and the assignment to the different indexes.
Each index on the indexer is configured to grow up to 750 MB and to keep its data for 30 days.

However, when I search over the last 30 days, I'm missing the oldest events. The oldest ten days or so of the window are completely empty, while the next couple of days contain only a portion of the original data.

This window travels with time. For example, if I search over the last 7 days now, I find every event. If I repeat the same search in three weeks, those events are gone (or at least most of them are missing).

I have checked the status of the indexes and they are not full; each has at least 50 MB free, most of them far more.

Can anybody please explain why the data is being removed before the 30-day retention period has passed?

Thanks in advance!

harsmarvania57
Ultra Champion

It looks like the problem is that you are running with the default setting of the maxDataSize parameter, which is auto (750 MB on a 64-bit OS). This means every bucket can grow to a maximum of 750 MB before it rolls from hot to warm.

At the same time you have configured maxTotalDataSizeMB = 750, so the total index size is capped at 750 MB as well. As soon as a single bucket in the index holds about 750 MB of data and a new hot bucket is created, you hit maxTotalDataSizeMB, and Splunk removes the oldest bucket, taking roughly 750 MB of data with it. That is why your old data disappears from Splunk.

You need to configure maxDataSize properly so that buckets roll at a sensible size and you don't end up in this situation.
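
As an illustration of how the two settings interact, here is a sketch using the values from this thread (the comments are annotations, not part of the original config):

[frq-sip-utg]
homePath = $SPLUNK_DB/frq-sip-utg/db
coldPath = $SPLUNK_DB/frq-sip-utg/colddb
thawedPath = $SPLUNK_DB/frq-sip-utg/thaweddb
# Age-based retention: 30 days (2592000 seconds)
frozenTimePeriodInSecs = 2592000
# Size-based retention: total size of hot + warm + cold buckets.
# With maxDataSize left at "auto" (~750 MB per bucket), a single full
# bucket already consumes this entire quota, so the oldest bucket is
# frozen as soon as the next hot bucket is created.
maxTotalDataSizeMB = 750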

Yunagi
Communicator

Could you post your indexes.conf configuration?
What about disk space? Does the partition (where the indexes are located) have enough free space?


floko
Explorer

Hi Yunagi, thank you for the answer!

Disk space is not a problem; only about 18% of the partition is used.
The indexes.conf file contains several stanzas, but apart from the name they are all exactly the same:

[frq-sip-utg]
coldPath = $SPLUNK_DB/frq-sip-utg/colddb
homePath = $SPLUNK_DB/frq-sip-utg/db
maxTotalDataSizeMB = 750
thawedPath = $SPLUNK_DB/frq-sip-utg/thaweddb
frozenTimePeriodInSecs = 2592000

somesoni2
Revered Legend

Data retention works based on two parameters: the event's age (managed by frozenTimePeriodInSecs) and the index size (managed by maxTotalDataSizeMB). Since your total index size is small, that may be causing older cold buckets to be rolled and removed/frozen. Check whether you see any bucket-rolling activity for your index with a query like this:

index=_internal sourcetype=splunkd component=BucketMover "freeze succeeded" "YourIndexNameHere"
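
If you also want to see how close the index is to its size limit, a dbinspect search along these lines can help (a sketch; replace the index name with yours):

| dbinspect index=frq-sip-utg
| stats count AS buckets, sum(sizeOnDiskMB) AS totalMB BY state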

floko
Explorer

@harsmarvania57 Thank you! That was my missing link!
I have now increased maxTotalDataSizeMB to 5 GB, while leaving maxDataSize at its default of 750 MB. We'll see how it works out over the next few days...
It seems I can't mark your reply as the accepted answer to my original question, though.
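
For reference, the updated stanza now looks roughly like this (5 GB ≈ 5000 MB; only maxTotalDataSizeMB changed compared to the original):

[frq-sip-utg]
coldPath = $SPLUNK_DB/frq-sip-utg/colddb
homePath = $SPLUNK_DB/frq-sip-utg/db
thawedPath = $SPLUNK_DB/frq-sip-utg/thaweddb
frozenTimePeriodInSecs = 2592000
# raised from 750 so a single ~750 MB bucket no longer exhausts the index
maxTotalDataSizeMB = 5000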

@somesoni2 Thank you for this valuable Splunk life hack! 🙂
It helped me find another issue I may need to fix... it looks like the frequency of the roll-overs increased considerably a few days ago.
