We had been using Splunk Enterprise for the last few years. In May 2019 we migrated the data from all indexes from Splunk Enterprise to Splunk Cloud (i.e., we reconfigured the data inputs in Splunk Cloud).
My query is this: we only started ingesting data in May 2019, but if we search for data from 2015, 2016, 2017 and so on, I can still see events in Splunk Cloud for a few of the indexes.
The default retention is 90 days, yet it holds data that is very old; I can even see events from 2013. How does the bucketing system work in Splunk Cloud?
At the same time, the data we configured in May 2019 stops being searchable after 90 days, as per the retention policy. So how does Splunk Cloud still hold the much older data?
Is there an architecture diagram explaining this mechanism or how it works?
Kindly help to check and advise.
The default retention on Splunk Cloud is 90 days, but your organisation may have purchased additional storage if you need to retain data for longer.
You can, however, configure the "searchable" timeframe yourself, reducing or increasing it depending on your needs; note that retention and searchability are not the same thing.
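For context, on Splunk Enterprise (and behind the scenes in Splunk Cloud, where it is managed for you via a support request) the retention window is controlled per index in indexes.conf. A minimal sketch, assuming a hypothetical index named `my_index`:

```
# indexes.conf (Enterprise; in Splunk Cloud this is set by Splunk support)
[my_index]
# Buckets roll hot -> warm -> cold as they age and fill.
# Once a bucket's newest event is older than this many seconds,
# the bucket is frozen (deleted by default): 7776000 s = 90 days.
frozenTimePeriodInSecs = 7776000
# Optional: copy frozen buckets to an archive path instead of deleting them.
# coldToFrozenDir = /opt/splunk/frozen/my_index
```

The key point is that `frozenTimePeriodInSecs` is evaluated against event timestamps, not ingestion time, which is why freshly ingested historical data can age out quickly.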
Thanks for your response.
But my query is that we only started ingesting data into Splunk Cloud in May 2019, yet when I search data from several years earlier I can still find events, some from 2013 or 2014.
How is that possible?
Before that we were using Splunk Enterprise; then in May 2019 we purchased a Splunk Cloud subscription and started configuring the data inputs fresh in Splunk Cloud. But when I search over all time, I can see events from 2013 and 2014 for a few indexes.
I am not sure how it picks those up or how this works.
Is it possible you have deployed a hybrid environment?
That would allow you to use an on-premises search head to search data from both your on-prem Splunk Enterprise indexers and Splunk Cloud.
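One way to check where those old events are actually coming from is to group them by the serving indexer using the default `splunk_server` field. A hedged sketch, where `my_index` is a placeholder for one of the indexes showing 2013/2014 events:

```
index=my_index earliest=0 latest=@d
| stats count by splunk_server, index
```

If the results list on-premises indexer hostnames alongside cloud ones, the old events are being pulled from your Enterprise environment through a hybrid search, not from Splunk Cloud storage.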