Have you looked in the Splunk logs for anything?
Have you restarted Splunk? Does it complain about dirty indexes?
Is this a historical problem or a recent one? Were there any changes around the time it started?
Hi MHibbin, Nothing in the Splunk logs, and Splunk has been restarted. I think this may be historical; unfortunately this is something I've inherited, so it's hard for me to say when it started.
Check Manager > Indexes to verify you have data older than 30 days.
You can also use an epoch timestamp converter to check the timestamps on the buckets in your indexes. Bucket directories are named with the following format, where the two middle fields are epoch timestamps of the newest and oldest events in the bucket: var/lib/splunk/defaultdb/db/db_<newestEvent>_<oldestEvent>_<uniqueID>
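If it helps, here is a minimal Python sketch of that conversion. The bucket directory path is an assumption; point it at your own index's db directory:

    # decode_buckets.py - rough sketch, assumes the standard bucket naming
    # db_<newestEvent>_<oldestEvent>_<uniqueID> with epoch-second timestamps
    import os
    from datetime import datetime, timezone

    BUCKET_DIR = "/opt/splunk/var/lib/splunk/defaultdb/db"  # assumed path

    for name in sorted(os.listdir(BUCKET_DIR)):
        parts = name.split("_")
        # skip hot buckets and anything not matching db_<new>_<old>_<id>
        if len(parts) < 4 or parts[0] != "db":
            continue
        newest = datetime.fromtimestamp(int(parts[1]), tz=timezone.utc)
        oldest = datetime.fromtimestamp(int(parts[2]), tz=timezone.utc)
        print(f"{name}: {oldest:%Y-%m-%d} .. {newest:%Y-%m-%d}")

If the oldest timestamp printed for any bucket is more than 30 days back, the data is there and the problem is elsewhere (e.g. the search, not retention).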
If you don't have any data older than 30 days, check for the frozenTimePeriodInSecs attribute in your indexes.conf. This is typically found in the etc/system/default or local directories, but it can be configured in any app's default or local directory. It can be set as a global default or on a per-index basis; see the example below.
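As a rough illustration (the index name and retention values here are examples, not taken from your system), the attribute can appear at either level in indexes.conf:

    [default]
    # applies to every index that does not override it
    # (188697600 seconds is roughly six years)
    frozenTimePeriodInSecs = 188697600

    [main]
    # per-index override: keep this index for 90 days (90 * 86400 seconds)
    frozenTimePeriodInSecs = 7776000

You can check which file is supplying the effective value for each index with: splunk btool indexes list --debug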
Hi Luke, Thanks for that. I think I found the answer. The non-internal indexes go back to 2010 in cold, but some of the dashboards in use rely on the _internal index, which is not in cold and only has 30 days of history. Looking in /opt/splunk/etc/system/default/indexes.conf, frozenTimePeriodInSecs is set to 2419200, which is 28 days.
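If you want _internal to keep more history, one option (a sketch only; the 90-day figure is just an example, pick whatever retention you need) is to override the value in etc/system/local/indexes.conf rather than editing etc/system/default, since default files are replaced on upgrade:

    # /opt/splunk/etc/system/local/indexes.conf
    [_internal]
    # example: keep 90 days of internal logs (90 * 86400 = 7776000 seconds)
    frozenTimePeriodInSecs = 7776000

A change to indexes.conf typically requires a Splunk restart to take effect, and keeping more _internal data will of course use more disk.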