Monitoring Splunk

Abruptly missing old data

bloizides
Observer

We have been collecting syslog data from our hosts for the past 5 years or so. Syslog goes into our 'main' index, along with other events. We ran a search today and noticed that all of our data in 'main', and a few other indexes, prior to late 2018 is gone. *poof* Looking at our monitoring graphs, our disk space usage plummeted back in March, which is probably when this happened.

I have no idea where the data went. The timing of the drop in disk usage does not correspond to any upgrade or maintenance we performed during that period.

Can anyone offer suggestions on how to troubleshoot this? Is it possible that, for some reason, Splunk rolled it over to frozen? I'm grasping at straws here. Any suggestions would be appreciated.
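
For reference, here is roughly what I was planning to run to see what is still on disk and what the retention settings say (a rough sketch; the index name and the exact dbinspect/btool invocations are taken from the docs and may need adjusting for our environment):

    Earliest remaining event in the index (run over All Time):
    | tstats min(_time) AS earliest_event WHERE index=main

    Time range and state of every bucket still on disk:
    | dbinspect index=main
    | stats min(startEpoch) AS oldest_bucket_start, max(endEpoch) AS newest_bucket_end BY state

    Effective retention settings for the index (on the indexer CLI):
    $SPLUNK_HOME/bin/splunk btool indexes list main --debug | egrep 'frozenTimePeriodInSecs|maxTotalDataSizeMB|coldToFrozenDir'

If frozenTimePeriodInSecs or maxTotalDataSizeMB turns out to be smaller than expected and no coldToFrozenDir (or coldToFrozenScript) is configured, the frozen buckets would simply have been deleted rather than archived, which would match what we are seeing.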

Thanks!
