Hello everyone, I've just run into a very unusual incident that I haven't seen before in Splunk. I have an alert set up for the following query:

index=apps host=pprd*rctrl* OutOfMemoryError startminutesago=5

It has been running perfectly well for months, along with the 80+ other alerts I have defined. Last night at 12:30:02 AM, we received an OutOfMemoryError that was properly captured and relayed to me. I resolved the issue by restarting the affected service. Problem solved, back to bed.

Today at exactly 12:30:02 PM, the same alert fired again for the exact same log entry from 12:30:02 AM this morning! I have checked the entire Splunk server cluster and the application server instance, and all of them have identical (and accurate) date and time settings. The Splunk Forwarder shows no errors and is running properly. There are no new problems on the server instance, and there hasn't even been a new log entry since about 3:00 AM.

I have no idea why this happened, or how it could happen. Any ideas on what's going on?
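For context, the alert is an ordinary scheduled saved search. A rough sketch of what the savedsearches.conf stanza looks like is below; the stanza name, cron schedule, and email address are placeholders for illustration, and I've written the time window in the modern earliest=-5m form rather than the legacy startminutesago=5 modifier the actual search uses:

[OOM Alert - pprd rctrl]          # placeholder stanza name
search = index=apps host=pprd*rctrl* OutOfMemoryError earliest=-5m
enableSched = 1
cron_schedule = */5 * * * *       # assumed: runs every 5 minutes to match the 5-minute window
counttype = number of events
relation = greater than
quantity = 0                      # fire whenever at least one matching event is found
action.email = 1
action.email.to = oncall@example.com   # placeholder address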