I discovered an error in one of my correlation searches via the Cloud Monitoring Console. After I fixed the error, the reported latency for that specific correlation search suddenly jumped to more than 4 million seconds. Looking at the underlying events the Cloud Monitoring Console reads from, I can see scheduled_time values from more than a month ago.
Did I do something dumb, or is Splunk actually trying to run all of those missed scheduled executions now, meaning I just need to wait it out? Or is there a way to stop them from running?
I have already disabled the correlation search and restarted from the server controls....
If the correlation search is set to run in Continuous mode (as opposed to real-time) then, yes, Splunk will attempt to re-run the skipped search intervals. Change it to real-time mode to avoid that. See https://docs.splunk.com/Documentation/ES/7.1.2/Admin/Configurecorrelationsearches#Change_correlation... for more information.
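For reference, the schedule mode corresponds to the realtime_schedule attribute in savedsearches.conf: 0 means continuous (the scheduler backfills every missed interval), 1 means real-time (missed intervals are dropped). Here is a minimal sketch of checking the current value over the management REST API; the hostname, credentials, app namespace, and search name are placeholders, not your actual values:

    # Read the schedule mode of one correlation search via the REST API.
    # realtime_schedule = 0 -> continuous (backfills skipped runs)
    # realtime_schedule = 1 -> real-time (skipped runs are dropped)
    curl -k -u admin:changeme -G \
      "https://your-stack.splunkcloud.com:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches/My%20Correlation%20Search" \
      -d f=realtime_schedule -d output_mode=json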
That does indeed answer the question of what is going on, thanks.
Any idea how I could stop it from trying to run an insane number of searches, or should I just wait? (This is Splunk Cloud, by the way, so I can't SSH in and fix things myself.... I've already restarted from the server settings part of the GUI.)
As mentioned, try changing the CS from continuous to real-time.
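If you want to do that over the management REST API (handy on Splunk Cloud where there is no shell access), a rough sketch follows. Note these are assumptions, not verified against your stack: REST access on port 8089 may first have to be enabled for your Splunk Cloud environment, and the hostname, credentials, app namespace, search name, and <sid> below are all placeholders:

    # 1) Switch the correlation search to real-time scheduling so the
    #    scheduler stops dispatching backfill runs for missed intervals.
    curl -k -u admin:changeme \
      "https://your-stack.splunkcloud.com:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches/My%20Correlation%20Search" \
      -d realtime_schedule=1

    # 2) List jobs that are already dispatched; scheduler-launched runs
    #    have SIDs that start with "scheduler__".
    curl -s -k -u admin:changeme \
      "https://your-stack.splunkcloud.com:8089/services/search/jobs?output_mode=json" \
      | jq -r '.entry[].name' | grep '^scheduler__'

    # 3) Cancel an in-flight backfill run by deleting its job by SID.
    curl -k -u admin:changeme -X DELETE \
      "https://your-stack.splunkcloud.com:8089/services/search/jobs/<sid>"

This only cancels runs that have already been dispatched; it won't clear intervals the scheduler has queued but not yet launched.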
Ah sorry, I thought you meant that could have prevented this. I tried changing it to real-time, but it keeps going through all the scheduled searches....
At least it seems we have already arrived at October 12th, so I guess it is almost finished and things should be back to normal tomorrow. It still seems like very strange behavior; I'll email my account managers to ask what Splunk themselves know about this.