I have a number of real-time alerts scheduled that prior to upgrading to Splunk 6.1 would run continuously. Since the upgrade these jobs now stop alerting even though the jobs are visible in the Activity/Jobs window and are in status "Running 100%".
To get the jobs to start alerting again, I have to delete and recreate them.
Is this a known issue or have I missed a breaking change somewhere in the upgrade?
This appears to be a known issue in 6.1.1:
"After upgrading to 6.1 or 6.1.1, real-time searches (per-result or rolling window) may stop triggering alerts for matching events after running for more than 1 hour. Typically, this is noticed when these searches fail to trigger actions such as sending an email. (SPL-84357)"
http://docs.splunk.com/Documentation/Splunk/6.1.1/ReleaseNotes/KnownIssues
I've noticed the same problem. We just upgraded from 6.0.3 to 6.1.1. We have only 7 real-time jobs, so I wouldn't think that would overload the system.