Hi,
I have a scheduled alert that runs every 5 minutes. It was working perfectly (triggering an e-mail) until last week, but it stopped working yesterday. The error message appears in our logs, but Splunk didn't trigger the alert for it. When I checked the Splunk scheduler log, I see the error message below.
ERROR SavedSplunker - savedsearch_id="nobody;search;Hungalert", message="Unable to read the job status.". No actions executed
Please let me know the reason for this issue and how to avoid it in the future.
Thanks!
Check your internal logs for any other error messages:
index=_internal log_level=warn* OR log_level=err*
I just helped someone here with the same issue:
https://answers.splunk.com/answering/400691/view.html
It may also be that this is caused by having too many concurrent searches, among many other possible causes. It's a good idea to look for other errors in the logs and correct them all.
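To check whether the scheduler is skipping runs (for example, because of the concurrent-search limit), you can query the scheduler log directly. This is just a sketch; the field names below (status, reason, savedsearch_name) are the ones I see on my instance, so adjust if yours differ:

```
index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason
```

If your alert shows up here with a reason mentioning the maximum number of concurrent searches, that's likely your cause.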
Hi,
I don't see any related errors in splunkd.log using the search below.
index=_internal log_level=warn* OR log_level=err*
When I checked scheduler.log, I see that the job is being triggered on schedule, but it is not picking up the results. In scheduler.log for that alert, I see result_count=0.
How can I find out whether this is related to concurrent searches? Is there any other log we can check?
You don't see any RELATED errors, or you don't see ANY errors? Try to fix ALL the errors you have, starting with the most common one you see.
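To surface the most common errors quickly, something like this should work (a sketch; adjust the time range to cover when the alert stopped firing):

```
index=_internal log_level=ERROR
| stats count by component
| sort - count
```

Then drill into the top component to read the individual messages and fix them one by one.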
Hi,
What version of Splunk are you using?
I am using Splunk 6.2.
Can anyone please help me on this?