I want to create a Splunk alert for a cron job. It should trigger when the cron job fails or does not run. Can anyone help me with this?
This is the second time you've asked me whether what you've done is correct. As I said last time, if you get the results you want, then the search is correct.
I did not see any sample images.
Okay Rich,
Here I need to write one more query, for the "no logs" condition.
For the full index jobs we can dramatically simplify the query. Since the job only runs once a day, we just need a simple query (something like sourcetype=hybris_console "full-wcpindex*-cronjob") that runs once a day and alerts if there are no logs. The gold-plated solution would be one query that looks for all of the jobs and alerts if any of them return 0 events.
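One common pattern for that "one query over all jobs" idea is to join the actual event counts against a list of expected jobs, so that jobs with zero events still show up in the results. A rough sketch, where the lookup file expected_cronjobs.csv (with a job_name column) and the rex extraction pattern are both assumptions you would need to adapt to your data:

sourcetype=hybris_console "*-cronjob"
| rex field=_raw "(?<job_name>\S+-cronjob)"
| stats count by job_name
| append [| inputlookup expected_cronjobs.csv | eval count=0]
| stats sum(count) as count by job_name
| where count=0

Scheduled once a day over the last 24 hours, this returns a row only for jobs that produced no events, so the alert can simply fire on "number of results > 0".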
Finding something that is not there can be challenging, but can be done. It's probably best done in a separate thread.
This is the "skipped search" problem. Configure a Monitoring Console and run the Health Checks, then look at the results of the Skipped Searches check. You can steal that search and use it as an alert.
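As a sketch of the kind of search that check runs: the scheduler log in the _internal index records skipped searches with status and reason fields, so something along these lines (time range and trigger threshold are up to you) could serve as the alert:

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason

This is an approximation of the Health Check search rather than a copy of it; verify the field names against your own scheduler.log events.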
Hi Woodcock,
Can you please help me write a query for the condition I mentioned above in Burwell's comments?
I cannot, because you did not answer her VERY IMPORTANT question: what exactly do you mean by "cron job is not successful"? Are you talking about Splunk saved searches (which are scheduled with cron syntax)? Or are you talking about actual cron jobs running on a *NIX server? If the former, you already have the answers. If the latter, start with what @richgalloway said in his answer. But first: answer the question!
Hi Woodcock,
Apologies; I thought I was following up with Rich and that he was answering my query as well. I believe I have answered every question Rich asked.
Now, to your question, Woodcock: the cron job runs on a server, and it is neither a Splunk saved search nor a *NIX system cron job. The server is Linux, but this is a Hybris cron job that runs every day and writes logs to /opt/sap/hybris/logs/console.log. I need my search to trigger an alert if the job fails.
I created a query for this that seems very complex; it is the one I was discussing with Rich. I also need one more query, which should be very simple: if there are no logs related to the cron job, it should trigger an alert. I discussed this with Rich as well.
The doubt I was following up on with @richgalloway was a little different, so I asked you all here with a different query.
I would also like to thank @richgalloway for his continuous responses; I really appreciate your help, @richgalloway.
Hi @gpunjabi
Are you asking about scheduled jobs in Splunk and finding out the status?
If so:
index=_internal source="/opt/splunk/var/log/splunk/scheduler.log" status!=delegated* status!=success
| stats count by savedsearch_name,status
That will show you failed scheduled jobs.
If you have a search head cluster with a deployer, you can get a lot more information about scheduled jobs, including the reason your jobs were skipped. I use the Monitoring Console; I wrote about using it here: https://answers.splunk.com/answers/514181/skipped-searches-on-shc.html
You can find skipped jobs, long-running jobs, etc.
Hi Burwell,
Sorry I didn't answer your question; apologies for not describing it clearly. I thought Rich was following up.
My question relates to a cron job running on a server; it is neither a Splunk saved search nor a *NIX system cron job. The server is Linux, but this is a Hybris cron job that runs every day and writes logs to /opt/sap/hybris/logs/console.log. I need my search to trigger an alert if the job fails.
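Assuming that console.log is already being indexed into Splunk, the failure case might look like the search below. The sourcetype name and the failure keywords are assumptions: Hybris typically logs a cron job's result as SUCCESS, FAILURE, or ABORTED, but you should check your actual log lines before relying on these strings.

sourcetype=hybris_console source="/opt/sap/hybris/logs/console.log" "cronjob" ("FAILURE" OR "ABORTED" OR "ERROR")

The "job never ran" case can be caught by scheduling a daily search over the last 24 hours for any cron job log lines and alerting when the count is 0:

sourcetype=hybris_console source="/opt/sap/hybris/logs/console.log" "cronjob"
| stats count
| where count=0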