Alerting

Alert with two levels of checks - one to check if a job has run, the other to compute a count

stanwin
Contributor

hi

I have an alert with multiple checks, like below:

1> Check if a job has completed.
2> If the job has completed, calculate the count of categories and the difference between today's export count and the four-day average.
3> If the count difference is less than -10 or greater than 10, alert.

The query for 2 and 3 is ready, with 3 being done as a custom alert condition in the alert definition.
2 is handled by the query below.

index=live earliest="-4h" latest=now categoryExport
| stats dc(category) as count_4h
| appendcols [ search index=live earliest=-4d latest=-12h categoryExport | stats count(category) as count_4d by date_mday | eventstats avg(count_4d) as avgCount4d | eval avgCount4d = round(avgCount4d,0) ]
| eval difference = avgCount4d - count_4h
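For 3, the custom trigger condition applied to these results might be a one-line sketch like this (using the -10/+10 thresholds from the requirement above; the exact condition in the alert definition may differ):

search difference < -10 OR difference > 10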

However, I only need to run this alert check when condition 1 is satisfied. It is as simple as a log that says 'job completed'.

I thought of using searchmatch, but that doesn't give me an overall summarised condition of whether the export has occurred or not.
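For context, searchmatch evaluates against each event individually, so a sketch like the one below (with the search string assumed from the 'job completed' log mentioned above) only flags individual events and still needs a stats pass to collapse them into one overall value:

index=live earliest="-4h" latest=now
| eval job_done = if(searchmatch("job completed"), 1, 0)
| stats max(job_done) as job_done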

How do I fit check condition 1 in here? If I used an AND condition, I would lose the data I need for 2.

stanwin
Contributor

I was able to refine it further, building on the existing logic to add that condition as well:

index=live earliest="-4h" latest=now categoryExport
| stats dc(category) as count_4h
| appendcols [ search index=live earliest=-4d latest=-12h categoryExport | stats count(category) as count_4d by date_mday | eventstats avg(count_4d) as avgCount4d | eval avgCount4d = round(avgCount4d,0) ]
| eval difference = avgCount4d - count_4h
| appendcols [ search index=live earliest="-4h" latest=now source="/wasappdata/logs/*.log" "Call to TaskRunner for" Taxonomy : Complete* | stats count as job_completed_count | eval job_status = if(job_completed_count > 0, 1, 0) ]

Now job_status can be referenced in the custom alert condition to ONLY send the alert when job_status is 1, i.e. when the completed status has been received.
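With that in place, the custom trigger condition can gate on both fields; a sketch combining the job_status gate with the earlier threshold check (thresholds assumed from the original requirement):

search job_status=1 AND (difference < -10 OR difference > 10)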

woodcock
Esteemed Legend

Take a look at the general approach used to solve this problem and apply the same thing to yours; it should work just fine. The idea is that you use an earlier part of the search to decide whether or not to run a later part of the search, all in the same search string. The later part always runs, but it is short-circuited to return an error, based on the decision of the earlier part, if it is decided that it should not do anything more.

https://answers.splunk.com/answers/261163/is-there-a-way-i-can-schedule-a-saved-search-to-ru.html
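The linked answer has the full details, but the gating idea can also be sketched with the map command: map runs its subsearch once per row it receives, so if the gate search is filtered down to zero rows, the later search never runs. The index, source, and search strings below are assumptions based on this thread:

index=live earliest="-4h" latest=now source="/wasappdata/logs/*.log" "Call to TaskRunner for" "Complete"
| stats count as job_completed_count
| where job_completed_count > 0
| map search="search index=live earliest=-4h latest=now categoryExport | stats dc(category) as count_4h"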

stanwin
Contributor

Interesting, woodcock!

I was able to figure out logic along the same lines as my existing query.

Thanks for your help; it's an interesting query, and a good opportunity to test out the map command as well.
