Splunk Search

How can I limit the triggering of a Splunk alert to once per time range, to avoid several similar results?

elmadi_fares
Loves-to-Learn Everything

I have a problem with an alert on a Splunk search that is scheduled with a cron job running as follows:

(screenshot: cron schedule of the scheduled search)
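For reference, the schedule amounts to something like the following cron expression (run at 50 minutes past each hour, from 08:50 to 21:50):

50 8-21 * * *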

Search query:

index=pdx_pfmseur0_fxs_event sourcetype=st_xfmseur0_fxs_event
| eval
trackingid=mvindex('DOC.doc_keylist.doc_key.key_val',mvfind('DOC.doc_keylist.doc_key.key_name', "MCH-TrackingID"))
| rename gxsevent.gpstatusruletracking.eventtype as events_found
| rename file.receiveraddress as receiveraddress
| rename file.aprf as AJRF
| table trackingid events_found source receiveraddress AJRF
| stats values(trackingid) as trackingid, values(events_found) as events_found, values(receiveraddress) as receiveraddress, values(AJRF) as AJRF by source
| stats values(events_found) as events_found, values(receiveraddress) as receiveraddress, values(AJRF) as AJRF by trackingid
| search AJRF=ORDERS2 OR AJRF=ORDERS1
| stats count as total
| appendcols [search index=idx_pk8seur2_logs sourcetype="kube:container:8wj-order-service" processType=avro-order-create JPABS | stats dc(nativeId) as rush]
| appendcols [search index=idx_pk8seur2_logs sourcetype="kube:container:9wj-order-avro-consumer" flowName=9wj-order-avro-consumer customer="AB" (message="HBKK" OR message="MANU") | stats count as hbkk]
| eval gap = total-hbkk-rush
| table gap, total, rush
| eval status=if(gap>0, "OK", "KO")
| eval ressource="FME-FME-R:AB"
| eval service_offring="FME-FME-R"
| eval description="JPEDI - Customer AB has an Order Gap \n \nDetail : JPEDI - Customer AB has an Order Gap is now :" + gap + "\n\n\n\n;support_group=AL-XX-MAI-L2;KB=KB0078557"
| table ressource description gap total  rush  description service_offringe_offring
​

Cron schedule configured on this alert:
(screenshot: alert cron configuration)

 

I received three alerts containing the same result, per the cron schedule:
at 17:50, 18:50 and 21:50, each with the same result of gap=9.
(screenshot: triggered alerts showing gap=9)

 


Is there a solution to limit the alert so that it triggers only once per time interval:
from 08:50 to 10:50,
from 10:50 to 15:50,
and from 15:50 to 21:50?

ITWhisperer
SplunkTrust

Your cron expression determines when the report is executed, not the period it covers - in your scenario, the report will run at 50 minutes past the hour for the hours 8am to 9pm, i.e. 8:50 to 21:50. You should then look at throttling of the alert. You may need to have 3 reports, one for each period, so that a new throttle kicks in for each period.
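As a rough sketch (stanza names, window boundaries and suppress periods here are only examples, and each stanza still needs your full search string plus your usual trigger condition and actions), the three reports could look something like this in savedsearches.conf:

# Three scheduled alerts, one per window. Each one only runs inside its
# own window and is throttled for at least the length of that window,
# so it can trigger at most once per window.

[order_gap_0850_1050]
# runs at 08:50, 09:50 and 10:50
cron_schedule = 50 8-10 * * *
enableSched = 1
alert.suppress = 1
alert.suppress.period = 3h

[order_gap_1050_1550]
# runs at 11:50 through 15:50 (the 10:50 run belongs to the first report)
cron_schedule = 50 11-15 * * *
enableSched = 1
alert.suppress = 1
alert.suppress.period = 5h

[order_gap_1550_2150]
# runs at 16:50 through 21:50
cron_schedule = 50 16-21 * * *
enableSched = 1
alert.suppress = 1
alert.suppress.period = 6h

The same suppression can also be set from the UI via the alert's Throttle option ("Suppress triggering for").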


elmadi_fares
Loves-to-Learn Everything

Yes, I need to have 3 reports.


elmadi_fares
Loves-to-Learn Everything

I believe that, to reduce the frequency of alert triggering, I have to configure a period during which the results are suppressed?
