Splunk Search

Alert: set trigger condition on 1st search query and show the stats from 2nd query in the alert email

afulamba
Explorer

Hello Splunkers,
This is my first post on this forum, and I need some help.
I have to set up an alert that uses two search queries. The 1st query decides the trigger condition, and the alert email shows the stats/table from the 2nd query. Is it doable?
e.g.
My 1st search:
index=xxx sourcetype=X
| eventstats count(eval(Total_time>5000 AND Total_time<10000)) as T5 count(eval(Total_time>=10000)) as T10
| where T5>40 OR T10>20

I have to trigger the alert if T5>40 or T10>20

In the alert I have to show the stats from a 2nd query on a different index.

2nd query:

index=YYY sourcetype = Y
:
:
|table fld1 fld2 fld3

If required, can I add a 3rd query as well to show the table?

Regards,
Amit

1 Solution

DalJeanis
Legend

You haven't explained what the connection is between the first and second query. There is literally no link. There are various ways to accomplish this, but the right way depends on that relationship.

If you do not need the event-level data from the first query, then you should use stats instead of eventstats. Eventstats retains all the individual events, whereas stats keeps only the summary data.
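For instance, the first search above rewritten with stats keeps just the two summary counts rather than every matching event:

 index=xxx sourcetype=X
 | stats count(eval(Total_time>5000 AND Total_time<10000)) as T5
         count(eval(Total_time>=10000)) as T10
 | where T5>40 OR T10>20

This returns at most one summary row, which is all the trigger condition needs.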


This assumes there is literally no relationship, and that you don't need the information from the first search, just the second search:

 index=YYY sourcetype = Y
 ... your other search stuff ...
 | table fld1 fld2 fld3

 | rename COMMENT as "execute the alert search and set field keepme to zero if there are no alerts"
 | eval keepme = [ search 
    index=xxx sourcetype=X 
    | stats count(eval(Total_time>5000 AND Total_time<10000)) as T5 
            count(eval(Total_time>=10000)) as T10 
    | where T5>40 OR T10>20
    | stats count as search
    ]

 | rename COMMENT as "kill all the records if keepme is zero"
 | where keepme>0
 | fields - keepme

This assumes the first search (the inside search) returns a field "foo" and only the records from the second (outside) search matching a valid foo should be alerted on. The join handles eliminating records that don't qualify:

 index=YYY sourcetype = Y
 ... your other search stuff ...
 | table foo fld1 fld2 fld3

 | rename COMMENT as "execute the alert search and set field keepme to zero if there are no alerts"
 | join foo [ search 
    index=xxx sourcetype=X 
    | stats count(eval(Total_time>5000 AND Total_time<10000)) as T5 
            count(eval(Total_time>=10000)) as T10 by foo 
    | where T5>40 OR T10>20
    | table foo 
    | eval keepme = "keepme"
    ]



afulamba
Explorer

Hi DalJeanis,
There is no direct relation between the two queries. However, the values I get in T5 and T10 alert me to the issue, and the 2nd query shows the data from the other supporting system. So when the alert fires, I want the alert email itself to show whether or not the issue is coming from the backend support system.
To your question on eventstats: yes, I can also use stats, since it returns just the number of events, which is all I need, not the entire data.
The first query only builds the trigger condition, and based on that I put the data from the 2nd query in the alert.
I will go through the query you provided and update the post.
Thank you!!

DalJeanis
Legend

@afulamba - then use the first query I gave you.


afulamba
Explorer

Thank you :-). It helps.
