Splunk Search

Help with multiple searches in one SPL needed

damucka
Builder

Hello,

I have the following search:

index=_internal sourcetype=scheduler savedsearch_name="Anomaly Detection - new-" earliest=-5m latest=now
| convert ctime(scheduled_time) as SCHEDULE
| convert ctime(dispatch_time) as DISPATCH 
| stats sum(result_count) as result_count_sum 
| eval trigger=case(result_count_sum=0, "do_not_trigger")

As this is just an auxiliary search for experimenting with throttling, I would like it to set the "trigger" variable without returning any results itself; otherwise my alert will react to those results and fire based on their number.

How would I do this?

Kind regards,
Kamil


DavidHourani
Super Champion

Hi @damucka,

You can set your alert to trigger on a custom condition and define that condition on the sum, giving it a threshold.

Have a look here :
https://docs.splunk.com/Documentation/Splunk/latest/Alert/AlertTriggerConditions#How_searches_and_tr...
There's a worked example there that uses a count, but you can replace that with a sum; it works the same way.
In your case your whole search can be replaced by:

index=_internal sourcetype=scheduler savedsearch_name="Anomaly Detection - new-" earliest=-5m latest=now
 | stats sum(result_count) as result_count_sum

The trigger condition can then be applied to result_count_sum.
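
For example, under the alert's trigger settings you could choose a custom trigger condition and make it a small search against that single result row, something like this (a sketch; flip the comparison depending on which way round you want the alert to fire):

search result_count_sum=0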

Let me know how that works out for you.

Cheers,
David


damucka
Builder

Hello David,

Thank you.
Actually the original search behind my alert is:

| noop search_optimization=false| dbxquery query="call \"ML\".\"ML.PROCEDURES::PR_ALERT_TYPE_ALL_HOSTS\"('BWP', to_timestamp(to_nvarchar(now(), 'YYYY-MM-DD HH24:MI'),'YYYY-MM-DD HH24:MI'), ?)" connection="HANA_MLBSO"

This returns 3 rows from the DB table, depending on whether an anomaly was encountered or not.
Then I would like to add logic that checks the last 5 runs of this alert and triggers the next alert only if there were no results (no anomaly) in the last 5 minutes. That is how I came to the following SPL, which is the subject of the question here:

index=_internal sourcetype=scheduler savedsearch_name="Anomaly Detection - new-" earliest=-5m latest=now
| convert ctime(scheduled_time) as SCHEDULE
| convert ctime(dispatch_time) as DISPATCH 
| stats sum(result_count) as resultcount
| eval trigger=case(resultcount>0, "do_not_trigger")

So, what I need now is:
1/
A way to put both searches together into one alert so that:
- the trigger is set
- the trigger does not appear in the result, because I do not want to present it: just the output of the actual search

2/
A custom trigger condition where the alert fires only when:
- the trigger condition is met
- the number of results from the actual search is > 0

Could you help me achieve this?

Kind Regards,
Kamil


DavidHourani
Super Champion

Hi Kamil,

Sure, this is possible, but it might run slowly depending on how long your dbxquery takes. The logic is like this (there's a rough sketch below):
1- Run your first search.
2- Use the appendcols command to append the resultcount field from the second search to the first one: https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Appendcols
3- Trigger on the resultcount field and use the original data for whatever you need to display in the alert.
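
A rough sketch of how that could look, reusing the searches from earlier in the thread (treat it as a starting point rather than a finished alert; the field names are the ones you already use):

| noop search_optimization=false| dbxquery query="call \"ML\".\"ML.PROCEDURES::PR_ALERT_TYPE_ALL_HOSTS\"('BWP', to_timestamp(to_nvarchar(now(), 'YYYY-MM-DD HH24:MI'),'YYYY-MM-DD HH24:MI'), ?)" connection="HANA_MLBSO"
| appendcols [search index=_internal sourcetype=scheduler savedsearch_name="Anomaly Detection - new-" earliest=-5m latest=now
    | stats sum(result_count) as resultcount]

Since the subsearch returns a single row, resultcount only gets attached to the first row of the dbxquery output, which is enough for a custom trigger condition such as search resultcount=0.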
Let me know if that makes sense to you.


damucka
Builder

Hello David,

Based on your reply I created something like this:

| noop search_optimization=false| dbxquery query="call \"ML\".\"ML.PROCEDURES::PR_ALERT_TYPE_ALL_HOSTS\"('BWP', to_timestamp(to_nvarchar('2019-05-28 16:39:05', 'YYYY-MM-DD HH24:MI'),'YYYY-MM-DD HH24:MI'), ?)" connection="HANA_MLBSO"

| appendcols [| noop search_optimization=false| dbxquery query="call \"ML\".\"ML.PROCEDURES::PR_ALERT_TYPE_ALL_HOSTS\"('BWP', to_timestamp(to_nvarchar('2019-05-28 16:39:05', 'YYYY-MM-DD HH24:MI'),'YYYY-MM-DD HH24:MI'), ?)" connection="HANA_MLBSO"| stats count as rows | eval totalCount=rows]

| appendcols [search index=_internal sourcetype=scheduler savedsearch_name="Anomaly Detection - new-" earliest=-5m latest=now
| convert ctime(scheduled_time) as SCHEDULE
| convert ctime(dispatch_time) as DISPATCH 
| stats sum(result_count) as resultcount
| eval trigger=case(resultcount=0, "1",1<2,"0")]

| rename rows AS _rows
| rename totalCount AS _totalCount
| rename resultcount AS _resultcount
| rename trigger AS _trigger

This works, but unfortunately I am running the original search twice.
The first appendcols is there "just" to get the event count into totalCount, because this should also be part of the custom alert trigger condition together with the trigger: both _trigger=1 and _totalCount > 0 should hold at the same time.
I would hope there is some easier way to get the event count than running the search again, but unfortunately I was not able to find it. When I try it with appendpipe + stats count, it does not work.

Would you have idea how to solve this?

Kind Regards,
Kamil


DavidHourani
Super Champion

Yes there is! 🙂 Instead of running the search again, use eventstats to get the count. Something like ... | eventstats count should do the trick 🙂
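Applied to your combined search, that could look roughly like this (a sketch only: the first appendcols is replaced by eventstats, now() is used as in your original alert search, and your underscore renames are kept to hide the helper fields):

| noop search_optimization=false| dbxquery query="call \"ML\".\"ML.PROCEDURES::PR_ALERT_TYPE_ALL_HOSTS\"('BWP', to_timestamp(to_nvarchar(now(), 'YYYY-MM-DD HH24:MI'),'YYYY-MM-DD HH24:MI'), ?)" connection="HANA_MLBSO"
| eventstats count as totalCount
| appendcols [search index=_internal sourcetype=scheduler savedsearch_name="Anomaly Detection - new-" earliest=-5m latest=now
    | stats sum(result_count) as resultcount
    | eval trigger=case(resultcount=0, "1",1<2,"0")]
| rename totalCount AS _totalCount, resultcount AS _resultcount, trigger AS _trigger

The custom trigger condition could then be something like search _trigger=1 AND _totalCount>0 (worth verifying that the underscore-prefixed fields are still visible to the trigger condition search).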
Let me know if it helps and please upvote and accept if it works for you 🙂


kamlesh_vaghela
SplunkTrust

@damucka
As you have mentioned earliest=-5m latest=now, your search will return results if the "Anomaly Detection - new-" savedsearch is executing every 5 minutes or less.

Can you please confirm it?


damucka
Builder

Yes, the above search will always return results because the alert runs every minute. The idea is that I take the result_count column, sum it and set the "trigger" accordingly. The question is whether there is any way to set the trigger based on the sum but not produce any other "output" / not bring back any results ...


richgalloway
SplunkTrust

Because of the stats command, the search will return a single event with one field: result_count_sum. The following eval adds the trigger field to that same event.

BTW, the convert commands are not needed since the SCHEDULE and DISPATCH fields are never used.
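
Trimmed down, the search from the top of the thread would then be just:

index=_internal sourcetype=scheduler savedsearch_name="Anomaly Detection - new-" earliest=-5m latest=now
| stats sum(result_count) as result_count_sum
| eval trigger=case(result_count_sum=0, "do_not_trigger")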

---
If this reply helps you, Karma would be appreciated.