Splunk Search

How can I trigger an alert only after the search completes? With huge event counts, the alert triggers before the search finishes. Please help.

andy11
Observer

I'm using a query which returns entire day data :


index="index_name" source="source_name"

This search returns a huge number of events, over 10 million.

My requirement is that if the count drops below 10 million, I should receive an alert.

But the alert is triggering before the search completes: the search takes a long time, and the alert fires every time before it finishes.

Is there any way to trigger this alert only after the search has fully completed?


PickleRick
SplunkTrust

Ok, but what is the goal of your alert? If you just want to know whether you have fewer than 10M events, you've chosen the worst possible way to do it. Why fetch all events if you only want their count?

index=whatever source=something | stats count

is much, much better.

And since you use only indexed fields (the index name technically isn't an indexed field, but we can assume it is for the sake of this argument), you can make it lightning-fast with

| tstats count WHERE index=whatever source=something
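To turn either count into an alert, the search can be made to return a row only when the count falls below the threshold; the 10,000,000 figure below is taken from the original poster's stated requirement, and the index/source names are placeholders (a sketch, not tested against your data):

| tstats count WHERE index=whatever source=something
| where count < 10000000

The alert can then use the trigger condition "Number of Results is greater than 0", so it fires only when the count is actually below 10M.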

andy11
Observer

Hi @gcusello ,

Thanks for your reply.

Actually, my search doesn't take that long; it completes in about 4-6 minutes.

The problem is that the alert triggers before the search completes, i.e. 2-3 minutes after the scheduled cron time, when only 30-40% of the search has finished, so I'm getting alerts every day. I need the alert to trigger only after the search has completed.

So can you please help me with what to do in this case?

Thanks in advance.


tomnguyen1
Explorer

This is what I am looking for


gcusello
SplunkTrust

Hi @tomnguyen1 ,

usually the easiest way is to create a scheduled search (usually the same search) with a shorter time period that saves its results in a summary index, and then to run the alert on the summary index.
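As a sketch of that approach (the index names and the hourly schedule here are placeholder assumptions, not from the thread): a scheduled search could write periodic counts into a summary index, e.g.

index=index_name source=source_name earliest=-1h | stats count | collect index=my_summary

and the alert would then run a much cheaper search over the summary index, e.g.

index=my_summary earliest=-24h | stats sum(count) AS total | where total < 10000000

Because the alert search touches only the small summary index, it completes quickly and cannot fire before its own data is ready.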

Then you should try to optimize your search.

Let me know if we can help further; please describe your search in more detail.

Ciao.

Giuseppe

gcusello
SplunkTrust

Hi @andy11 ,

if your search has a run time of more than 24 hours, there's probably an issue with it, even if 10M events aren't that many!

Your system probably lacks the required resources (CPUs and, especially, storage IOPS, at least 800), so your searches are too slow.

Anyway, you should apply the acceleration methods that Splunk offers, so please read my answer to a similar question: https://community.splunk.com/t5/Splunk-Search/How-can-I-optimize-my-Splunk-queries-for-better-perfor...

In other words, you should use an accelerated Data Model or a summary index and run your alert search on it.
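For example, assuming the relevant events were mapped into an accelerated data model (the data model and dataset names below are placeholders, not from the thread), the alert search could query the acceleration summary with tstats, which avoids scanning raw events entirely:

| tstats summariesonly=true count FROM datamodel=My_DataModel.My_Dataset WHERE My_Dataset.source="source_name"
| where count < 10000000

summariesonly=true restricts the search to already-accelerated data, so it stays fast at the cost of ignoring events not yet summarized.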

Ciao.

Giuseppe
