I'm using a query which returns an entire day's data:
index="index_name" source="source_name"
This search returns more than 10 million events.
My requirement is to receive an alert if the data drops below 10 million events.
But the alert triggers before the search has completed, because the search takes a long time, so every time the alert fires on partial results.
So is there any way I can trigger this alert only after the search has fully completed?
Ok, but what is the goal of your alert? If you just want to know whether you have fewer than 10M events, you chose the worst possible way to do so. Why fetch all events if you only want their count?
index=whatever source=something | stats count
is much, much better.
And since you use only indexed fields (or the index name, which technically isn't an indexed field, but we can assume it is for the sake of this argument), which you do, you can even do it lightning-fast as
| tstats count WHERE index=whatever source=something
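If the goal is just the "fewer than 10M events per day" check, a minimal sketch of the alert search (assuming 10,000,000 is your threshold and the alert's scheduled time range covers the day you care about) could be:
| tstats count WHERE index=whatever source=something
| where count < 10000000
Then set the alert to trigger when the number of results is greater than zero. The threshold and time range here are assumptions; adjust them to your schedule.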
Hi @gcusello ,
Thanks for your reply.
Actually, my search doesn't take that much time; it takes hardly 4-6 minutes to complete.
But the problem here is that the alert triggers before the search completes, that is, 2-3 minutes after the cron-scheduled time. Only 30-40% of the search has completed by the time the alert triggers, and I'm getting alerts every day. I need a solution so that the alert triggers only after the search completes.
So can you please help me with what to do in this case?
Thanks in advance.
This is what I am looking for.
Hi @tomnguyen1 ,
usually the easiest way is to create a scheduled search (usually the same search) with a shorter time period that saves results in a summary index, and then run the alert on the summary index.
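As a rough sketch of that idea (the summary index name, the source value, and the hourly schedule below are only placeholders, not anything from your environment), a scheduled search run every hour could be:
index="index_name" source="source_name" earliest=-1h@h latest=@h
| stats count
| collect index=my_summary source=hourly_event_count
and the alert, scheduled once a day, would then read the much smaller summary data:
index=my_summary source=hourly_event_count earliest=-1d@d latest=@d
| stats sum(count) AS total
| where total < 10000000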
Then you should try to optimize your search.
Let me know if we can help you further; it would help if you described your search in more detail.
Ciao.
Giuseppe
Hi @andy11 ,
if your search has a run time of more than 24 hours, there's probably an issue with it, even though 10M events aren't that many!
Probably your system doesn't have the required resources (CPUs and especially storage IOPS, at least 800), so your searches are too slow.
Anyway, you should apply the acceleration methods that Splunk offers, so please read my answer to a similar question: https://community.splunk.com/t5/Splunk-Search/How-can-I-optimize-my-Splunk-queries-for-better-perfor...
In other words, you should use an accelerated Data Model or a summary index and run your alert search on it.
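For example, with an accelerated data model covering these events (the data model name below is a placeholder, and the exact field names depend on how the model is defined), the alert search could look like:
| tstats summariesonly=true count FROM datamodel=My_Data_Model WHERE source="source_name"
| where count < 10000000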
Ciao.
Giuseppe