Splunk Search

Manual search and alert providing different numbers of events scanned/results output

CrossWordKnower
Explorer

Hi Splunkers! The issue I am having is that alerts return different results than a manual search of the same query over the same time frame. This happens repeatedly across different search queries using different functions: an alert is triggered, and when I view the results of the alert it shows, for example, 3000 events scanned and 2 results in the statistics section, while when I manually run the same search it shows 3500 events scanned and 0 results in the statistics section. I can't find any solution online, and this issue is causing several of my alerts to false alert.

Here is an example query that is giving me this issue, in case that is helpful:

index="index" <search> earliest=-8h@h

|stats count(Field) as Counter earliest(Field) as DataOld by FieldA, Field B

|where DataNew!=DataOld OR isnull(DataOld)
|table Counter, DataOld, Field A, Field B


Any help is very appreciated!


VatsalJagani
SplunkTrust

@CrossWordKnower - When you specify only earliest=-8h@h, latest defaults to now because you are not providing it. So the number of results differs every time you run the search again manually, because each run picks up the new events that have come in since.

 

Try using static values of earliest & latest, for example earliest=01/22/2025:00:00:00 latest=01/23/2025:00:00:00

In that scenario it should give exactly the same count, regardless of whether it is a manual search or an alert, or whenever you run the search.
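As a minimal sketch, using the same placeholder names as the example query above (the dates are only an illustration):

index="index" <search> earliest=01/22/2025:00:00:00 latest=01/23/2025:00:00:00
| stats count(Field) as Counter latest(Field) as DataNew earliest(Field) as DataOld by FieldA, FieldB
| where DataNew!=DataOld OR isnull(DataOld)
| table Counter, DataOld, FieldA, FieldB

Because the window is fixed, running this manually and running it as an alert should scan the same events, so the counts can be compared directly.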

 

I hope this is understandable. Kindly upvote if it helps!!!

CrossWordKnower
Explorer

Also, I forgot to include this in my example search, but most of my queries for these alerts use latest=@h, keeping the window the same.

 


isoutamo
SplunkTrust

Hi,

It's like @VatsalJagani said: when you don't set an exact end time for your search but you do have earliest, Splunk sets latest=now. You can check which earliest and latest values your alerts actually used from the _audit index. But I suppose that even then you will not always get exactly the same results. Why does this happen? When you are ingesting data there is always some delay; it can be less than a second, several minutes, or even longer, depending on your environment, your log sources, and how they are integrated into Splunk.

For that reason you should always use suitable earliest and latest values, with suitable buffers, on every alert. And if there are inputs where the latency varies too much, you probably need to create two series of alerts: one that tries to catch events as close to real time as possible, and a second one that takes care of the later-arriving events that the near-real-time alert missed.
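For example, to see the actual time window each run of an alert used, you could check the _audit index. This is only a sketch: replace the saved search name with your alert's name, and the exact audit field names may vary a bit between Splunk versions.

index=_audit action=search info=granted savedsearch_name="Your Alert Name"
| table _time savedsearch_name search_et search_lt

To get a rough idea of how much ingestion latency your data has, you can also compare index time to event time over your index, e.g.

index="index" <search> earliest=-24h@h latest=@h
| eval latency_sec = _indextime - _time
| stats avg(latency_sec) max(latency_sec) perc95(latency_sec)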

r. Ismo

CrossWordKnower
Explorer

Hi! Thanks for the response. As you predicted, the time frame is not where the issue with my search is, so it must be something to do with latency like you said. Is there any way to change how the search is run? And by two alerts, do you mean running the alerts on different schedules, or separate queries?


isoutamo
SplunkTrust

You could set in the alert e.g.

..... earliest=-1h@m-5m latest=@m-5m

and run this alert once an hour.

Then run a separate job as well: update those earliest and latest values, use e.g. a 6h span, and run it 4 times per day. Of course this depends on your alerts and needs.
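As a sketch of how the two windows could look (the buffers and schedules are only examples and depend on how much ingestion latency you actually see; the rest of the pipeline would stay the same stats/where/table as in the original query):

Near-real-time alert, scheduled once an hour, with a 5-minute buffer for late events:

index="index" <search> earliest=-1h@m-5m latest=@m-5m

Catch-up alert, scheduled every 6 hours, looking further back to pick up events that arrived after the hourly alert had already run:

index="index" <search> earliest=-7h@h latest=-1h@h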

CrossWordKnower
Explorer

Makes sense, thanks!

 
