Splunk Search

Why are we getting an excessive number of alerts?

Motivator

We have an all-time (real-time) alert that produced 315 alerts in the first eight hours of the day.

When we run the alert's search query over those same eight hours, we get only six events.

The alert itself is as simple as it gets -

index=<index name>
AND (category="Web Attack"
NOT src IN (<set of IPs>)
)

| table <set of fields>

What's going on here?

1 Solution

Champion

We perhaps need 1-2 more iterations, but I believe we are making progress 🙂

_index_earliest=-15m _index_latest=now index=<your index> | rest of the stuff...

Now, this should count only events that were indexed from 15 mins ago till now... a bit closer?
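
Put together with the filters from the original question, an untested sketch (placeholders kept as-is) would look like:

index=<index name> (category="Web Attack" NOT src IN (<set of IPs>)) _index_earliest=-15m _index_latest=now
| table <set of fields>

The _index_earliest / _index_latest modifiers constrain the search by _indextime rather than _time, so a run over the last 15 minutes should still pick up late-indexed events.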



Motivator

I think that's it - love it @Sukisen1981 !!!


Champion

Not an issue at all. Would still be interesting to see if default_backfill=false works, though 🙂 🙂


Motivator

haha - funny


Motivator

@Sukisen1981 - please convert to an answer...


Champion

Dunno which one to do, but I will convert the last comment into an answer.
I rarely get a chance to fiddle around with the backend (.conf files) as it is maintained by a different vendor... this default_backfill=false looks interesting... maybe I will play around with it in my local instance.


Motivator

_index_earliest=-15m _index_latest=now index=<your index> | rest of the stuff works like a charm so far ;-)


Champion

glad to know 🙂

I did notice that you had posted a question on the 'real' real-time alert issue; any good clues on that thread?
Unfortunately I got very busy with office work (on which, alas, I am dependent for my B&B) and could not catch hold of the admin team to tinker with default_backfill... which I have filed in my mind and will get to one day - the gods, winds and time permitting 🙂 🙂


Motivator

Still trying to figure out this real-time alert issue ;-)


Champion

Hi @danielbb - can you please post the alert configuration? Particularly interested in the real-time look-back window.


Motivator

Is this the right view @Sukisen1981?

[screenshot of the alert configuration]


Champion

Hi @danielbb - see this:
https://docs.splunk.com/Documentation/Splunk/7.3.1/Search/Specifyrealtimewindowsinyoursearch

Try setting the default_backfill to false and see?

[realtime]

default_backfill = <boolean>
* Specifies if windowed real-time searches should backfill events
* Defaults to true
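
If your admin team does let you try it, a minimal sketch - assuming you can edit $SPLUNK_HOME/etc/system/local/limits.conf (or the app's local directory) and restart Splunk - would be:

# $SPLUNK_HOME/etc/system/local/limits.conf
[realtime]
default_backfill = false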

Motivator

The doc says that for windowed real-time searches you can backfill, but we don't use windowed real-time searches.

From the UI, the only relevant option seems to be Expires, set to 10 hours. Can it have anything to do with our issue?

Btw, where can we set "windowed" real-time searches versus "all-time" real-time searches?

[screenshot of the alert settings]


Champion

Hi @danielbb
May I ask why you need a real-time alert in the first place? As a rule of thumb, it is better to avoid real-time alerts.
Going by the frequency of the hits you mentioned earlier (6 events in 8 hrs), can you not make it a scheduled alert running, say, at an hourly frequency, or even on a 3-minute schedule?
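
For illustration only - a rough sketch of what such a scheduled alert could look like in savedsearches.conf, with a made-up stanza name and the placeholders from your question:

# $SPLUNK_HOME/etc/apps/<your app>/local/savedsearches.conf
[Web Attack alert - scheduled]
search = index=<index name> (category="Web Attack" NOT src IN (<set of IPs>)) | table <set of fields>
enableSched = 1
cron_schedule = 0 * * * *
dispatch.earliest_time = -1h
dispatch.latest_time = now
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0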

Motivator

Ok, makes perfect sense; however, these events have an indexing delay that we can't avoid. For these 6 events the delay varies between 1.7 and 12.32 minutes.

So, is there a way to schedule these "regular" alerts based on _indextime? Meaning, we'd have the alert fire for all events that got indexed in the past 15 minutes, for example.

0 Karma

Champion

Interesting, try this in search:
index=<your index> <your search>
| eval indextime=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| eval time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| stats count by indextime, time
Is there a 'proper' capture based on _indextime or _time?
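
Equivalently - just a sketch with a placeholder index - you can compute the lag directly in minutes:

index=<your index> <your search>
| eval lag_minutes=round((_indextime - _time)/60, 2)
| eval indexed_at=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| table _time, indexed_at, lag_minutes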


Motivator

It shows -

[screenshot of the statistics tab]


Champion

check the statistics tab carefully...any difference in minutes between indextime and _time in the table?


Motivator

Not on the first page, but we have lags for some of the events.


Champion

Ok, one last test, and sorry - I should have asked before; you said there are only 6 events in the last eight hours, so keep your search criteria as-is and just add these 2 evals before your table:
| eval indextime=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| eval time=strftime(_time, "%Y-%m-%d %H:%M:%S")
In the table fields add indextime and time along with the rest.
What I am asking now is: we should have just 6 events, and in these 6 events is there a difference between indextime and time, matching what you have described - roughly 1.7 ~ 12 mins delay?
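
Put together - using the placeholders from your original question - the check would look something like this (untested sketch):

index=<index name> (category="Web Attack" NOT src IN (<set of IPs>))
| eval indextime=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| eval time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table indextime, time, <set of fields>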
