Security

Detect a successful brute-force attack (multiple failed logins followed by a successful login)

Nishit9
New Member

I have created the following query for my database, but it returns all events during the span instead of generating an alert after a successful login.

index=* (EventCode=4624 OR EventCode=4625) 
| bin _time span=5m as minute 
| stats list(Keywords) as Attempts, count(eval(match(Keywords,"Audit Failure"))) as Failed,
     count(eval(match(Keywords,"Audit Success"))) as Success by minute Account_Name 
| where mvcount(Attempts)>=10 AND Success=1 AND Failed>=2 
| eval minute=strftime(minute,"%H:%M")
0 Karma
1 Solution

lguinn2
Legend

I suggest this revision:

 index=* (EventCode=4624 OR EventCode=4625) 
 | bin _time span=5m as minute 
 | stats count(Keywords) as Attempts, count(eval(match(Keywords,"Audit Failure"))) as Failed,
      count(eval(match(Keywords,"Audit Success"))) as Success by minute Account_Name 
 | where Attempts>=10 AND Success>0 AND Failed>=2 
 | eval minute=strftime(minute,"%H:%M")

You will need to create a trigger for the alert on "number of results > 0"

Also, if you are setting this up as an alert, how often are you running this search and over what timerange?


elmiguelo123
Engager

Maybe like this:

 index=* (EventCode=4624 OR EventCode=4625) 
 | stats count as Attempts, count(eval(EventCode=4625)) as Failed,
      count(eval(EventCode=4624)) as Success by Account_Name 
 | where NOT like(Account_Name,"%$") AND Failed>1 AND Success>0 
 | sort - Failed

0 Karma


Rshoufi
Explorer

@lguinn Thank you for a straightforward answer. I am trying to create a similar alert, as requested by a customer that cannot run Splunk ES in their environment, so it takes a lot of custom search queries to get this alert right.

Now my issue with the search above is that the Account_Name field returns system-generated login attempts from DC users. I tried to exclude those accounts with Account_Name!="*$", so as to exclude any account names ending in a dollar sign (domain accounts in this case). The end goal is to show only the local user accounts, so I can create an alert without all the noise that comes along with the domain accounts.
The ways I'm currently trying just break the query and error out until I remove my addition. Any and all help is greatly appreciated.
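
As a hedged sketch (field names taken from the thread, not verified against this data), the dollar-sign accounts can be excluded either with a wildcard in the base search or with like() in the where clause - note that like must be written without a space before its opening parenthesis, which is a common cause of the parse error:

 index=* (EventCode=4624 OR EventCode=4625) Account_Name!="*$"
 | bin _time span=5m as minute
 | stats count(Keywords) as Attempts, count(eval(match(Keywords,"Audit Failure"))) as Failed,
      count(eval(match(Keywords,"Audit Success"))) as Success by minute Account_Name
 | where Attempts>=10 AND Success>0 AND Failed>=2

Equivalently, drop the wildcard from the base search and add NOT like(Account_Name,"%$") to the where clause.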

0 Karma

Nishit9
New Member

It works, but I swapped the positions of Success and Failed, because otherwise it checks for a successful login first and then looks for failed logins. So:
| where Attempts>=10 AND Failed>=2 AND Success>0

And,
I have selected a time range of one day, and it will run in real time.

0 Karma

lguinn2
Legend

Your search actually tests for this:

Within a 5-minute span, if a single user has
- at least 10 login attempts (both successful and failed)
- and at least 2 login failures
- and at least 1 login success
then report it.

The where command does not supply any ordering logic.
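
If the ordering genuinely matters, a sketch using streamstats (hedged: streamstats does respect event order, but its time_window argument requires events sorted by _time, hence the explicit sort) would be:

 index=* (EventCode=4624 OR EventCode=4625)
 | sort 0 _time
 | streamstats time_window=5m count(eval(EventCode=4625)) as RecentFailures by Account_Name
 | where EventCode=4624 AND RecentFailures>=2

Here a successful login (4624) is reported only when that account accrued at least 2 failures in the preceding 5 minutes, so the failures genuinely precede the success.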

0 Karma

lguinn2
Legend

This is a pretty expensive search in real-time. Are you sure you really need real-time? You could set it up like this and it would be much more efficient:

  • schedule the search to run once per minute
  • have the search run over the last 5 minutes

Change the search to the following:

index=* (EventCode=4624 OR EventCode=4625)
| stats count(Keywords) as Attempts, count(eval(match(Keywords,"Audit Failure"))) as Failed,
count(eval(match(Keywords,"Audit Success"))) as Success,
earliest(_time) as FirstAttempt, latest(_time) as LatestAttempt by Account_Name
| where Attempts>=10 AND Success>0 AND Failed>=2
| eval FirstAttempt=strftime(FirstAttempt,"%x %X")
| eval LatestAttempt=strftime(LatestAttempt,"%x %X")

0 Karma

printul77700
Explorer

Hi, why do you say it is expensive to run in real time? Splunk mentions that real-time searches are lower in priority than continuous ones, and they can be skipped with no way to ever see the interval you skipped. So if you want your search not to stress the system, you would choose real time instead of continuous, and because you don't want to lose events, you would look further back in time on every run and hope that at least one run catches those events. If instead you choose continuous, the search will not be skipped but will run anyway; the question is what happens if it simply cannot run because of performance issues - would it still cover the right time span?

I am not an expert in searches, but if Splunk says real time puts less pressure on the system than continuous, and he wants to use real time rather than continuous, how would he write his search? Also, what happens with your search if it fails to run - will it just possibly lose some events?

For sure, the rule as originally asked will miss events if they don't fall within the same 5-minute span: even with 10 attempts and 2 failures inside a 5-minute interval, they may not land in the same interval defined by span=5m. Looking forward to hearing an explanation. Thank you.

0 Karma

richgalloway
SplunkTrust
SplunkTrust

Splunk uses "real-time search" in two contexts - Core Splunk and Splunk Enterprise Security (ES). This question refers to Core Splunk for which the answer is correct. The documentation you cite pertains to ES where real-time searches are more efficient than continuous searches.

---
If this reply helps you, an upvote would be appreciated.
0 Karma

printul77700
Explorer

But even so, the documentation here is one of the worst; I would almost say contradictory:
1. "prioritize current data" vs. "data completion" - what is one supposed to understand from such abstract terms?
2. "As excessive failed logins matter most when you hear about them quickly" vs. "If you care more about identifying all excessive failed logins in your environment" - again, what is the difference between these two statements?
It might be because I am not a native English speaker, but I am not so sure it is my fault.

Configure a schedule for the correlation search
Correlation searches can run with a real-time or continuous schedule.
• Use a real-time schedule to prioritize current data and performance. Searches with a real-time schedule are skipped if the search cannot be run at the scheduled time. Searches with a real-time schedule do not backfill gaps in data that occur if the search is skipped.
• Use a continuous schedule to prioritize data completion, as searches with a continuous schedule are never skipped.
As excessive failed logins matter most when you hear about them quickly, select a real-time schedule for the search. If you care more about identifying all excessive failed logins in your environment, you can select a continuous schedule for the search instead.

0 Karma

printul77700
Explorer

Thank you. I am asking exactly because I am trying to find a good solution for some correlation searches I am building, while putting as little stress on the system as possible and accepting some possible delay, because the app and use cases are not critical, just important.

0 Karma

sarwshai
Communicator

It worked for me; however, I want to remove the Exchange servers whose names end with "$" from the results. Can this be done?

0 Karma

Nishit9
New Member

There is a little bit of confusion, so can you correct me if I am wrong?
It means I have to select the 1-minute-window time range preset, and in the alert select "run on cron schedule" with:
Earliest = -5m@m
Latest = @m
cron expression = */5 * * * *

0 Karma

lguinn2
Legend

No - my suggestion is that you not use a real-time search, and a preset of a 1-minute window is a real-time search.

But in a way it doesn't matter, because when you schedule the search, the settings for earliest and latest determine the search time range. These will override anything that you selected in the time range drop-down.
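
For illustration only (the stanza name is a placeholder and the search body is elided, not the exact alert), a minute-by-minute scheduled alert over the last 5 minutes looks roughly like this in savedsearches.conf:

 [detect_successful_brute_force]
 search = index=* (EventCode=4624 OR EventCode=4625) | ...
 enableSched = 1
 cron_schedule = * * * * *
 dispatch.earliest_time = -5m@m
 dispatch.latest_time = @m
 counttype = number of events
 relation = greater than
 quantity = 0

The counttype/relation/quantity settings correspond to the "number of results > 0" trigger condition, and dispatch.earliest_time/dispatch.latest_time are the earliest and latest settings that override the time range drop-down.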

0 Karma