Hello all,
I'm trying to create an alert for Successful Brute Force Attempts using the Authentication Data Model. Currently, I'm doing this:
| tstats summariesonly=true count as success FROM datamodel=Authentication where Authentication.user!="*$*" AND Authentication.action="success" BY _time span=60m sourcetype Authentication.src Authentication.user
| join max=0 _time, sourcetype, Authentication.src, Authentication.user
[| tstats summariesonly=true count as failure FROM datamodel=Authentication where Authentication.user!="*$*" AND Authentication.action="failure" BY _time span=60m sourcetype Authentication.src Authentication.user]
| search success>0 AND failure>0
| where success<(failure*.05)
| rename Authentication.* as *
| lookup dnslookup clientip as src OUTPUT clienthost as src_host
| table _time, user, src, src_host, sourcetype, failure, success
This aggregates successful and failed logins per user, per src, per sourcetype, in hourly buckets. It then returns results where a user failed to authenticate to a specific sourcetype from a specific src at least 95% of the time within the hour, but not 100% of the time (the user tried to log in a bunch of times, most of the attempts failed, but at least one succeeded).
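For context, I suspect the join could be avoided by counting both actions in a single tstats pass with eval-based aggregations (which, as far as I can tell, is roughly how the ES correlation searches do it). A rough, untested sketch of that version, keeping my existing thresholds and output columns:
| tstats summariesonly=true count(eval('Authentication.action'=="failure")) as failure, count(eval('Authentication.action'=="success")) as success FROM datamodel=Authentication where Authentication.user!="*$*" AND (Authentication.action="success" OR Authentication.action="failure") BY _time span=60m sourcetype Authentication.src Authentication.user
| rename Authentication.* as *
| where success>0 AND failure>0 AND success<(failure*0.05)
| lookup dnslookup clientip as src OUTPUT clienthost as src_host
| table _time, user, src, src_host, sourcetype, failure, success
That at least gets rid of the subsearch, but it still only gives me per-hour ratios, not ordering.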
What everyone else would like me to do is look for the traditional pattern for Successful Brute Force Attempts: many failed logins followed by a successful one.
I can't seem to find an efficient way of doing that with the Splunk Authentication Data Model. We get over a million authentication attempts per day, and just under 100,000 of those are failures.
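The only shape I can picture for the time-ordered version is pulling both actions out of the data model in small buckets, sorting by time, and keeping a running failure count per src/user with streamstats. A rough, untested sketch (the 1-minute span and the threshold of 20 failures are arbitrary numbers I made up, and this counts all failures so far in the search window, not just the ones since the last success):
| tstats summariesonly=true count FROM datamodel=Authentication where Authentication.user!="*$*" AND (Authentication.action="success" OR Authentication.action="failure") BY _time span=1m sourcetype Authentication.src Authentication.user Authentication.action
| rename Authentication.* as *
| eval failed=if(action=="failure",count,0)
| sort 0 src user sourcetype _time
| streamstats sum(failed) as failures_so_far by src user sourcetype
| where action=="success" AND failures_so_far>=20
| table _time, user, src, sourcetype, failures_so_far
My worry is the sort 0 and streamstats over that many rows every time the alert runs, which is why I'm hoping someone has a smarter approach.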
We can't afford Splunk Enterprise Security (Higher Ed), and my understanding is that this is more or less how the ES Successful Brute Force detection works anyway.
Do any of you have advice on efficiently finding a pattern of a large number of one thing (failures, whatever) followed by something else (a success, etc.)?