I have been trying to figure out a search that can track failed logon events over time, but I'm really struggling to identify a workable solution (if there is one).
My initial search query was:
index=wineventlog EventCode=4625 NOT TargetUserName="*$"
| eval User=TargetDomainName."/".TargetUserName
| transaction User EventCode maxspan=1d
| stats values(User) by signature
Reading some other threads indicated that 'transaction' isn't very efficient and that streamstats or eventstats should be used instead, so I came up with:
index=wineventlog EventCode=4625 NOT TargetUserName="*$"
| eval User=TargetDomainName."/".TargetUserName
| eventstats sum(User) as Failed_Count by signature
| where Failed_Count >=3
| table User signature Failed_Count
However, this doesn't give me any results.
My aim is to search over a 7-day period and show stats per day for each user by signature. This would help with identifying bad scripts or possible brute-force attempts, including spray attacks, over a long period.
You can try this
index=wineventlog EventCode=4625 NOT TargetUserName="*$"
| eval User=TargetDomainName."/".TargetUserName
| bin _time span=1d
| stats count as Failed_Count by _time User signature
which will give you a table of failure counts by day, user and signature. From there you can aggregate or filter that data however you want (see the sketch below this paragraph). eventstats can be expensive if you have lots of data, so it's always best to aggregate with stats first where possible.
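As a side note, the reason your eventstats attempt returned nothing is that sum() needs a numeric field and User is a string; count is what you want there. Building on the daily table above, a minimal sketch of your >=3 threshold (just one possible variant) could be:
index=wineventlog EventCode=4625 NOT TargetUserName="*$"
| eval User=TargetDomainName."/".TargetUserName
| bin _time span=1d
| stats count as Failed_Count by _time User signature
| where Failed_Count>=3
That keeps only the user/signature/day combinations with three or more failures.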
It's not totally clear what output you want, i.e. do you want to filter failures >=3 for an individual signature over the whole week, or failures per user/signature per day? However, once you have the above table, you can do more with it, e.g.
| eventstats sum(Failed_Count) as SignatureFailures by signature
| eventstats sum(Failed_Count) as UserFailures by User
this would sum the failures by signature and by user over the whole time range. You could also add _time to the by clause to get the daily numbers too (see the sketch below).
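For example, a rough sketch of the daily variant, appended after the stats table above (DailySignatureFailures and DailyUserFailures are just illustrative names):
| eventstats sum(Failed_Count) as DailySignatureFailures by _time signature
| eventstats sum(Failed_Count) as DailyUserFailures by _time User
Each row then carries its per-day signature total and per-day user total alongside the original Failed_Count.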
Note that running eventstats after stats works on a much smaller data volume, because stats has already aggregated the raw events.
Hope this helps and gives you more to work with.
Thanks for the ideas. I think I am trying to do too much at once. My intention was to map out failed logon attempts by user and signature per day, and possibly average them over time, but that may be biting off more than I (or Splunk) can chew in a way that produces usable results, though I have left a rough sketch of the averaging idea below the query.
I started to separate the data out by the signature field instead, producing several different groups of data/reports, which seems to work a bit more easily.
index=wineventlog EventCode=4625 NOT TargetUserName="*$" signature="Account is currently disabled"
| bucket _time span=1d
| eval User=TargetDomainName."/".TargetUserName
| stats count by User src _time
| timechart sum(count) as count by User
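If I do revisit the averaging idea, a rough sketch building on the daily table (Avg_Daily_Failures and Max_Daily_Failures are just placeholder field names) might be:
index=wineventlog EventCode=4625 NOT TargetUserName="*$"
| eval User=TargetDomainName."/".TargetUserName
| bucket _time span=1d
| stats count as Failed_Count by _time User signature
| stats avg(Failed_Count) as Avg_Daily_Failures max(Failed_Count) as Max_Daily_Failures by User signature
| sort - Avg_Daily_Failures
Over a 7-day window that would give each user's average and worst day per signature, which might be enough to surface spray-style patterns without charting every account.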