We had a vendor set up our Splunk instance and configure a "Brute Force Attack" alert with the following query.
--- original brute force alert ----
| tstats summariesonly=t allow_old_summaries=t count from datamodel=Authentication by Authentication.action, Authentication.src
| rename Authentication.src as source, Authentication.action as action
| chart last(count) over source by action
| where success>0 and failure>20
| sort -failure
| rename failure as failures
| fields - success, unknown
This seemed to be working OK, but lately we've been getting a lot of emails from it. Most of them I've fixed; they were caused by a bad password in an automated job. But for the last one left on my list, the source is listed as "unknown" and I can't seem to find any more information about it.
I'm new to Splunk, so I'm probably not looking in the right place or in the right way.
Has anyone got any suggestions on how to track down what it might be?
The query uses a data model, which automatically inserts "unknown" into certain fields when the underlying event has no value for them. There's nothing you can do about that in the search itself, other than fixing the data source so it provides a proper value.
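One way to narrow down which feed is responsible, for example, is to ask the same accelerated summary which indexes and sourcetypes are producing authentication events with no src. This is only a rough sketch built on the same Authentication data model your alert uses:

| tstats summariesonly=t allow_old_summaries=t count from datamodel=Authentication where Authentication.src="unknown" by Authentication.action, index, sourcetype

Whichever sourcetype dominates that result is the feed whose events are missing a usable src value.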
Consider using the datamodel's constraints to fetch the raw events used to detect brute force attacks. Perhaps something there will provide a clue to the source.
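A minimal sketch of that approach, assuming the standard CIM Authentication data model (whose root dataset is also named Authentication) and that the alert fired on failure events:

| datamodel Authentication Authentication search
| search Authentication.action="failure" Authentication.src="unknown"
| table _time, index, sourcetype, source, host, _raw

Looking at the raw events behind the "unknown" row should show which input or add-on produced them and what the original fields look like, which is usually enough to tell whether the fix belongs in the device's logging configuration or in the field extractions that map events into the data model.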