Hey All,
Recently, while browsing through Splunk’s official research site, I came across an SPL (Search Processing Language) query that didn’t seem to work as expected:
https://research.splunk.com/endpoint/d82d4af4-a0bd-11ec-9445-3e22fbd008af/?query=kerbero
What’s the official way to report issues like this? Whether it’s a broken query, outdated syntax, or something that doesn’t align with current best practices, having a clear feedback channel would be incredibly helpful.
Regards,
RP
This looks like it's pulled from the following configuration: https://github.com/splunk/security_content/blob/748a002dd000849f2749ec410ade88d7f6387215/detections/...
The best way to report this would probably be to raise an issue on the GitHub repo at https://github.com/splunk/security_content/issues
If you're comfortable with GitHub, you could fork the repo, fix the SPL, and then raise a pull request back to the main repo.
BTW, why does this search not "work as expected"? It's not a challenge, just a question. One thing that might or might not be a syntactic mistake is the lowercase "and" in the isOutlier condition; I don't remember whether it has to be uppercase in this context as well.
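If it does turn out that case matters there, the fix would just be uppercasing the operator, something like this. The exact condition is my guess based on the field names in this thread, and the threshold of 10 is illustrative, not taken from the original:

| eval isOutlier=if(unique_accounts > 10 AND unique_accounts >= upperBound, 1, 0)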
Oh, and you might want to discuss it on the #security-research channel on Slack.
Hey, this logic has a few issues (this is just my opinion, of course).
I ran a test using kerbrute.
Below you can see the results using the SPL from the documentation. The enumeration took a while, and we see events spanning about 1 hour.
1st
I removed values(TargetUserName) as tried_accounts from the stats command, because keeping all of them doesn't make sense (64k values in 2 minutes). If you do keep it, I recommend limiting how many example values are shown:
| eval tried_accounts=mvindex(tried_accounts, 0, 9)
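For context, here's roughly how that cap would sit in the pipeline. The 2-minute bucketing follows the numbers above, but the by-clause and exact stats step are my assumptions, not the exact original query:

| bucket span=2m _time
| stats dc(TargetUserName) as unique_accounts, values(TargetUserName) as tried_accounts by _time, Computer
| eval tried_accounts=mvindex(tried_accounts, 0, 9)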
2nd
isOutlier wouldn't trigger when we run a scan testing a massive number of users in an environment that is otherwise pretty quiet, like in the screenshots above.
The reason is this logic:
| eval upperBound=(comp_avg+comp_std*3)
A scanner generally issues a similar number of tests per minute (in my case the average was 35k, with a standard deviation of 3.3k). Based on that, the scan averages 70k per 2-minute bucket, and the standard deviation I got was almost 10k. As a result, to register as an outlier I would have to see 100k+ accounts within 2 minutes, but the scanner never crossed 80k.
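For anyone reading without the original detection open, the outlier portion has roughly this shape, picking up after the stats step sketched earlier. The eventstats step and the exact isOutlier condition are my assumptions based on the field names quoted in this thread:

| eventstats avg(unique_accounts) as comp_avg, stdev(unique_accounts) as comp_std by Computer
| eval upperBound=(comp_avg+comp_std*3)
| eval isOutlier=if(unique_accounts > upperBound, 1, 0)
| where isOutlier=1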
It depends on the time range you're running it over. (And yes, it's not the fastest search.)
It relies on the fact that you'd have a high, relatively short-lived spike. Assuming a normal distribution (a typical assumption for real-life data), the 3-sigma range should contain 99.7% of the data. Hence, if you have a short spike, it should not affect your std dev too much, so the scan should stand out. But if you search over a time range of which the anomalous scan numbers are a significant part, they will significantly inflate the std dev, so your bounds might get way too broad.
So it does make sense, but it might need tweaking. If you don't mind dealing with false positives, you might want to lower the bound limit and switch to 2-sigma.
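A minimal version of that tweak, assuming the same field names as above:

| eval upperBound=(comp_avg+comp_std*2)
| eval isOutlier=if(unique_accounts > upperBound, 1, 0)

That trades specificity for sensitivity: roughly 95% of normal data falls inside 2-sigma bounds instead of 99.7%, so you'll catch smaller spikes but triage more alerts.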