Besides the obvious things, like looking for rare field values: what anomaly searches do you use to find unexpected events?
How do you limit false positives and uninteresting anomalies?
The more specific you can be, and the more searches you can list, the better.
Feedback from our Splunk customers showed a consistent set of top anomalies they were looking for. These can be found by running:
| prelertautodetect metric_value by metric_name
| prelertautodetect count
| prelertautodetect count by field_name
| prelertautodetect rare by field_name
| prelertautodetect metric_value over field_value
| prelertautodetect count over field_value
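For context, prelertautodetect is a custom search command from the Prelert app, so it runs over the results of an ordinary search. A hypothetical example (the sourcetype and field names here are placeholders for illustration, not from Prelert's documentation) might look like:

sourcetype=access_combined | prelertautodetect count by status

which would look for unusual event rates per HTTP status code.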
Some specific customer examples are shown here: http://support.prelert.com/customer/portal/articles/1355584-examples-overview
To limit false positives, we've found it is essential to apply accurate statistical models to the data. In particular, modelling the tails of the probability distributions accurately is key to reducing false positives. In addition, automatically modelling the periodic and seasonal components means you can model the residuals, which again improves accuracy.
Finally, we've found that normalising the results allows the signal-to-noise ratio to be controlled, providing an accurate ranking of results in highly anomalous environments.
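As a very rough illustration of the idea in plain SPL (a naive Gaussian model on hourly event counts; this is a crude sketch, not what our app actually does, and it won't handle heavy tails or seasonality):

<your base search> | bin _time span=1h | stats count by _time
| eventstats avg(count) AS mu, stdev(count) AS sigma
| eval z=abs(count-mu)/coalesce(sigma,1)
| where z > 3
| sort - z

Hours more than three standard deviations from the mean rate surface first; the accuracy issues described above are exactly why this naive version tends to generate false positives.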
Happy to provide more customer examples as required.
blah blah | stats count AS cnt, values(_raw) AS events by punct | sort cnt

will show the full _raw events for uncommon punctuation signatures. A bit like rare, but for the shape of the whole event, so to speak.
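On large datasets, values(_raw) can blow up; one tweak (the output field names here are just placeholders) is to keep a single sample event per punctuation signature and only look at the least common ones:

blah blah | stats count AS cnt, first(_raw) AS sample by punct | sort cnt | head 20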
/k