Splunk Search

How is the unexpectedness score of the anomalies command calculated?

Path Finder

How does the unexpectedness score actually get computed? How does the anomalies command play out if I have n events? http://www.splunk.com/base/Documentation/latest/SearchReference/Anomalies

1 Solution

Splunk Employee

The algorithm is proprietary, but roughly speaking, the unexpectedness of an event X coming after a set of previous events P is estimated as:

 u(X | P) =  ( s(P and X) - s(P) ) /  ( s(P) + s(X) )

where s() is a metric of how similar or uniform the data is. The above formula tends to be less noisy on real data than other formulas we tried, since we just want a measure of how much adding X affects similarity, but need to normalize for differing event sizes.
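As a toy illustration (not the real implementation, which is proprietary), here is a Python sketch that plugs a made-up, compression-based stand-in for s() into the formula; the s() and unexpectedness names here are purely hypothetical:

    # Toy stand-in only: the real s() inside the anomalies command is proprietary.
    # "Uniformity" is approximated here by how well zlib compresses the text
    # (more redundancy -> better compression -> higher s), purely to show how
    # the formula reacts when an event X is appended to a window of events P.
    import zlib

    def s(text):
        """Stand-in similarity/uniformity score; higher means more self-similar."""
        if not text:
            return 0.0
        raw = text.encode("utf-8")
        return max(0.0, 1.0 - len(zlib.compress(raw, 9)) / len(raw))

    def unexpectedness(event, previous):
        """u(X | P) = ( s(P and X) - s(P) ) / ( s(P) + s(X) )"""
        p = "\n".join(previous)
        denom = s(p) + s(event)
        if denom == 0.0:
            return 0.0
        return (s(p + "\n" + event) - s(p)) / denom

    window = ["GET /index.html 200"] * 50
    print(unexpectedness("GET /index.html 200", window))          # repeats the window: near 0
    print(unexpectedness("kernel panic: out of memory", window))  # novel event: further from 0

The only point of the sketch is that an event which repeats what the window already contains barely moves the similarity measure, while a novel event shifts it noticeably.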

The size of the sliding window of previous events, P, is determined by the 'maxvalues' argument, which defaults to 100 events. By default, the raw text (_raw) of the events is used, but any other field can be used with the 'field' argument. By default, it removes the events whose unexpectedness does not exceed the 'threshold' argument, which defaults to 0.01; if the 'labelonly' argument is set to true, it only annotates the events with an unexpectedness score, rather than removing the "boring" events.
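To picture how this plays out over n events, here is a hypothetical sketch of the windowing and filtering described above. The anomalies_filter and score_fn names are made up; maxvalues, threshold, and labelonly mirror the command's arguments, but the real implementation is proprietary:

    # Illustrative sketch only -- not the actual implementation of the anomalies command.
    from collections import deque
    from typing import Callable, Iterable, Iterator, List, Tuple

    def anomalies_filter(events: Iterable[str],
                         score_fn: Callable[[str, List[str]], float],
                         maxvalues: int = 100,
                         threshold: float = 0.01,
                         labelonly: bool = False) -> Iterator[Tuple[float, str]]:
        """Yield (unexpectedness, event); drop 'boring' events unless labelonly is set."""
        window: deque = deque(maxlen=maxvalues)      # sliding window of previous events P
        for event in events:                         # event text: _raw, or the 'field' argument
            score = score_fn(event, list(window))    # u(X | P) for this event
            window.append(event)
            if labelonly or score > threshold:       # keep unexpected events, or annotate everything
                yield score, event

In SPL terms this corresponds to a search along the lines of ... | anomalies maxvalues=200 field=punct labelonly=true, which would score the punct field instead of _raw and keep every event with its score attached.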

You can run anomalies after anomalies to further narrow down the results. Since each run operates over a window of 100 events, and the second call only sees events that already survived a first pass, it approximates running over a window of 10,000 previous events.
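In SPL that is just ... | anomalies | anomalies. With the hypothetical anomalies_filter and unexpectedness sketches above, the chained call would look roughly like:

    # The second pass re-scores only the survivors of the first pass, so its
    # 100-event window indirectly summarizes on the order of 10,000 raw events.
    # raw_events, unexpectedness, and anomalies_filter are the made-up names
    # from the sketches above, not real APIs.
    first_pass = anomalies_filter(raw_events, score_fn=unexpectedness)
    second_pass = anomalies_filter((evt for _, evt in first_pass),
                                   score_fn=unexpectedness)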

Finally, nothing beats domain knowledge. If you know what you are looking for, it might make sense to write your own search command to find your anomalies.
