You may find that ML is overkill for this particular use case. Consider Apache web logs, for example, which can be configured to include RequestTimeSeconds, the time taken to process a request. You could then create an alert with something like the following:

index=weblogs earliest=-30m@m
| eventstats count, avg(RequestTimeSeconds) as avg_rts, stdev(RequestTimeSeconds) as stdev_rts by url
| where RequestTimeSeconds>(2*stdev_rts+avg_rts) AND count>10

This will give you a list of URLs that have been accessed more than 10 times and have events where the response time was more than 2 standard deviations above the average (per URL). You can extend this pattern to SQL logs, authentication logs, and so on: use a longer time window to develop baselines, track them on a daily/weekly/monthly basis, raise the limit above 2 standard deviations, require more than 10 events, aggregate by source/client, etc. You will need to experiment with these values to find settings that aren't too noisy yet still detect what you are looking for.
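To make the statistics concrete, here is a minimal Python sketch of the same test the search performs: group response times per URL, then flag any event more than 2 standard deviations above that URL's average, only for URLs with more than 10 events. The sample data is made up for illustration.

```python
from statistics import mean, stdev
from collections import defaultdict

# Hypothetical (url, response_time_seconds) events from the last 30 minutes.
events = [
    ("/login", 0.20), ("/login", 0.22), ("/login", 0.19), ("/login", 0.21),
    ("/login", 0.20), ("/login", 0.23), ("/login", 0.18), ("/login", 0.21),
    ("/login", 0.22), ("/login", 0.20), ("/login", 0.95),  # one slow outlier
]

# Group response times per URL, mirroring "eventstats ... by url".
by_url = defaultdict(list)
for url, rts in events:
    by_url[url].append(rts)

# Flag events over avg + 2*stdev for their URL, mirroring the "where"
# clause, and skip low-traffic URLs, mirroring "count>10".
anomalies = []
for url, times in by_url.items():
    if len(times) <= 10:
        continue
    avg_rts, stdev_rts = mean(times), stdev(times)
    threshold = avg_rts + 2 * stdev_rts
    anomalies.extend((url, t) for t in times if t > threshold)

print(anomalies)  # → [('/login', 0.95)]
```

Note that, like the 30-minute search, this computes the baseline over the same window that contains the event being tested; one of the extensions mentioned above is to build the baseline from a longer, earlier window instead so a single outlier inflates the statistics less.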