Using Machine Learning Toolkit to detect server latency

indeed_2000
Motivator

Hi

I have a key-value field called `duration` in my application log that shows how long each job took.

Each day, when I alert on the maximum duration, I get a lot of false positives, because an occasional high duration at some point is natural.

What is abnormal is when the duration stays high across consecutive events.

e.g. 
normal condition:

00:01:00.000 WARNING duration[0.01]
00:01:00.000 WARNING duration[100.01]
00:01:00.000 WARNING duration[0.01]

 

abnormal condition:

00:01:00.000 WARNING duration[0.01]
00:01:00.000 WARNING duration[100.01]
00:01:00.000 WARNING duration[50.01]

00:01:00.000 WARNING duration[90.01]
00:01:00.000 WARNING duration[100.01]
00:01:00.000 WARNING duration[0.01]

 

1. How can I detect the abnormal condition with Splunk? (What is the best way to minimize false positives on huge data?)
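To make the idea concrete, here is a minimal Python sketch of what I mean by "continues high": a single spike keeps a moving-window average low, while several consecutive high values push it over a threshold. The window size and threshold here are just assumptions to tune on real data, not part of my actual setup:

```python
# Flag "sustained" high duration with a sliding-window mean:
# one isolated spike is tolerated, consecutive high values are not.
# window and threshold are illustrative assumptions.

def sustained_high(durations, window=3, threshold=50.0):
    """Return True if any `window` consecutive durations have a mean above `threshold`."""
    for i in range(len(durations) - window + 1):
        chunk = durations[i:i + window]
        if sum(chunk) / window > threshold:
            return True
    return False

normal = [0.01, 100.01, 0.01]                          # one spike, window mean ~33
abnormal = [0.01, 100.01, 50.01, 90.01, 100.01, 0.01]  # several consecutive highs

print(sustained_high(normal))    # False
print(sustained_high(abnormal))  # True
```

In SPL terms this would roughly correspond to a `streamstats window=N avg(duration)` followed by a threshold condition, but I am not sure that is the best approach on huge data.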

2. Which visualization or chart is most suitable to show this abnormal condition daily? This is a huge log file, and it is difficult to show all the data for each day on a single chart.
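For the charting side, one option I am considering is reducing each day to a few summary statistics (e.g. median and a high percentile) and charting only those instead of every event. A minimal Python sketch of that reduction, where the dates, values, and percentile choice are illustrative assumptions:

```python
# Reduce per-event durations to one summary row per day,
# which is cheap to chart even for a huge log file.
from collections import defaultdict
from statistics import median, quantiles

# hypothetical (day, duration) pairs standing in for parsed log events
events = [
    ("2024-01-01", 0.01), ("2024-01-01", 100.01), ("2024-01-01", 0.01),
    ("2024-01-02", 0.01), ("2024-01-02", 100.01), ("2024-01-02", 50.01),
    ("2024-01-02", 90.01),
]

by_day = defaultdict(list)
for day, duration in events:
    by_day[day].append(duration)

for day, vals in sorted(by_day.items()):
    p95 = quantiles(vals, n=20)[-1]  # last of 19 cut points ~ 95th percentile
    print(day, "median=%.2f" % median(vals), "p95=%.2f" % p95)
```

In Splunk this kind of daily rollup would presumably be a `timechart span=1d` with `median(duration)` and a percentile function, but I would like to know which statistics and chart type work best in practice.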

Any idea?

 Thanks,
