
Kubernetes logs spamming Splunk

gschwel
New Member

We are having issues with Kubernetes containers sometimes spamming Splunk with hundreds of GB of logs. We would like to put together a search that tracks containers with a sudden log spike and generates an alert. More specifically: 1) look at the average rate of events, 2) find the peak, 3) decide on a percentage of that peak, and 4) trigger an alert when a container has breached that threshold.

The closest I have come up with is the search below, which computes the hourly event count per container along with the average and standard deviation of that count:

index="apps" sourcetype="kube"
| bucket _time span=1h
| stats count as CountByHour by _time, kubernetes.container_name
| eventstats avg(CountByHour) as AvgByKCN stdev(CountByHour) as StDevByKCN by kubernetes.container_name

 

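For steps 2-4, a rough sketch of where I was thinking of going with it (untested; the 80% factor, the 7-day lookback, and the PeakByKCN/Threshold names are just placeholders) would be to add the peak with eventstats, derive a threshold from it, and keep only containers whose most recent full hour exceeds it:

index="apps" sourcetype="kube" earliest=-7d@h latest=@h
| bucket _time span=1h
| stats count as CountByHour by _time, kubernetes.container_name
| eventstats avg(CountByHour) as AvgByKCN stdev(CountByHour) as StDevByKCN max(CountByHour) as PeakByKCN by kubernetes.container_name
| eval Threshold=PeakByKCN*0.8
| where CountByHour > Threshold AND _time >= relative_time(now(), "-1h@h")
| table _time, kubernetes.container_name, CountByHour, AvgByKCN, PeakByKCN, Threshold

Saved as an hourly scheduled search with an alert condition of "number of results greater than 0", that should fire whenever a container's last complete hour crosses 80% of its recent peak. Since AvgByKCN and StDevByKCN are already computed, the eval could also be swapped for something like Threshold=AvgByKCN+3*StDevByKCN if a standard-deviation-based spike definition works better than a percentage of the peak.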