Splunk Search

Query a number within a range

deepakaakula
Explorer

Hi,

I have alerts that fire when disk usage goes above a certain percentage, so there are alerts at 70, 80, and 90. They work fine, but when the 70% alert fires I get alerted twice: once for 70% and once for 60% usage.

Here is what the query looks like. I am trying to keep each alert segmented so that it queries the number only between 60-69.99, 70.00-79.99, and so on.

aws_account="cloud" "DSM: Current disk usage for account" (account_disk_quota > 70 )

 

1 Solution

richgalloway
SplunkTrust

I don't understand why you are getting alerts about 60 when the alert clearly looks for values greater than 70.

Try this method for looking for values within a range.

aws_account="cloud" "DSM: Current disk usage for account" (account_disk_quota > 70 AND account_disk_quota < 80 )
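
A note on the boundaries: the ranges described above run from 70.00 to 79.99, and a strict "greater than 70" would skip a reading of exactly 70.00. A minimal variant with an inclusive lower bound, assuming the same field and matching text:

aws_account="cloud" "DSM: Current disk usage for account" (account_disk_quota >= 70 AND account_disk_quota < 80)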
---
If this reply helps you, Karma would be appreciated.


deepakaakula
Explorer

Hi @richgalloway,

I thought the alerts were working fine, but today the disk usage hit 70% and the alert triggered twice: once for the 70% as expected, and also for the 60% one. These are the queries I have right now.

Do you recommend any modifications?

60% threshold query: aws_account="cloud" "DSM: Current disk usage for account" (account_disk_quota > 60 AND account_disk_quota < 70)

70% threshold query: aws_account="cloud" "DSM: Current disk usage for account" (account_disk_quota > 70 AND account_disk_quota < 80)
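
If it helps to see how events split across the bands, here is a rough single search, assuming the same source text and field, that tags each event with its band and counts them. It is only a check, not a replacement for the individual alerts:

aws_account="cloud" "DSM: Current disk usage for account"
| eval band=case(account_disk_quota >= 90, "90-100", account_disk_quota >= 80, "80-89.99", account_disk_quota >= 70, "70-79.99", account_disk_quota >= 60, "60-69.99", true(), "below 60")
| stats count by band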


richgalloway
SplunkTrust
This seems normal to me. On the way to 70% usage, the disk would reach 60% usage, would it not?
---
If this reply helps you, Karma would be appreciated.

deepakaakula
Explorer

Right, but the disk was at 60% for the last 2 weeks, and yesterday evening it reached 70%.
So every time usage moves up into the 70% range, I get alerted twice, by both the 60% and 70% monitors.


richgalloway
SplunkTrust
I think I understand, but I don't have a suggestion. Sorry.
---
If this reply helps you, Karma would be appreciated.

deepakaakula
Explorer

Thanks, Rich. I have 4 different alerts with the same query for 60, 70, 80, and 90%. I just mentioned one of them here.

So when the 90% threshold is triggered, I get alerted 4 times.

I tried the query you gave with the AND operator. It did not seem to work.


richgalloway
SplunkTrust
Please explain "it did not seem to work". Did it work or did it not? What results did you get? What did you expect to get?
---
If this reply helps you, Karma would be appreciated.

deepakaakula
Explorer

Sorry, please ignore my last message. I was querying a different profile, and I got 0 events back.

I checked with the correct profile, and it works perfectly now.
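
For reference, a quick sanity check along those lines, assuming the same source text and field, that confirms events are present and shows the latest reported value before any band filter is applied:

aws_account="cloud" "DSM: Current disk usage for account"
| stats count, latest(account_disk_quota) as latest_quota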

 

Thank you.
