Splunk Search

How do I edit my search to monitor license usage status for the current day?

Powers64
Explorer

Before I start: I found https://answers.splunk.com/answers/187080/how-to-create-a-search-to-predict-license-violatio.html very helpful for predicting license usage by end of day. However, I am looking for the current status of license usage for the current day. For example, if the license is 100GB, then at 9AM usage should not exceed 37.5GB, calculated from the time as 900/2400 (military time). I have created a search that takes the license used and the maximum allowed usage for the current time of day. The result should show "WARNING" if usage is above the maximum allowed, or "GOOD" if it is at or below it. I am having some difficulty getting the IF statement to work; I believe it sees the two values as strings rather than numbers. I tried a few eval commands with no luck (or maybe I didn't do it right). Can someone assist me with this, please?

| rest splunk_server=local /services/licenser/pools
| rename title AS Pool
| search [rest splunk_server=local /services/licenser/groups | search is_active=1 | eval stack_id=stack_ids | fields stack_id]
| join type=outer stack_id
    [rest splunk_server=local /services/licenser/stacks | eval stack_id=title | eval stack_quota=quota | fields stack_id stack_quota]
| stats sum(used_bytes) as used max(stack_quota) as total
| eval usedGB=round(used/1024/1024/1024,4)
| append
    [| stats count AS tnow | eval tnow = now() | eval timenow=strftime(tnow,"%H%M") | eval useMAX=((timenow/2400)*100)]
| convert num(useMAX) as IntMax
| eval license_stats=if('usedGB' >= 'IntMax', "WARNING", "GOOD")
| fields usedGB, license_stats, IntMax
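
As a sanity check on the time-based ceiling by itself, here is a minimal makeresults sketch; quotaGB=100 and the other field names are hypothetical stand-ins for illustration (nothing here is read from the licenser endpoints), and tonumber() forces the HHMM string returned by strftime() to be treated as a number:

| makeresults
| eval quotaGB=100
| eval timenow=strftime(now(),"%H%M")
| eval dayFraction=tonumber(timenow)/2400
| eval maxAllowedGB=round(quotaGB*dayFraction,2)
| table timenow dayFraction maxAllowedGB

At 9AM this gives 900/2400 = 0.375, i.e. 37.5GB of a 100GB quota, matching the example above. Note that HHMM/2400 is only an approximation of the elapsed fraction of the day (0930/2400 = 0.3875, while 9:30 is actually 0.3958 of the way through the day); (now() - relative_time(now(),"@d"))/86400 would give the exact fraction.
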
1 Solution

somesoni2
Revered Legend

I think you're almost there. Instead of append, you should use appendcols, so that the subsearch result field IntMax is added as a column to the main search result and can be used in the eval.

| rest splunk_server=local /services/licenser/pools
| rename title AS Pool
| search [rest splunk_server=local /services/licenser/groups | search is_active=1 | eval stack_id=stack_ids | fields stack_id]
| join type=outer stack_id
    [rest splunk_server=local /services/licenser/stacks | eval stack_id=title | eval stack_quota=quota | fields stack_id stack_quota]
| stats sum(used_bytes) as used max(stack_quota) as total
| eval usedGB=round(used/1024/1024/1024,4)
| appendcols
    [| stats count AS tnow | eval tnow = now() | eval timenow=strftime(tnow,"%H%M") | eval useMAX=((timenow/2400)*100)]
| convert num(useMAX) as IntMax
| eval license_stats=if('usedGB' >= 'IntMax', "WARNING", "GOOD")
| fields usedGB, license_stats, IntMax
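
To illustrate the difference in isolation, here is a minimal makeresults sketch with made-up values (usedGB=42 stands in for the real licenser data) showing both fields landing on the same row so that if() can compare them:

| makeresults
| eval usedGB=42.0
| appendcols
    [| makeresults
     | eval timenow=strftime(now(),"%H%M")
     | eval useMAX=(tonumber(timenow)/2400)*100]
| eval license_stats=if(usedGB >= useMAX, "WARNING", "GOOD")
| table usedGB useMAX license_stats

With append instead, the subsearch output becomes a separate row, so usedGB and useMAX never appear in the same event and the if() never has both values to compare; appendcols merges the subsearch fields into the existing result row.
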

Powers64
Explorer

Awesome this worked, thank you!
