Splunk Search

Calculate where criteria with value from a subsearch

jheiselman
Explorer

I'm sure this has been asked before, but none of the searches I've run against this forum have proved useful.

I want to check for Windows hosts where the number of Context Switches/sec is higher than a calculated amount. That calculation needs to take into account the number of processors on the system.

To get the number of processors, I found that I can run the following search:
index="perfmon" sourcetype="Perfmon:CPU" instance!="_Total" | stats dc(instance) AS NumProcessors by host

To get the number of Context Switches/sec, it's as easy as:
index="perfmon" sourcetype="Perfmon:System" counter="Context Switches/sec"

And I want to limit the results of the context switches query to hosts where Value > 5000 * NumProcessors. I thought a subsearch might be the way, but I can't get that to work. Something like the following is what I want, but it doesn't work: a subsearch is evaluated once and its results are substituted into the outer search as text, so it can't supply a per-row value to where.

index="perfmon" sourcetype="Perfmon:System" counter="Context Switches/sec"
| stats avg(Value) AS avg_cs by host
| where avg_cs > (5000 * [search index="perfmon" host=$host$ sourcetype="Perfmon:CPU" instance!="_Total" | stats dc(instance) AS NumProcessors by host])
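For reference, one way to avoid the subsearch entirely would be to pull both sourcetypes in a single search and separate them with eval-based stats functions. This is only a sketch of the idea, not something verified against this data:

index="perfmon" ((sourcetype="Perfmon:CPU" instance!="_Total") OR (sourcetype="Perfmon:System" counter="Context Switches/sec"))
| stats dc(eval(if(sourcetype=="Perfmon:CPU", instance, null()))) AS NumProcessors
        avg(eval(if(sourcetype=="Perfmon:System", Value, null()))) AS avg_cs
        by host
| where avg_cs > (5000 * NumProcessors)

Because both event streams share the host field, a single stats can count processors and average context switches at the same time.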

1 Solution

jheiselman
Explorer

Got it to work for me.

index="perfmon" sourcetype="Perfmon:System" counter="Context Switches/sec"
| stats avg(Value) AS avg_cs by host
| sort host
| appendcols
  [ search index="perfmon" sourcetype="Perfmon:CPU" instance!="_Total"
    | stats dc(instance) AS NumProcessors by host
    | sort host]
| eval threshold=(5000*NumProcessors)
| where avg_cs>=threshold
| rename avg_cs AS "Context Switches/sec"
| fields host, "Context Switches/sec", threshold



bowesmana
SplunkTrust
SplunkTrust

When using appendcols, you need to be 100% certain that the number and order of rows in your outer search and in the subsearch are the same, otherwise the columns will not necessarily line up with the correct host.

It is sometimes better to use append, as below, which needs no pre-sorting and has no issue with missing host data:

index="perfmon" sourcetype="Perfmon:System" counter="Context Switches/sec"
| stats avg(Value) AS avg_cs by host
| append
  [ search index="perfmon" sourcetype="Perfmon:CPU" instance!="_Total"
    | stats dc(instance) AS NumProcessors by host 
    | eval threshold=(5000*NumProcessors) ]
| stats values(*) as * by host
| where !isnull(threshold) AND avg_cs>=threshold
| rename avg_cs AS "Context Switches/sec"
| fields host, "Context Switches/sec", threshold
| sort host

This basically creates data set 1 and data set 2, with the common host field.

The stats values(*) as * by host then collapses the two data sets, joining the columns for each host. Where a host has no threshold, for example because the subsearch returned no data for it, the isnull() check handles that case.
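The collapsing behaviour can be seen in isolation with makeresults (a synthetic example, not data from this thread):

| makeresults | eval host="hostA", avg_cs=12000
| append
  [| makeresults | eval host="hostA", NumProcessors=2, threshold=10000 ]
| stats values(*) as * by host
| where !isnull(threshold) AND avg_cs>=threshold

The two rows for hostA merge into one row carrying both avg_cs and threshold, and the where clause then compares them on that single row.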

 


jheiselman
Explorer

@bowesmana wrote:

When using appendcols, you need to be 100% certain that the number and order of rows in your outer search and in the subsearch are the same, otherwise the columns will not necessarily line up with the correct host.

Understood. Part of what I've excluded is a couple of lookups done to ensure that the host list is always the same and in the same order.
