Splunk Search

Count by group subgroup and use subgroup as column

splunknewbieste
New Member

I have a query like this:

....
| eval group_name = case(
    match(field, "value1"), "g1",
    match(field, "value2"), "g2",
    match(field, "value3"), "g3")
| spath status
| stats count by group_name status

which gives me a result like this:

group_name |  status  | count 
g1         | 200      | x
g1         | 400      | x
g2         | 200      | x
....

What I want instead is

group_name | 2xx | 3xx | 4xx | 5xx
g1         | x   | x   | x   | x
g2         | x   | x   | x   | x
....               

How can I change my query to achieve this?

Many thanks,

0 Karma
1 Solution

Runals
Motivator

Replace stats with chart:

... | chart count over group_name by status

I generally get the over and by pieces mixed up though, so you might have to swap those =). Alternatively, you could use xyseries after your stats command, but that isn't needed with chart ... over ... by (unless you want to sort). Recall that once you pipe chart into another command, a field named count no longer exists; the counts live in the per-status columns instead.
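Since the question asks for 2xx/3xx/4xx/5xx buckets rather than individual codes, here is a minimal sketch of one way to get there — assuming status holds the raw HTTP code as in the sample output, and using status_class purely as an illustrative field name — bucket the code before charting:

... | eval status_class = substr(status, 1, 1) . "xx"
    | chart count over group_name by status_class

The stats-plus-xyseries spelling of the same thing would be ... | stats count by group_name status_class | xyseries group_name status_class count.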

splunknewbieste
New Member

@Runals just wondering if it's possible to mix in stats for the events related to group_name? For example, what if I want two other columns in that table: (i) avg(durationMs) for each group (g1, g2, ...) and (ii) a total count for each group?

0 Karma

Runals
Motivator

Depending on the volume of data and other factors (i.e. lazy quotient), I might look at a join, but really only if you want the avg duration per group and not per group and status. My guess, though, is that what you really want is the avg duration per status code, right? It might look something like the following, which shows the avg and max memory usage by system over a series of weeks (assuming the table renders okay).

host     | week of 10/04/15 | week of 10/18/15 | etc
system1  | avg: 14.5 %      | avg: 21.0 %      |
         | max: 17.3 %      | max: 22.5 %      |
system2  | avg: 64.6 %      | avg: 67.1 %      |
         | max: 65.9 %      | max: 71.0 %      |

If that is the case, the query looks like this, at least for my example; you'd need to translate it to your own data:

sourcetype="Perfmon:Memory" counter="% Committed Bytes In Use" earliest=-4w@w latest=@w | bin span=1w _time | eval time = "Week of " .strftime(_time, "%m/%d/%y") | eval host=upper(host) | stats avg(Value) as avg max(Value) as max by host time | foreach avg max [eval <<FIELD>> = "<<FIELD>>: " .round(<<FIELD>>,1). " %"] | eval val = avg . "|" .max | makemv val delim="|" | xyseries host time val

The part to pick up on is the stats command, where I first get one row per host and week with the avg and max values. The foreach bit adds the % sign without needing two separate evals. Then I create a new field that merges the avg and max values with a delimiter, and use makemv to turn it into a multivalue field (so the results show up on two lines). Finally, I use the xyseries command I referenced above.
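Translated back to the original question — a rough sketch only, assuming durationMs is an extracted numeric field on each event — the same multivalue trick would stack the count and average duration per group and status in each cell:

... | eval group_name = case(match(field, "value1"), "g1", match(field, "value2"), "g2", match(field, "value3"), "g3")
| spath status
| stats count avg(durationMs) as avg_dur by group_name status
| eval count = "count: " . count
| eval avg_dur = "avg: " . round(avg_dur, 1) . " ms"
| eval val = count . "|" . avg_dur
| makemv val delim="|"
| xyseries group_name status val

If you only want one number per cell, drop the eval/makemv lines and pass a single aggregate straight to xyseries instead.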

0 Karma

splunknewbieste
New Member

It works well for me. Many thanks!

0 Karma