Splunk Search

How to categorize particular values and calculate a percentage for each category?

power12
Communicator

Hello Splunkers,

I have a field called state_sinfo which has values like (up,up*,up$,up^,continue,continue$,continued,continied$,down,down%,down#,drop,drop*,drop$)

I want to categorize certain values of state_sinfo as below:
available (up, up*, up$, up^, continue, continue$, continued, continied$)
not_available (down, down%, down#)
down (drop, drop*, drop$)

Then I want to calculate the sum of all categories by time.

Lastly, I want to calculate the percentage:
| eval "% available" = round( available / ( available + drop ) * 100 , 2)
| eval "% drained" = round( drop / (available + drop ) * 100 , 2)


Sample event:

slu_ne_state{instance="192.1x.x.x.",job="exporters",node="xyz",partition="gryr",state_sinfo="down",state_sinfo_simple="maint"} 1.000000 1676402381347

Thanks in advance


bowesmana
SplunkTrust

Here's an example that has one event for each of your possible states. Note that in your 'drop' case you give the category name as 'down', but I assume that is supposed to be 'drop'.

| makeresults
| eval _raw="slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"down\",state_sinfo_simple=\"maint\"} 1.000000 1676402381347
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"down#\",state_sinfo_simple=\"maint\"} 1.000000 1676402381347
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"down%\",state_sinfo_simple=\"maint\"} 1.000000 1676402381348
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"up\",state_sinfo_simple=\"maint\"} 1.000000 1676402381349
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"up*\",state_sinfo_simple=\"maint\"} 1.000000 1676402381350
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"up$\",state_sinfo_simple=\"maint\"} 1.000000 1676402381351
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"up^\",state_sinfo_simple=\"maint\"} 1.000000 1676402381352
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"continue\",state_sinfo_simple=\"maint\"} 1.000000 1676402381353
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"continue$\",state_sinfo_simple=\"maint\"} 1.000000 1676402381354
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"continued\",state_sinfo_simple=\"maint\"} 1.000000 1676402381355
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"continied$\",state_sinfo_simple=\"maint\"} 1.000000 1676402381356
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"drop\",state_sinfo_simple=\"maint\"} 1.000000 1676402381357
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"drop*\",state_sinfo_simple=\"maint\"} 1.000000 1676402381358
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"drop$\",state_sinfo_simple=\"maint\"} 1.000000 1676402381359"
| eval rows=split(replace(_raw, "\n", "##"), "##")
| mvexpand rows
| rename rows as _raw
``` Up to here is just setting up a data example ```
| rex "state_sinfo=\"(?<state_sinfo>[^\"]*)"
| eval category=case(match(state_sinfo, "up[\*\$\^]?|continue[\$d]?|continied\$"), "available",
                     match(state_sinfo, "down[%#]?"), "not_available",
                     match(state_sinfo, "drop[\*\$]?"), "drop")
| stats count by category
| transpose 0 header_field=category
| fields - column
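``` turn one row per category into one column per category for the percentage calcs ```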
| eval "% available" = round( available / ( available + drop ) * 100 , 2)
| eval "% drained" = round( drop / (available + drop ) * 100 , 2)

As you can see, the first part just sets up an example based on your data; the rex then extracts the state_sinfo field.

The eval/case statement does the categorisation and the transpose turns the data around so you can do the final calcs. Note that case() has no default clause here, so any state_sinfo value that matches none of the three patterns gets a null category and is dropped by the stats.
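
One thing the example doesn't cover is your "sum of all categories by time". A minimal variant for that, assuming _time carries the event time and that an hourly span suits your data (adjust span= as needed), is to swap the stats/transpose/fields section for a timechart:

| timechart span=1h count by category
| eval "% available" = round(available / (available + drop) * 100, 2)
| eval "% drained" = round(drop / (available + drop) * 100, 2)

timechart gives you one column per category and fills empty buckets with 0. In the makeresults example everything lands in a single bucket because all the generated events share the same _time, but against your real data you will get one row per span. Buckets where available + drop is 0 will show null percentages, since eval returns null on division by zero.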

Hope this helps
