Splunk Search

How do I categorize particular values and calculate a percentage of them?

power12
Communicator

Hello Splunkers,

I have a field called state_sinfo which has values like (up, up*, up$, up^, continue, continue$, continued, continied$, down, down%, down#, drop, drop*, drop$)

I want to categorize certain values of state_sinfo as below:
available (up, up*, up$, up^, continue, continue$, continued, continied$)
not_available (down, down%, down#)
down (drop, drop*, drop$)

Then I want to calculate the sum of all categories by time.

Lastly, I want to calculate the percentage:
| eval "% available" = round( available / ( available + drop ) * 100 , 2)
| eval "% drained" = round( drop / (available + drop ) * 100 , 2)


Sample event

 

slu_ne_state{instance="192.1x.x.x.",job="exporters",node="xyz",partition="gryr",state_sinfo="down",state_sinfo_simple="maint"} 1.000000 1676402381347

Thanks in advance.


bowesmana
SplunkTrust

Here's an example that has one event for each of your possible states. Note that in your 'drop' case you give the category as 'down', but I assume that is supposed to be 'drop'.

| makeresults
| eval _raw="slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"down\",state_sinfo_simple=\"maint\"} 1.000000 1676402381347
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"down#\",state_sinfo_simple=\"maint\"} 1.000000 1676402381347
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"down%\",state_sinfo_simple=\"maint\"} 1.000000 1676402381348
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"up\",state_sinfo_simple=\"maint\"} 1.000000 1676402381349
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"up*\",state_sinfo_simple=\"maint\"} 1.000000 1676402381350
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"up$\",state_sinfo_simple=\"maint\"} 1.000000 1676402381351
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"up^\",state_sinfo_simple=\"maint\"} 1.000000 1676402381352
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"continue\",state_sinfo_simple=\"maint\"} 1.000000 1676402381353
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"continue$\",state_sinfo_simple=\"maint\"} 1.000000 1676402381354
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"continued\",state_sinfo_simple=\"maint\"} 1.000000 1676402381355
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"continied$\",state_sinfo_simple=\"maint\"} 1.000000 1676402381356
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"drop\",state_sinfo_simple=\"maint\"} 1.000000 1676402381357
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"drop*\",state_sinfo_simple=\"maint\"} 1.000000 1676402381358
slu_ne_state{instance=\"192.1x.x.x.\",job=\"exporters\",node=\"xyz\",partition=\"gryr\",state_sinfo=\"drop$\",state_sinfo_simple=\"maint\"} 1.000000 1676402381359"
| eval rows=split(replace(_raw, "\n", "##"), "##")
| mvexpand rows
| rename rows as _raw
``` Up to here is just setting up a data example ```
| rex "state_sinfo=\"(?<state_sinfo>[^\"]*)"
| eval category=case(match(state_sinfo, "up[\*\$\^]?|continue[\$d]?|continied\$"), "available",
                     match(state_sinfo, "down[%#]?"), "not_available",
                     match(state_sinfo, "drop[\*\$]?"), "drop")
| stats count by category
| transpose 0 header_field=category
| fields - column
| eval "% available" = round( available / ( available + drop ) * 100 , 2)
| eval "% drained" = round( drop / (available + drop ) * 100 , 2)

As you can see, this just sets up an example based on your data; the rex then extracts the state_sinfo field.
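
Against your own data, the makeresults/mvexpand block above just gets replaced by your normal base search, and everything from the rex onwards stays the same. For example (index=your_index and sourcetype=your_sourcetype are placeholders, not values taken from your post):

index=your_index sourcetype=your_sourcetype "slu_ne_state"
``` Everything from the rex onwards is identical to the search above ```
| rex "state_sinfo=\"(?<state_sinfo>[^\"]*)"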

The eval/case statement does the categorisation and the transpose turns the data around, so you can do the final calcs.
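
You also mentioned wanting the sum of all categories by time. One way to get that (a sketch, assuming hourly buckets; adjust span= as needed, and fillnull is only there so the percentage evals don't return null when a category has no events at all) is to swap the stats/transpose for a timechart, which gives one row per time bucket with a column per category, so the same percentage evals then work row by row:

| rex "state_sinfo=\"(?<state_sinfo>[^\"]*)"
| eval category=case(match(state_sinfo, "up[\*\$\^]?|continue[\$d]?|continied\$"), "available",
                     match(state_sinfo, "down[%#]?"), "not_available",
                     match(state_sinfo, "drop[\*\$]?"), "drop")
``` One row per time bucket, one column per category ```
| timechart span=1h count by category
| fillnull value=0 available not_available drop
| eval "% available" = round(available / (available + drop) * 100, 2)
| eval "% drained" = round(drop / (available + drop) * 100, 2)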

Hope this helps
