
Percentage calculation based on derived values, shown as a new row

deepa_purushoth
Engager

My data looks something like the table below. The first two rows are indexed data, and the 3rd and 4th rows are derived data added as new rows after some manipulation. My question is: I want to add one more new row containing a calculated percentage, for example between rows 2 & 3 or rows 3 & 4.

column1 | column2
XXX | 10
YYY | 20
abc | 30
efg | 10

The outcome should be something like:
column1 | column2
XXX | 10
YYY | 20
abc | 30
efg | 10
per | 33 (i.e., (column2 where column1=efg / column2 where column1=abc) * 100 = (10/30) * 100 ≈ 33)

Please help. Thanks!


niketn
Legend

@deepa_purushothaman, from the description provided, I am expecting two columns and four rows in your data, where you want the percentage to be calculated from the 3rd and 4th rows. Try the following run-anywhere search, which mimics the data in your question. The commands up to | table column1 column2 generate the sample data; you would need to plug in your existing query that produces the first table from your question.

| makeresults
| eval data="XXX|10;YYY|20;abc|30;efg|10"
| makemv data delim=";"
| mvexpand data
| eval data=split(data,"|")
| eval column1=mvindex(data,0)
| eval column2=mvindex(data,1)
| table column1 column2
| transpose header_field=column1 column_name=column1
| eval perc=round((efg/abc)*100,1)
| transpose header_field=column1 column_name=column1
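
Running the above as-is should produce:

column1 | column2
XXX | 10
YYY | 20
abc | 30
efg | 10
perc | 33.3

(The new row is labeled perc because that is the field name created by the eval; name the eval field per instead if you want it to show as per.)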

PS: This example uses transpose to invert the sample table from the question and perform the calculation as needed. If you can share your current search query, the community might be able to assist you with a better query that avoids the multiple transpose commands. Ideally, you would perform event-related calculations in the same row and then convert rows to columns only if required. One way to do that is sketched below.
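
For example, here is a minimal run-anywhere sketch of that same-row approach, assuming the literal row labels abc and efg from your sample data and per as the label for the new row: appendpipe runs a stats subpipeline over the existing rows, computes the percentage, and appends it as a new row, so no transpose is needed.

| makeresults
| eval data="XXX|10;YYY|20;abc|30;efg|10"
| makemv data delim=";"
| mvexpand data
| eval data=split(data,"|")
| eval column1=mvindex(data,0)
| eval column2=mvindex(data,1)
| table column1 column2
| appendpipe
    [ stats sum(eval(if(column1=="abc",tonumber(column2),0))) as abc
            sum(eval(if(column1=="efg",tonumber(column2),0))) as efg
    | eval column1="per", column2=round((efg/abc)*100,1)
    | table column1 column2 ]

Everything before appendpipe just recreates the sample table; in your real search you would replace it with your existing query and add only the appendpipe block at the end.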

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"