Splunk Search

How to add a table column that does operations in each cell based on the values from another column?

HattrickNZ
Motivator

I asked a similar question here, but this one is slightly different.

If I have a search that gives me something like this:

a b c
1 2 3
4 5 6
7 8 9
how do I add a column d that performs an operation on column c using the previous row's value, e.g. (9/6)-1 = 0.5? Ideally I am looking to get the percentage value, be it 0.5, 50 or 50%.

a b c d
1 2 3 (3/0)-1 = NA
4 5 6 (6/3)-1 = 1
7 8 9 (9/6)-1 = 0.5

I am not sure if using delta here is the best way to go. Or should I use head or tail?

Here is a test search I am working with:
| makeresults count=3 | streamstats count as a | eval a=a+1 | streamstats count as b | eval b=b+10 | streamstats count as c | eval c=c+11 | delta a as a_dif p=1
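
For context, this is roughly the direction I was thinking of with delta, applied to c instead of a (just a sketch, I am not sure it is the right approach; c_dif is analogous to a_dif above, and the previous value of c is c-c_dif):

| makeresults count=3 | streamstats count as a | eval a=a+1 | streamstats count as b | eval b=b+10 | streamstats count as c | eval c=c+11 | delta c as c_dif p=1 | eval d=c_dif/(c-c_dif)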

Can anyone advise?

1 Solution

woodcock
Esteemed Legend

Like this:

|noop|stats count AS a|eval a="1:4:7"|makemv delim=":" a|mvexpand a|eval b=a+1|eval c=b+1
| rename Comment AS "above is for spoofing fake data; below is the solution"
| autoregress c AS d p=1 | fillnull d value=0|eval d=round((c/d)-1,2)
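
If you want the "50%" style of output mentioned in the question, a possible follow-on eval (just a sketch; d_pct is an arbitrary field name):

| eval d_pct=tostring(round(d*100,1))."%"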

javiergn
Super Champion

Try this:

your search here
| streamstats values(c) as d window=1 current=f
| eval d = c/d - 1

You can then use fillnull if you don't want those NA values to display as blank.
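
For example (a sketch; the "N/A" label is just an illustration):

your search here
| streamstats values(c) as d window=1 current=f
| eval d = c/d - 1
| fillnull value="N/A" d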

woodcock
Esteemed Legend

I would use | streamstats current=f last(c) as d
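
i.e. dropped into the same spoofed-data example, something like this (just a sketch):

|noop|stats count AS a|eval a="1:4:7"|makemv delim=":" a|mvexpand a|eval b=a+1|eval c=b+1
| rename Comment AS "above is for spoofing fake data; below is the solution"
| streamstats current=f last(c) as d
| eval d=round((c/d)-1,2)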


HattrickNZ
Motivator

Tks. Is there a parameter similar to autoregress's p=1, which controls how many values back to use, that can be used with the streamstats described here? If not, autoregress is probably better as it provides that option.


woodcock
Esteemed Legend

Yes: window=


HattrickNZ
Motivator

Hmm, not sure. This

|noop|stats count AS a|eval a="1:4:7"|makemv delim=":" a|mvexpand a|eval b=a+1|eval c=b+1
     | rename Comment AS "above is for spoofing fake data; below is the solution" | streamstats values(c) as d current=f window=2

gives a different result to this:

|noop|stats count AS a|eval a="1:4:7"|makemv delim=":" a|mvexpand a|eval b=a+1|eval c=b+1
     | rename Comment AS "above is for spoofing fake data; below is the solution"
     | autoregress c AS d p=2 

woodcock
Esteemed Legend

I did not say it was identical; it provides similar capability that can be exploited. The window option is actually more flexible but takes more adjustment. You definitely should NOT be using values(), as indicated by the fact that none of the rest of us are using it. Try this:

|noop|stats count AS a|eval a="1:4:7"|makemv delim=":" a|mvexpand a|eval b=a+1|eval c=b+1
| rename Comment AS "above is for spoofing fake data; below is the solution"
| streamstats first(c) as d current=f window=2

Yes, I know it isn't exactly the same but the proof of concept is there.

HattrickNZ
Motivator

Tks, I get it with these 2 examples, which are very similar in how they offset column d.

|noop|stats count AS a|eval a="1:4:7:8:9:10:11:12:13"|makemv delim=":" a|mvexpand a|eval b=a+1|eval c=b+1
 | rename Comment AS "above is for spoofing fake data; below is the solution"
 | streamstats first(c) as d current=f window=3

2nd

|noop|stats count AS a|eval a="1:4:7:8:9:10:11:12:13"|makemv delim=":" a|mvexpand a|eval b=a+1|eval c=b+1
 | rename Comment AS "above is for spoofing fake data; below is the solution"
 | autoregress c AS d p=3

HattrickNZ
Motivator

Not sure if I have to use streamstats:
| makeresults count=3 | streamstats count as a | eval a=a+1 | streamstats count as b | eval b=b+10 | streamstats count as c | eval c=c+11 | delta a as a_dif p=1 | streamstats values(a) as a_vals
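
e.g. continuing along the lines of the answers above, something like this (a sketch, not sure it is exactly right):

| makeresults count=3 | streamstats count as a | eval a=a+1 | streamstats count as b | eval b=b+10 | streamstats count as c | eval c=c+11 | streamstats current=f last(c) as d | eval d=round((c/d)-1,2)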
