Splunk Search

Eval field based on similar events or single event

jiaqya
Builder

I have a case where I need to determine whether a row has been repeated multiple times or not.

It may have 4 common value columns, but the time might differ. Example below:

table:

servername:data1:data2:data3:_time
server1:10:20:30:25th Sep 2020
server1:10:20:30:26th Sep 2020
server1:10:20:30:27th Sep 2020
server2:20:30:10:28th Sep 2020
server3:20:30:10:28th Sep 2020

I need output like below, with a new eval field called occurrence, which should have the value "multiple" or "single" based on occurrences at different times.

Can anyone help me with this? Thanks.

servername:data1:data2:data3:_time:occurrence
server1:10:20:30:25th Sep 2020:multiple
server1:10:20:30:26th Sep 2020:multiple
server1:10:20:30:27th Sep 2020:multiple
server2:20:30:10:28th Sep 2020:single
server3:20:30:10:28th Sep 2020:single

 

 


richgalloway
SplunkTrust
SplunkTrust

Perhaps the cluster command will do?

... | cluster showcount=true 
| eval occurrence=if(cluster_count=1,"single","multiple")

 


jiaqya
Builder

I tried your suggestion but it didn't work. Below is the code I tried:

 

| makeresults
| eval servername="one",data1="10",data2="20",data3="30",_time="09-02-2020"
| append
[| makeresults
| eval servername="one",data1="10",data2="20",data3="30",_time="09-01-2020"]
| append
[| makeresults
| eval servername="one",data1="10",data2="20",data3="30",_time="08-31-2020"]
| append
[| makeresults
| eval servername="two",data1="1",data2="2",data3="3",_time="09-02-2020"]
| append
[| makeresults
| eval servername="three",data1="1",data2="20",data3="30",_time="09-02-2020"]
| table servername,data1,data2,data3,_time
| cluster showcount=true | eval occurrence=if(cluster_count=1,"single","multiple")
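One possible reason the cluster approach doesn't flag the repeats here is that, by default, cluster compares the _raw field, and events generated by makeresults have no _raw value, so there is nothing for cluster to group on. Below is an untested sketch of the idea: it builds a comparison field by hand (the field name raw is just an illustration) and points cluster at it via the field option; t controls how similar events must be to land in the same cluster.

```
| eval raw = servername.":".data1.":".data2.":".data3
| cluster showcount=true field=raw t=0.9
| eval occurrence = if(cluster_count=1, "single", "multiple")
```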


ITWhisperer
SplunkTrust

What criteria are you using to determine single or multiple?

Given

server1:10:20:30:25th Sep 2020:multiple
server1:10:20:30:26th Sep 2020:multiple
server1:10:20:30:27th Sep 2020:multiple
server2:20:30:10:28th Sep 2020:single
server3:20:30:10:28th Sep 2020:single

What would these be?

server2:10:20:30:26th Sep 2020:?
server3:10:20:30:27th Sep 2020:?
server2:20:30:10:27th Sep 2020:?
server2:20:30:10:28th Sep 2020:?
server3:20:30:10:28th Sep 2020:?

Note that the last two are the same as yours but would this new data change their occurrence value?


jiaqya
Builder

Except for the _time field, all the other fields have the same values; that is the criterion.

In other words, we can just ignore the _time field when determining similar rows.

If there is more than one row with the same data, I would like to mark (eval) them as multiple rows of the same data.

Otherwise, if there are no multiple occurrences of the same data row, show it as single.

The last two are different, as one is server2 and the other is server3.

I hope I was clear in my explanation.

 

expected output:

servername:data1:data2:data3:_time:occurrence
server1:10:20:30:25th Sep 2020:multiple
server1:10:20:30:26th Sep 2020:multiple
server1:10:20:30:27th Sep 2020:multiple
server2:20:30:10:28th Sep 2020:single
server3:20:30:10:28th Sep 2020:single


aa70627
Communicator

How about something like this, where you count the number of rows in which your unique field occurs? In this example, I've made your unique field a combination of servername, data1, data2, and data3.

 

| makeresults
| eval servername="one",data1="10",data2="20",data3="30",_time="09-02-2020"
| append
[| makeresults
| eval servername="one",data1="10",data2="20",data3="30",_time="09-01-2020"]
| append
[| makeresults
| eval servername="one",data1="10",data2="20",data3="30",_time="08-31-2020"]
| append
[| makeresults
| eval servername="two",data1="1",data2="2",data3="3",_time="09-02-2020"]
| append
[| makeresults
| eval servername="three",data1="1",data2="20",data3="30",_time="09-02-2020"]
| table servername,data1,data2,data3,_time
| eval uniquefield = servername.data1.data2.data3
| eventstats count by uniquefield
| eval occurrence = if(count=1, "single", "multiple")
| fields - count, uniquefield

 


jiaqya
Builder

Thanks, this works perfectly for my case.

I'm already coming up with unique values for those fields before I use your solution, so they will be distinct in any case.

Thank you.


ITWhisperer
SplunkTrust

@aa70627 Good idea, although you should probably include a delimiter that doesn't appear in the adjacent fields.

If data1=12 and data2=345 in one row, and data1=123 and data2=45 in another, you wouldn't be able to distinguish data1.data2 sufficiently.
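For example, the collision above disappears if each value is separated by a character assumed never to occur in the data (a pipe here, purely as an illustration):

```
| eval uniquefield = servername."|".data1."|".data2."|".data3
| eventstats count by uniquefield
| eval occurrence = if(count=1, "single", "multiple")
| fields - count, uniquefield
```

With the delimiter, the two rows produce "12|345" and "123|45", which no longer compare as equal.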


jiaqya
Builder

The data is already unique by the time the query below is run, so it works perfectly fine for my case.

 

| eval uniquefield = servername.data1.data2.data3
| eventstats count by uniquefield
| eval occurrence = if(count=1, "single", "multiple")
| fields - count, uniquefield
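An equivalent approach that sidesteps the delimiter question entirely is to count by the fields directly, since eventstats accepts multiple fields in its by clause (a sketch, assuming the same field names as above):

```
| eventstats count by servername, data1, data2, data3
| eval occurrence = if(count=1, "single", "multiple")
| fields - count
```

This avoids building and then discarding the intermediate uniquefield altogether.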
