Splunk Search

Eval field based on similar events or single event

jiaqya
Builder

I have a case where I need to determine whether a row has been repeated multiple times.

A row may share the same values in four columns while the time differs. Example below:

table:

servername:data1:data2:data3:_time
server1:10:20:30:25th Sep 2020
server1:10:20:30:26th Sep 2020
server1:10:20:30:27th Sep 2020
server2:20:30:10:28th Sep 2020

I need output like below, with a new eval field called occurrence which should have the value "multiple" or "single" based on occurrence at different times.

Can anyone help me with this? Thanks.

servername:data1:data2:data3:_time:occurrence
server1:10:20:30:25th Sep 2020:multiple
server1:10:20:30:26th Sep 2020:multiple
server1:10:20:30:27th Sep 2020:multiple
server2:20:30:10:28th Sep 2020:single
server3:20:30:10:28th Sep 2020:single

 

 


richgalloway
SplunkTrust

Perhaps the cluster command will do?

... | cluster showcount=true 
| eval occurrence=if(cluster_count=1,"single","multiple")

 

---
If this reply helps you, Karma would be appreciated.

jiaqya
Builder

I tried your suggestion but it didn't work. Below is the code I tried:

 

| makeresults
| eval servername="one",data1="10",data2="20",data3="30",_time="09-02-2020"
| append
[| makeresults
| eval servername="one",data1="10",data2="20",data3="30",_time="09-01-2020"]
| append
[| makeresults
| eval servername="one",data1="10",data2="20",data3="30",_time="08-31-2020"]
| append
[| makeresults
| eval servername="two",data1="1",data2="2",data3="3",_time="09-02-2020"]
| append
[| makeresults
| eval servername="three",data1="1",data2="20",data3="30",_time="09-02-2020"]
| table servername,data1,data2,data3,_time
| cluster showcount=true | eval occurrence=if(cluster_count=1,"single","multiple")
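One likely reason this returns nothing useful: by default, cluster compares events on the _raw field, and rows generated by makeresults plus eval have no _raw. A minimal sketch of a workaround, assuming the goal is exact-match grouping (the rowkey field name is illustrative, not from the thread):

| eval rowkey = servername . "|" . data1 . "|" . data2 . "|" . data3 ```rowkey is an illustrative name for a synthetic field cluster can compare```
| cluster field=rowkey showcount=true
| eval occurrence = if(cluster_count=1, "single", "multiple")

Note that cluster does fuzzy matching (its threshold t defaults to 0.8), so for exact duplicate counting the eventstats approach later in this thread is the more precise fit.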


ITWhisperer
SplunkTrust

What criteria are you using to determine single or multiple?

Given

server1:10:20:30:25th Sep 2020:multiple
server1:10:20:30:26th Sep 2020:multiple
server1:10:20:30:27th Sep 2020:multiple
server2:20:30:10:28th Sep 2020:single
server3:20:30:10:28th Sep 2020:single

What would these be?

server2:10:20:30:26th Sep 2020:?
server3:10:20:30:27th Sep 2020:?
server2:20:30:10:27th Sep 2020:?
server2:20:30:10:28th Sep 2020:?
server3:20:30:10:28th Sep 2020:?

Note that the last two are the same as yours, but would this new data change their occurrence value?


jiaqya
Builder

Except for the _time field, all fields have the same values. That is the criteria.

In other words, we can just ignore the _time field when determining similar rows.

If there is more than one similar row, I would like to mark (eval) them as multiple rows of the same data.

Otherwise, if there are no multiple occurrences of the same data row, show it as single.

The last two are different, as one is server2 and the other is server3.

Hope I was clear in my explanation.

 

expected output:

servername:data1:data2:data3:_time:occurrence
server1:10:20:30:25th Sep 2020:multiple
server1:10:20:30:26th Sep 2020:multiple
server1:10:20:30:27th Sep 2020:multiple
server2:20:30:10:28th Sep 2020:single
server3:20:30:10:28th Sep 2020:single
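Given that criteria (every field except _time must match), a minimal sketch of a direct alternative, not taken from the thread, would be to count by the matching fields themselves so no concatenated key is needed (occ_count is an illustrative field name):

| eventstats count AS occ_count BY servername data1 data2 data3 ```occ_count is an illustrative name```
| eval occurrence = if(occ_count=1, "single", "multiple")
| fields - occ_count

Counting BY the fields directly also sidesteps the delimiter issue raised further down in the thread.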


aa70627
Communicator

How about something like this, where you count up the number of rows in which your unique field occurs? In this example, I've made your unique field a combination of servername, data1, data2, and data3.

 

| makeresults
| eval servername="one",data1="10",data2="20",data3="30",_time="09-02-2020"
| append
[| makeresults
| eval servername="one",data1="10",data2="20",data3="30",_time="09-01-2020"]
| append
[| makeresults
| eval servername="one",data1="10",data2="20",data3="30",_time="08-31-2020"]
| append
[| makeresults
| eval servername="two",data1="1",data2="2",data3="3",_time="09-02-2020"]
| append
[| makeresults
| eval servername="three",data1="1",data2="20",data3="30",_time="09-02-2020"]
| table servername,data1,data2,data3,_time
| eval uniquefield = servername.data1.data2.data3 ```combine the matching fields into a single key```
| eventstats count by uniquefield ```count rows sharing that key without collapsing them```
| eval occurrence = if(count=1, "single","multiple")
| fields - count, uniquefield

 


jiaqya
Builder

Thanks, this works perfectly fine for my case.

I'm already coming up with unique values for those fields before I use your solution, so they will be similar in any case.

Thank you.


ITWhisperer
SplunkTrust

@aa70627 Good idea, although you should probably include a delimiter that doesn't appear in the adjacent fields.

If data1=12 and data2=345 in one row, and data1=123 and data2=45 in another, you wouldn't be able to distinguish data1.data2 sufficiently; both concatenate to 12345.
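A minimal sketch of that delimiter fix (the pipe character is an illustrative choice; any character guaranteed not to appear in the data works):

| eval uniquefield = servername . "|" . data1 . "|" . data2 . "|" . data3 ```"|" is an illustrative delimiter```
| eventstats count by uniquefield
| eval occurrence = if(count=1, "single", "multiple")
| fields - count, uniquefield

With the delimiter, the two rows above produce the distinct keys 12|345 and 123|45.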


jiaqya
Builder

The data is already unique when the query below is run, so it works perfectly fine for my case:

 

| eval uniquefield = servername.data1.data2.data3
| eventstats count by uniquefield
| eval occurrence = if(count=1, "single","multiple")
| fields - count, uniquefield
