Splunk Enterprise

How to generate one notable for multiple events?

st1
Path Finder

This is the correlation search I currently have:

index=honeypot sourcetype=cowrie 
| table _time, username, src_ip, eventid, message 
| where eventid!="cowrie.log.closed" 
| where src_ip!="10.11.13.29"

 

 


 

Example events:

_time                 username  src_ip        eventid              message
2023-03-22 14:25:43             10.12.8.180   hny.command.input    CMD: exit
2023-03-22 14:25:41   root      10.12.8.180   hny.login.success    login attempt [root/admin] succeeded
2023-03-22 14:25:38             10.12.8.180   hny.session.connect  New connection: 10.12.8.180:2303 (10.11.131.199:2222) [session: 520a4f7b0870]
2023-03-22 14:25:00             10.12.8.180   hny.command.input    CMD:
2023-03-22 14:25:00             10.12.8.180   hny.command.input    CMD:

 

The correlation search runs every hour and, for the example events shown above, it generates five identical notables (one per event). How can I get only one notable per hour? I tried using stats to count by src_ip, but that only returns results that have a username.


yeahnah
Motivator

Hi @st1 

Not having a username, or the username being null, will not stop stats from counting the result rows by src_ip, so maybe there was something wrong with your original query.

Anyway, here's a run-anywhere example, using the sample events you provided, that groups the results by src_ip. It includes an option to fill in a null username, but this is not required.

| makeresults 
| eval _raw="time,username,src_ip,eventid,message
2023-03-22 14:25:43,,10.12.8.180,hny.command.input,CMD: exit
2023-03-22 14:25:41,root,10.12.8.180,hny.login.success,login attempt [root/admin] succeeded
2023-03-22 14:25:38,,10.12.8.180,hny.session.connect,New connection: 10.12.8.180:2303 (10.11.131.199:2222) [session: 520a4f7b0870]
2023-03-22 14:25:00,,10.12.8.180,hny.command.input,CMD:
2023-03-22 14:25:00,,10.12.8.180,hny.command.input,CMD:"
| multikv forceheader=1
| eval _time=strptime(time, "%F %T")
| table _time username src_ip eventid message
 ``` create dummy events above ```
 ``` do the SPL below ```
| fillnull username value="null"
| eval time=strftime(_time, "%F %T")
| stats list(*) AS * BY src_ip


Side note, it's generally more efficient to filter out data in the base search, e.g.

index=honeypot sourcetype=cowrie NOT (eventid="cowrie.log.closed" OR src_ip="10.11.13.29")
| fillnull username value="null"
| eval time=strftime(_time, "%F %T")
| stats list(*) AS * BY src_ip
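
One more thought, since your correlation search runs hourly: if a single run can ever cover more than one hour of data, you could bucket _time into one-hour spans before the stats, so you get at most one result row (and therefore one notable) per src_ip per hour. An untested variation on the search above:

index=honeypot sourcetype=cowrie NOT (eventid="cowrie.log.closed" OR src_ip="10.11.13.29")
| fillnull username value="null"
| eval time=strftime(_time, "%F %T")
 ``` group events into one-hour buckets ```
| bin _time span=1h
 ``` one result row per src_ip per hourly bucket ```
| stats list(*) AS * BY _time src_ip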

 Hope this helps

