Splunk Search

How to deal with Logon/Logoff scattered over multiple events

jroedel
Path Finder

I have onboarded data from a system that scatters a single actual event over multiple logging events. Successful and failed logins in particular cause me some headache.

Successful login:
<timestamp> Connection 'id123' from '192.168.1.100' has logged onto the server

<timestamp> User 'johndoe' logged on (Connection id='id123')

[Time passes until John eventually decides to log off again]

<timestamp> Connection 'id123' from has logged off the server

Failed login:
<timestamp> Connection 'id123' from '192.168.1.100' has logged onto the server

<timestamp> Connection 'id123' from has logged off the server
 
Of course, I can fiddle around with transaction or stats to list successful and failed logins or to create an alert for them. However, that is not exactly elegant.
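Something like this (just a sketch; connection_id, user and src would have to be extracted first, and index/sourcetype are placeholders) does stitch the pieces back together, but it does not feel right:

index=my_index sourcetype=my_sourcetype
| transaction connection_id startswith="has logged onto the server" endswith="has logged off the server"
| eval action=if(isnull(user), "failure", "success")
| table _time connection_id user src action duration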

What is the best practice to get this data nicely streamlined with eventtypes and tags?


gcusello
SplunkTrust

Hi @jroedel ,

You can create an eventtype for login and one for logout, and tag these eventtypes with related tags so that you can use them in your searches.

But what is your requirement? What do you need to receive as a result?

Ciao.

Giuseppe


jroedel
Path Finder

Maybe I just do not see it: how would I apply an event type to a successful login event that is scattered over multiple log entries?

My requirement is to achieve CIM compliance for this data source.


PickleRick
SplunkTrust

An event type cannot "merge" multiple events. As simple as that. So either process your data prior to ingestion so that you have a single login event containing all the interesting fields, or do summary indexing and create synthetic events after ingesting the original events.
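If you go the summary-indexing route, a rough sketch of such a search could look like this (the field names, regexes and summary index name are assumptions based on your sample events, not a finished implementation) - it builds one synthetic event per connection and writes it to a summary index:

index=my_index sourcetype=my_sourcetype ("has logged onto the server" OR "logged on" OR "has logged off the server")
| rex "Connection (id=)?'(?<connection_id>[^']+)'"
| rex "User '(?<user>[^']+)'"
| rex "from '(?<src>[^']+)'"
| stats min(_time) as _time values(user) as user values(src) as src by connection_id
| eval action=if(isnull(user), "failure", "success")
| collect index=my_login_summary sourcetype=stash

The synthetic events in the summary index then carry user, src and action in a single event, and that is what you can point your eventtypes and tags at.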

jroedel
Path Finder

Thanks for clarifying, @PickleRick. So what would be the best practice for creating such synthetic events? A scheduled search every 5 (or so) minutes? If yes, how do I deal with:

- search head (SH) downtimes
- logins where only one of the two events needed for a successful login falls into the search time range, while the other one falls into the time range of the previous run of the scheduled search


PickleRick
SplunkTrust

1. To deal with possible outages, schedule the search with a continuous schedule - that means the search is dispatched for every contiguous time period, so missed periods are caught up after a downtime. See

https://docs.splunk.com/Documentation/Splunk/latest/Report/Configurethepriorityofscheduledreports#Ch...

2. For this you'd typically use a longer search window (and, depending on your data, you may want to search with a slight delay to account for data ingestion latency). But as with any search (not just a summary-building one), if some of your data lies outside of your search range, you won't find it 🙂
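Put together in savedsearches.conf, that might look roughly like this (the stanza name, cron schedule, time window and summary index name are assumptions; realtime_schedule = 0 is what enables continuous scheduling, and the summary index action is used here instead of an explicit | collect):

[build_login_summary]
enableSched = 1
cron_schedule = */5 * * * *
# window ends 5 minutes in the past to allow for ingestion latency
dispatch.earliest_time = -10m@m
dispatch.latest_time = -5m@m
# continuous scheduling: missed periods are caught up after an outage
realtime_schedule = 0
action.summary_index = 1
action.summary_index._name = my_login_summary
search = index=my_index sourcetype=my_sourcetype ("has logged onto the server" OR "logged on" OR "has logged off the server") | rex "Connection (id=)?'(?<connection_id>[^']+)'" | rex "User '(?<user>[^']+)'" | rex "from '(?<src>[^']+)'" | stats min(_time) as _time values(user) as user values(src) as src by connection_id | eval action=if(isnull(user), "failure", "success")

Each 5-minute run covers a window that ended 5 minutes earlier, so events that arrive with up to 5 minutes of indexing delay are still picked up by the run that covers their timestamp.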

gcusello
SplunkTrust

Hi @jroedel ,

OK, you have to create eventtypes for login, logout and logfail, and add the tag "authentication" to each of them.

You should try the Add-On Builder app (https://splunkbase.splunk.com/app/2962) or the CIM-Vladiator app (https://splunkbase.splunk.com/app/2968), which help you with field aliases, calculated fields and tagging.

I usually use the second one.

Ciao.

Giuseppe


jroedel
Path Finder

Let's for now focus on a *successful* login. As shown in my initial post, there are multiple events for the same successful login: one carries the username, the other carries the source IP.

On which one should I set the event type and tag? And how do I enrich that event with the field from the other one?


gcusello
SplunkTrust

Hi @jroedel ,

Eventtypes and tags aren't related to the fields:

first you have to create an eventtype for the login, called e.g. "my_technology_login":

index=my_index sourcetype=my_sourcetype ("has logged onto the server" OR "logged on")

and tag it with "authentication" (required by CIM) and "login".

Then one for the logout:

index=my_index sourcetype=my_sourcetype (logoff OR "has logged off the server")

and tag it with "authentication" (required by CIM) and "logout".

The last sample doesn't seem to be a logfail event; please check it and handle it in the same way as the others.
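In eventtypes.conf and tags.conf this would look roughly like the following (the stanza names and the login/logout tags are only examples; the authentication tag is the one CIM looks for):

eventtypes.conf:

[my_technology_login]
search = index=my_index sourcetype=my_sourcetype ("has logged onto the server" OR "logged on")

[my_technology_logout]
search = index=my_index sourcetype=my_sourcetype (logoff OR "has logged off the server")

tags.conf:

[eventtype=my_technology_login]
authentication = enabled
login = enabled

[eventtype=my_technology_logout]
authentication = enabled
logout = enabled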

Then you have to extract the user and src fields using regexes.
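Based on your sample events, the search-time extractions in props.conf could look something like this (the regexes are guesses from the samples and may need adjusting to the real format):

[my_sourcetype]
EXTRACT-user = User '(?<user>[^']+)'
EXTRACT-src = from '(?<src>[^']+)'
EXTRACT-connection_id = Connection (id=)?'(?<connection_id>[^']+)'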

Ciao.

Giuseppe
