Splunk Search

How do I search for "Log In" and "Log Out" events when they appear to have the same pattern in my logs?

LNebout
Path Finder

Hi everybody,
I have a problem searching a log file for the Log In and Log Out events.
Pattern:

Line 2 --> Mar 1 21:47:05 XDSauth: 1488433625 |ServiceHdlr.serviceTerminated |next service = X
Line 1 --> Mar 1 21:45:41 XDSauth: 1488433541 |ServiceHdlr.serviceTerminated |next service = X

(The number after "XDSauth:" is the epoch time; I converted the format with: | fieldformat Epoch_Time = strftime(Epoch_Time, "%F %T.%3N") )

I'm looking for a way to tag Line 1 as Log IN and Line 2 as Log OUT.

If anybody has an idea or advice, I'm more than interested ^_^
Thanks,
Laurent

1 Solution

woodcock
Esteemed Legend

OK, let's assume that login sessions are short. This means that if the distance between the last 2 events is less than 8 hours, then the last event is a logout and the one before is a login. If the distance between the last 2 is more than 8 hours, then the last event is a login and the one before is a logout.

Here is code that fakes some events for testing:

| makeresults count=3 
| eval raw="XDSauth: 1488433625 |ServiceHdlr.serviceTerminated |next service =1" 
| streamstats count AS _serial 
| eval _time = _time - _serial 
| append 
    [| makeresults count=5 
    | streamstats count AS _serial 
    | eventstats count AS _total 
    | eval raw="XDSauth: 1488433625 |ServiceHdlr.serviceTerminated |next service =" . round(_serial%(_total/2)) 
    | eval _time = _time - (3000 * _serial * _serial)] 
| rex field=raw "next service\s*=\s*(?<user>.*)$" 

Then add something like this (tune to fit) to assign the appropriate action values:

| eventstats max(_time) AS recent_time BY user 
| eval recent_time=if((_time=recent_time), recent_time, "NULL") 
| eval secondsSinceLastEvent = now() - recent_time 
| streamstats count(recent_time) AS _count count AS serial BY user 
| eval next_recent_time=if((_count=2), _time, null()) 
| eventstats first(recent_time) AS _first_recent_time first(next_recent_time) AS _first_next_recent_time BY user 
| eval secondsBetweenLastTwoEvents = _first_recent_time - _first_next_recent_time 
| eval action=case((serial>1), null(),
    (secondsSinceLastEvent > secondsBetweenLastTwoEvents), "Login",
    (secondsBetweenLastTwoEvents > (8*60*60)), "Login",
    true(), "Logout") 
| eventstats last(action) AS last_action by user 
| eval action=case(isnotnull(action), action,
    (((serial%2=1) AND (last_action="Login")) OR ((serial%2=0) AND (last_action="Logout"))), "Login",
    true(), "Logout")
| fields - serial seconds* last* *recent_time


LNebout
Path Finder

@woodcock
Thanks, I'll try this.
But I just received a new log file from the partner.... I hope it's not different.....
I'll try your solution and send feedback quickly.
Best regards,
Thank you very much
Laurent


woodcock
Esteemed Legend

The solution uses some logic to assign the most recent event a designation of either Login or Logout, and from there you just move backwards in time, flipping back and forth.


LNebout
Path Finder

That works perfectly, but with the new log file I don't use this system.

sourcetype=XDSauth | fieldformat Epoch_Time = strftime(Epoch_Time, "%F %T.%3N") | rex field=_raw ".*\d+\|.+\|\d+\|(?<fld_key>[^ ]+)\s.*" | where User="WIN7-007" | table User, Status, Epoch_Time, fld_key

User Status Epoch_Time fld_key
WIN7-007 ServiceHdlr.serviceTerminated 2017-03-02 14:47:05.000 145c414
WIN7-007 ServiceHdlr.serviceTerminated 2017-03-02 14:45:41.000 3a4822
WIN7-007 ConnectorSession.setNextServiceName 2017-03-02 14:45:39.000 3a4822

I want to do this:
IF ConnectorSession.setNextServiceName.fld_key = ServiceHdlr.serviceTerminated.fld_key THEN
Delete "WIN7-007 ServiceHdlr.serviceTerminated 2017-03-02 14:45:41.000 3a4822"

Result:
User Status Epoch_Time fld_key
WIN7-007 ServiceHdlr.serviceTerminated 2017-03-02 14:47:05.000 145c414

WIN7-007 ConnectorSession.setNextServiceName 2017-03-02 14:45:39.000 3a4822

Do you think it's possible to do that?

[ if that's possible, afterwards I'd do --> | eval FinalStatus = case(Status="ConnectorSession.setNextServiceName","LOGIN", Status="ServiceHdlr.serviceTerminated","LOGOUT") ]
(Note: using two separate eval/case statements would not work, because the second eval overwrites FinalStatus with null for the first Status value; the conditions must go in a single case.)
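One way this could be sketched in SPL (untested, and assuming the Status and fld_key fields are extracted as in the table above): mark every fld_key that also has a ConnectorSession.setNextServiceName event, then drop the ServiceHdlr.serviceTerminated events that share such a key.

    | eventstats count(eval(Status="ConnectorSession.setNextServiceName")) AS login_count BY fld_key
    | where NOT (Status="ServiceHdlr.serviceTerminated" AND login_count > 0)
    | fields - login_count

On the sample above, this would keep the 145c414 serviceTerminated event and the 3a4822 setNextServiceName event, and drop the 3a4822 serviceTerminated event.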


woodcock
Esteemed Legend

I am confused; we are deep enough down this trail that starting over merits an entirely new question. That is what I would do.


LNebout
Path Finder

Hi everybody,
Thanks for your answers.
The partner of my company sent me a new log file with more details.....
I do apologise for the inconvenience. I'm really sorry!!!
In the new log file, I have an event that defines the beginning:
LOG IN -- > Mar 1 21:45:41 XDSauth: 1488433541 |ConnectorSession.setNextServiceName |next service name = X
Mar 1 21:45:41 XDSauth: 1488433541 |ServiceHdlr.serviceTerminated |next service = X
LOG OUT --> Mar 1 21:47:05 XDSauth: 1488433625 |ServiceHdlr.serviceTerminated |next service = X
Where X is the user name.

But the problem is ServiceHdlr.serviceTerminated:
I have this event twice.
One comes right after ConnectorSession.setNextServiceName, and the next one defines the end of the connection.

If I use
startswith="ConnectorSession.setNextServiceName" endswith="ServiceHdlr.serviceTerminated"
The result is :
2017-03-02 14:45:39.000 ConnectorSession.setNextServiceName
2017-03-02 14:45:41.000 ServiceHdlr.serviceTerminated

But the end of the connection is : 2017-03-02 14:46:13.000

With this new log file it's much easier to define the beginning, but I still have a problem defining the end.

Maybe create a condition to select the second "ServiceHdlr.serviceTerminated" event?

Thanks
Laurent
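One possible condition, sketched in SPL (untested; assumes the Status and user fields are extracted, and results are sorted oldest-first, since streamstats counts in result order): number the ServiceHdlr.serviceTerminated events per user and keep only the second one as the end of the connection.

    | sort 0 _time
    | streamstats count(eval(Status="ServiceHdlr.serviceTerminated")) AS term_serial BY user
    | where Status="ConnectorSession.setNextServiceName" OR term_serial=2

With several sessions per user the counter would need resetting, e.g. via the streamstats reset_before option.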


DalJeanis
Legend

So, there is literally no difference between the login and logout records. That is a design or code error at the source (your company's partner). The login records should say something like "ServiceHdlr.serviceStarted".

1) Get rid of the startswith and endswith. Since the same wording is on both of them, you're only confusing Splunk.

2) Determine what a reasonable duration would be for a session, and set that as your maxspan value. Leave maxpause as the default (forever). Set keeporphans=true.

So, it looks like this -

   | transaction User maxevents=2 maxspan=2h keeporphans=true
   | eval StartTime=_time, EndTime=_time+duration
   | table User, StartTime, EndTime, duration
   | (reformat the Times and duration however you want them)

DalJeanis
Legend

It is highly unlikely that these events are actually a login and logout without additional data.

There is no userid or other data on what is being logged into or out of, how, or by whom.

What is the form of the data in the area marked X? What is the actual format of the _raw data?


LNebout
Path Finder

@DalJeanis
Hello Daljeanis,
Sorry , I forgot an information :
Mar 1 21:45:41 XDSauth: 1488433541 |ServiceHdlr.serviceTerminated |next service = X

next service = X --> user name

Log format --> raw data (Linux log)

A good challenge lol
Thanks,
Laurent


woodcock
Esteemed Legend

Are you sure that these are the correct log texts? They appear to be perfectly identical, with nothing to distinguish a login from a logout. How is this distinguished?


LNebout
Path Finder

Hi @woodcock ,

Yes, these are the correct logs.
That's the problem. lol
The only difference is the time. The module this data is drawn from was created by a partner of my company.
They sent me a sample log file, and I have only these lines with the user name to define the LOG IN and LOG OUT.

I tried this to isolate them:
sourcetype=XDSauth | fieldformat Epoch_Time = strftime(Epoch_Time, "%F %T.%3N") | transaction User maxevents=2 startswith="ServiceHdlr.serviceTerminated" endswith="ServiceHdlr.serviceTerminated" | eval t=Epoch_Time+duration | convert timeformat="%y-%m-%d %H:%M:%S" ctime(t) AS TIME | table User,TIME

Result:
User name Time
007 17-03-02 14:47:05
007 17-03-02 14:45:41

Best regards
Laurent
