Splunk Search

Join consecutive events in same index

New Member

Hi,
I thought this would be easy but no!
I'm running the query below on the sample data below, but the Filetime_End value I get is 15:29:00 for every line (I don't have enough points to attach images or post links, sorry).
What am I doing wrong? Also, is there a better way to do this without a join?

Cheers guys.

index="file-times" State=Start | table Service FileTime State | rename Service as Service_Start FileTime as Filetime_Start, State as State_Start
| join Service type=outer [search index="file-times" State=END | table FileTime State | rename FileTime as Filetime_End, State as State_End]

Sample Log/Index Data:
23/07/2019 09:34:00, Service=RR, FileTime=9:33:45, State=START
23/07/2019 10:31:00, Service=RR, FileTime=10:30:45, State=END
23/07/2019 11:01:00, Service=HHR, FileTime=11:00:32, State=START
23/07/2019 11:31:00, Service=HHR, FileTime=11:30:32, State=END
23/07/2019 12:01:00, Service=LPE, FileTime=12:00:32, State=START
23/07/2019 12:55:00, Service=LPE, FileTime=12:54:32, State=END
23/07/2019 12:01:00, Service=SMURF, FileTime=12:00:32, State=START
23/07/2019 13:01:00, Service=SMURF, FileTime=13:00:00, State=END
23/07/2019 14:00:00, Service=TEST, FileTime=14:05:00, State=START
23/07/2019 14:30:00, Service=TEST, FileTime=14:29:00, State=END
23/07/2019 15:00:00, Service=TEST1, FileTime=15:05:00, State=START
23/07/2019 15:30:00, Service=TEST1, FileTime=15:29:00, State=END


Esteemed Legend

Like this:

| makeresults 
| eval raw="time=23/07/2019T09:34:00,Service=RR,FileTime=9:33:45,State=START time=23/07/2019T10:31:00,Service=RR,FileTime=10:30:45,State=END time=23/07/2019T11:01:00,Service=HHR,FileTime=11:00:32,State=START time=23/07/2019T11:31:00,Service=HHR,FileTime=11:30:32,State=END time=23/07/2019T12:01:00,Service=LPE,FileTime=12:00:32,State=START time=23/07/2019T12:55:00,Service=LPE,FileTime=12:54:32,State=END time=23/07/2019T12:01:00,Service=SMURF,FileTime=12:00:32,State=START time=23/07/2019T13:01:00,Service=SMURF,FileTime=13:00:00,State=END time=23/07/2019T14:00:00,Service=TEST,FileTime=14:05:00,State=START time=23/07/2019T14:30:00,Service=TEST,FileTime=14:29:00,State=END time=23/07/2019T15:00:00,Service=TEST1,FileTime=15:05:00,State=START time=23/07/2019T15:30:00,Service=TEST1,FileTime=15:29:00,State=END"
| makemv raw
| mvexpand raw
| rename raw AS _raw
| kv
| eval _time = strptime(time, "%d/%m/%YT%H:%M:%S")
| sort 0 - _time

| rename COMMENT AS "Everything above generates sample event data; everything below is your solution"

| reverse
| stats list(*) AS * BY Service

New Member

Hi woodcock,
Sorry mate, I'm pretty new at this. Could you explain a little what your query does, please?
It looks like you're creating some raw data to query, but in my case this would come from a daily log file.
Cheers
Nathan


Esteemed Legend

See line 10? It explains it. You replace everything through line 10 with your main search and then use lines 12-13 to do what you were asking.
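In other words, against your real index the sample-data block falls away and the solution part is just something like this (a sketch, assuming your events extract Service, FileTime, and State the same way the sample does):

index="file-times"
| reverse
| stats list(*) AS * BY Service

The reverse puts events in chronological order (searches return newest-first), so list() collects each Service's START values before its END values.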



Champion

Try this!

index="file-times"
| stats max(FileTime) as Filetime_End, min(FileTime) as Filetime_Start by Service
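One caveat, since FileTime in the sample data is a string like 9:33:45: max and min compare non-numeric strings lexicographically, so 9:33:45 would sort after 10:30:45. A sketch that converts to epoch seconds first (assuming FileTime shares the event's date):

index="file-times"
| eval ft=strptime(strftime(_time, "%d/%m/%Y ") . FileTime, "%d/%m/%Y %H:%M:%S")
| stats min(ft) as start, max(ft) as end by Service
| eval Filetime_Start=strftime(start, "%H:%M:%S"), Filetime_End=strftime(end, "%H:%M:%S")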

You can also use the transaction command.
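A sketch of the transaction approach (field names assumed from the sample data; within a transaction, multivalue fields keep chronological order, so the first FileTime is the START value):

index="file-times"
| transaction Service startswith="State=START" endswith="State=END"
| eval Filetime_Start=mvindex(FileTime, 0), Filetime_End=mvindex(FileTime, 1)
| table Service Filetime_Start Filetime_End duration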


New Member

Thanks! I will try this. The production index will have multiple days' data in it. To get the min and max FileTime by Service for each day, do I just add a date field to the end of the query, i.e. “by Service, Date” instead of just “by Service”?


Champion

index="file-times"
| eval date=strftime(_time, "%d/%m/%Y")
| stats max(FileTime) as Filetime_End, min(FileTime) as Filetime_Start by Service, date
