Splunk Search

How to Capture & Compare _time from two searches using append?

sjringo
Path Finder

I am performing two searches in an attempt to calculate a duration, but am having some issues.

Here is what I have working so far. I'm getting results, but they appear in two different rows; I was expecting them to be in one row so they could be used to calculate the duration.

What am I missing...

index=anIndex sourcetype=aSourceType (aString1 AND "START of script")
| eval startTimeRaw=_time
| append [search index=anIndex sourcetype=aSourceType (aString1 AND "COMPLETED OK")
  | eval endTimeRaw=_time ]
| table startTimeRaw, endTimeRaw
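One way to collapse the two appended rows into one is to aggregate after the append (a sketch based on the search above, assuming a single start/end pair in the selected time range):

```
index=anIndex sourcetype=aSourceType (aString1 AND "START of script")
| eval startTimeRaw=_time
| append [search index=anIndex sourcetype=aSourceType (aString1 AND "COMPLETED OK")
  | eval endTimeRaw=_time ]
| stats values(startTimeRaw) as startTimeRaw values(endTimeRaw) as endTimeRaw
| eval duration = endTimeRaw - startTimeRaw
| table startTimeRaw, endTimeRaw, duration
```

append runs the subsearch separately and stacks its rows below the main results, which is why the two times land on different rows until something like stats merges them.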

1 Solution

johnhua
Builder

Group it by date.

index=anIndex sourcetype=aSourcetype aJobName ("START of script" OR "COMPLETED OK")
| eval startTimeRaw = if (match(_raw, "START of script"), _time, null()), endTimeRaw = if (match(_raw, "COMPLETED OK"), _time, null())
| eval event_date=strftime(_time,"%Y-%m-%d")
| stats range(_time) as duration values(startTimeRaw) as startTimeRaw values(endTimeRaw) as endTimeRaw by event_date host
| eval duration = tostring(duration, "duration")
| table event_date, host, duration
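Since the eventual goal is to chart duration over time, the same grouping idea feeds timechart directly (a sketch reusing the search above; range(_time) equals start-to-end when exactly one run per day matches):

```
index=anIndex sourcetype=aSourcetype aJobName ("START of script" OR "COMPLETED OK")
| timechart span=1d range(_time) as duration by host
```

timechart bins by _time automatically, so the separate strftime/event_date step is not needed for the chart itself.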

 

 


FelixLeh
Communicator

You could try the following:

index=anIndex sourcetype=aSourceType
| transaction transaction_identifier startswith="START" endswith="COMPLETED OK"
| table _time, duration

 

The transaction_identifier has to be a value that is found in all relevant events and specific to that instance.
For example, if you want to track a purchase then the transaction_identifier could be the ID of the transaction.
Or if you want to track the session of a user you could use the username or the user's ID as the transaction_identifier.
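As a sketch, if the events carried a hypothetical session_id field, the transaction could be keyed on it (transaction also computes a duration field automatically, spanning from the first to the last event in each group):

```
index=anIndex sourcetype=aSourceType
| transaction session_id startswith="START of script" endswith="COMPLETED OK"
| table session_id, _time, duration
```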


sjringo
Path Finder

For this example there is no unique piece of data that identifies the start/end times on a per-day basis. It is the same job that runs every day, with the same job name and the same start and stop entries each day.

Is there a way to make the date the unique identifier, as I eventually want to show results by day over time?

I have this working when I run it for a time range of 'today', but it does not work when I run it for more than one day, e.g. 'week to date'.

Your example was missing 'aJobName', as I am scanning many .log files for many jobs but am currently only interested in one, just to get something working.

index=anIndex sourcetype=aSourcetype aJobName ("START of script" OR "COMPLETED OK")
| eval startTimeRaw = if(match(_raw, "START of script"), _time, null()), endTimeRaw = if(match(_raw, "COMPLETED OK"), _time, null())
| stats range(_time) as duration values(startTimeRaw) as startTimeRaw values(endTimeRaw) as endTimeRaw by host
| eval duration = tostring(duration, "duration")
| table host, duration

The results for a time range of today returns results like:

Host Duration
server1 00:17:41

When I run for week to date I get the following results:
Host Duration
server1 1+00:52:02

I currently only have two days of data in Splunk, but I think what is happening is that each query is returning two events, which is also why my results have one row instead of two (one for 9/23 and one for 9/22). I don't know how to change the above query, or the approach, to show duration on a daily basis, which I eventually want to feed into a timechart to show duration over time.


isoutamo
SplunkTrust

As @FelixLeh said, usually there must be some transaction identifier which binds those events to the same logical event. You said that in this case it is the day, so we can do it this way:

index=anIndex sourcetype=aSourceType aString1 ("START of script" OR "COMPLETED OK")
| eval startTimeRaw = if (match(_raw, "START of script"), _time, null()),
       endTimeRaw   = if (match(_raw, "COMPLETED OK"), _time, null())
| bin span=1d _time as trcId
| stats range(_time) as duration values(startTimeRaw) as startTimeRaw values(endTimeRaw) as endTimeRaw by trcId
| eval durationHuman = tostring (duration, "duration")
| table trcId, startTimeRaw, endTimeRaw, durationHuman

This divides the events into 1-day slots and, within each slot, calculates the duration between the start and end events. You probably need to check that duration > 0 and, if needed, take abs() of it to get the sign right.
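That check could look like this (a sketch; min/max make the subtraction explicit, and only slots containing both a start and an end event keep a duration):

```
index=anIndex sourcetype=aSourceType aString1 ("START of script" OR "COMPLETED OK")
| eval startTimeRaw = if(match(_raw, "START of script"), _time, null()),
       endTimeRaw   = if(match(_raw, "COMPLETED OK"), _time, null())
| bin span=1d _time as trcId
| stats min(startTimeRaw) as startTimeRaw max(endTimeRaw) as endTimeRaw by trcId
| eval duration = if(isnotnull(startTimeRaw) AND isnotnull(endTimeRaw), abs(endTimeRaw - startTimeRaw), null())
| eval durationHuman = tostring(duration, "duration")
| table trcId, durationHuman
```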



sjringo
Path Finder

Two follow-up questions.

1) Even though you suggest not using append, why does it not work? I have a working example using appendcols and assumed append would work similarly. In the appendcols example I use " | stats count as X " for the first query, then " | stats count as Y " for the second, and then use X & Y to calculate a percentage. But for this example I am trying to use eval instead of stats.

2) The example makes sense up to "by <some id for transaction>". What am I trying to reference here? From the search I have two events and two variables (startTimeRaw & endTimeRaw).
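For comparison, the appendcols pattern described in question 1 works because each side is first reduced to a single row by stats, so appendcols can glue the columns side by side; plain append instead stacks the subsearch rows below the main results. A sketch (the search strings are hypothetical):

```
index=anIndex sourcetype=aSourceType "condition A"
| stats count as X
| appendcols [search index=anIndex sourcetype=aSourceType "condition B"
  | stats count as Y ]
| eval pct = round(100 * X / Y, 2)
```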


isoutamo
SplunkTrust

Hi

You should avoid using append if possible. In this case you could try e.g.:

index=anIndex sourcetype=aSourceType aString1 ("START of script" OR "COMPLETED OK")
| eval startTimeRaw = if (match(_raw, "START of script"), _time, null()),
       endTimeRaw   = if (match(_raw, "COMPLETED OK"), _time, null())
| stats range(_time) as duration values(startTimeRaw) as startTimeRaw values(endTimeRaw) as endTimeRaw by <some id for transaction>
| eval durationHuman = tostring (duration, "duration")
| table startTimeRaw, endTimeRaw, durationHuman

In the by clause, use the field which identifies your transaction.

r. Ismo 
