Splunk Search

Transaction based on a field

Path Finder

I have a database that stores a separate event every time someone starts or stops a task, and includes several fields, one of which is their time logged in. I want to calculate the time they've spent on the task not using actual wall-clock time, but based on how long they have been logged into the system, stored as an integer count of minutes. I asked a similar question before for actual time, and came up with this search:

index=zos_gameplay task="*" NOT (action="2" OR action="3")
| transaction user_id, task startswith="action=0" endswith="action=1" maxevents=2
| eval duration=round(duration/60, 4)
| stats min(duration), median(duration), max(duration) by task_name
| rename task_name AS "Task Name", min(duration) AS "Shortest Time to Complete", median(duration) AS "Median Time to Complete", max(duration) AS "Longest Time to Complete"
| sort - "Median Time to Complete"

Is there a way to manipulate transaction to calculate duration based on a field instead of the timestamp? Or is there a different way to go about this?

The field I want to use is called logged_time.


Legend

You could trick transaction into reading "your" time instead of the event's actual time simply by modifying the _time field.

... | eval _time=logged_time
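Building on that idea, a fuller sketch (untested; it assumes logged_time is in minutes, so it is scaled to seconds before overwriting _time, and re-sorted so transaction sees events in descending time order):

```
index=zos_gameplay task="*" NOT (action="2" OR action="3")
| eval _time=logged_time*60
| sort 0 - _time
| transaction user_id, task startswith="action=0" endswith="action=1" maxevents=2
| eval duration=round(duration/60, 4)
```

Since transaction computes duration as the difference between the earliest and latest _time in each group, this substitution should yield the difference in logged_time (in seconds after the *60 scaling, back to minutes after the /60).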

Path Finder

Sorry, I should have been more specific. We are using an integer field to store logged_time in minutes. Since this is a count of how much time they have spent logged in, it is understandably very small compared to UNIX time...


Legend

OK, so your time field is not... a valid time field? I can't say exactly how that would affect performance, but it's probably a very good idea to make sure your time field is something Splunk can actually interpret (meaning it should be a valid UNIX epoch value).
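One way to do that conversion (a sketch, untested: it anchors the minute counts to an arbitrary recent base epoch so the synthetic timestamps look plausible to Splunk; relative_time(now(), "@d") is just today's midnight):

```
... | eval _time=relative_time(now(), "@d") + logged_time*60
| sort 0 - _time
| transaction user_id, task startswith="action=0" endswith="action=1" maxevents=2
| eval duration=round(duration/60, 4)
```

Because transaction's duration is a difference of _time values, the constant base cancels out and the result is still the logged_time gap in minutes.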


Path Finder

Well, it's working, but it's extremely slow... probably because Splunk is interpreting the values as UNIX time, so the events all appear to be back in 1970.
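A different route I may try (a sketch, untested; it assumes action=0 marks a start, action=1 the matching stop, and that every event carries logged_time in minutes) is to skip transaction and _time manipulation entirely, pairing each stop with the preceding start via streamstats and subtracting the logged_time values directly:

```
index=zos_gameplay task="*" (action="0" OR action="1")
| sort 0 user_id task _time
| streamstats current=f window=1 last(logged_time) AS start_logged_time by user_id, task
| where action="1"
| eval duration=logged_time - start_logged_time
| stats min(duration), median(duration), max(duration) by task_name
```

streamstats with current=f and window=1 carries forward only the immediately preceding event's logged_time within each user/task group, so duration comes out in minutes without any timestamp tricks.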
