Transaction with MVexpand

Bulluk
Path Finder

Hi

I previously asked this question and marked it as answered following eelisio2's response.

http://splunk-base.splunk.com/answers/37075/calculating-page-read-time

Unfortunately I've been playing with this and have worked out that it returns the duration of the entire transaction rather than the time for each step within it. I think the mvexpand statement is failing, so the delta statement can only operate on the whole transaction. I found the link below, which appears to solve my problem, but I'm struggling to understand the logic well enough to merge it into my existing query.

http://splunk-base.splunk.com/answers/4344/delta-between-timestamp-within-a-transaction-search

I'd be really grateful if someone could have a look at the link and explain how I can make it work with the following query:

sourcetype="iis" tidy
| eval steptime=_time
| transaction cs_username
| eval raw=split(_raw, "\n")
| mvexpand raw
| rename raw as _raw
| sort cs_username, -steptime
| streamstats count as seq by cs_username
| delta steptime as StepDuration
| eval StepDuration=abs(StepDuration)
| eval StepDuration=if(seq=1, 0, StepDuration)
| search StepDuration > 40 AND StepDuration < 1800
| eventstats mean(StepDuration) as ViewTime by cs_uri_stem
| convert timeformat="%M:%S" ctime(ViewTime) as ViewTime
| table cs_uri_stem, ViewTime
| sort -ViewTime

1 Solution

Bulluk
Path Finder

I eventually ended up with this:

sourcetype=iis
| eval etime=_time
| fields cs_username cs_uri_stem etime
| transaction cs_username
| eval tr_id=mvindex(_serial, 0)
| mvexpand etime
| streamstats current=f global=f window=1 last(etime) as letime by tr_id
| eval timediff=coalesce(etime-letime, 0)
| search timediff > 40 AND timediff < 1800
| rename cs_uri_stem as Page
| stats mean(timediff) as ViewTime by Page
| convert timeformat="%M:%S" ctime(ViewTime) as ViewTime
| sort -ViewTime

The logic is as follows:

  • Pass in your transactions. In my real search I clean up any service accounts and remove non-ASPX pages, but I've skipped all that for simplicity
  • Copy the time field, because the upcoming transaction command stamps every entry in a transaction with a single timestamp
  • Throw away the fields we don't need. I did this on the assumption that carrying them through the rest of the search would add processing time, but I'm not certain that's valid
  • The transaction command finds all events for each user
  • We then give each transaction an ID
  • Expand etime, as it's currently a multivalued field
  • streamstats works out the start and finish time of each event pair in each transaction
  • We then work out the difference between these to get the time between clicks
  • I chose to throw away anything under 40 seconds (as it's probably a user just clicking around) and over 30 minutes, i.e. 1800 seconds (as the user's probably left their browser open and gone to do other stuff)
  • Give the page a human-friendly name in readiness for the report
  • Now work out the mean view time of each page
  • Turn the view time into minutes and seconds
  • Sort by the length of time the page was viewed.
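For anyone more comfortable outside SPL, here's a rough Python sketch of the same per-transaction delta logic. The sample events, field names and thresholds are hypothetical stand-ins for the IIS fields (_time, cs_username, cs_uri_stem); note that in this sketch I credit each gap to the earlier page as its view time:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical click events: (epoch seconds, username, page).
events = [
    (100, "alice", "/home.aspx"),
    (160, "alice", "/report.aspx"),   # 60 s after /home.aspx -> kept
    (170, "alice", "/detail.aspx"),   # 10 s gap -> dropped (< 40 s)
    (100, "bob",   "/home.aspx"),
    (400, "bob",   "/report.aspx"),   # 300 s after /home.aspx -> kept
]

# Group each user's clicks into one "transaction", as `transaction cs_username` does.
by_user = defaultdict(list)
for t, user, page in sorted(events):
    by_user[user].append((t, page))

# For each consecutive pair of clicks, the gap is the view time of the
# earlier page -- the same delta that streamstats/timediff computes above.
view_times = defaultdict(list)
for clicks in by_user.values():
    for (t1, page1), (t2, _page2) in zip(clicks, clicks[1:]):
        gap = t2 - t1
        if 40 < gap < 1800:           # same thresholds as the SPL search
            view_times[page1].append(gap)

# Mean view time per page, like `stats mean(timediff) as ViewTime by Page`.
report = {page: mean(gaps) for page, gaps in view_times.items()}
print(report)
```

With the sample data above, only the two gaps of 60 s and 300 s survive the filter, so the report contains a single page with a 180-second mean.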

I've borrowed heavily from the links in my original question, and as I'm a Splunk newbie there may well be bugs.

