Splunk Search

totals for a transaction

David_Hodgson
Engager

I have a system with customers interacting with a catalogue, stepping through the menus, searching etc. I can chunk these into transactions using user ID & time period (max 7 mins, max pause 1 min).

I've got it as far as combining the numbers into a count of each type of system request by transaction, and the server resources used by system request type, all on a single line (using transaction with mvlist=t, then mvzip -> mvexpand as explained in the mvexpand documentation at http://docs.splunk.com/Documentation/Splunk/6.3.3/SearchReference/Mvexpand, then chart over _time by system request type).
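
To illustrate the mvzip -> mvexpand step on throwaway data (values made up, just to show how the pairing and re-splitting work):

   | makeresults
   | eval api=split("search,menu,detail", ","), process_micros=split("1200,340,5600", ",")
   | eval munge=mvzip(api, process_micros, "|")
   | mvexpand munge
   | rex field=munge "(?<api>.+)\|(?<process_micros>.+)"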

I'm stuck on 2 final steps:
- how to preserve one and only one copy of the duration & eventcount from the transaction into the final row
- how to create total_count & total_process_time from the count* & process_time* fields across the row

Can anyone point me at the right answer?

Thanks
David

1 Solution

somesoni2
Revered Legend

Give this a try

sourcetype=requests source="*STATUS*"
   | transaction client_ip_address maxspan=7m maxpause=1m keeporphans mvlist=t
   | eval client_ip_address=mvindex(client_ip_address,0)
   | eval munge=mvzip(api, process_micros, "|") | fields - api, process_micros
   | mvexpand munge
   | rex field=munge "(?<api>.+)\|(?<process_micros>.+)"
   | eval munge=client_ip_address. "|" . _time."|".duration
   | chart count sum(process_micros) AS process_micros OVER munge BY api
   | rex field=munge "(?<client_ip_address>.+)\|(?<time>\d+)\|(?<duration>\d+)"
   | table _time, client_ip_address, duration, eventcount, count*, process_micros*
   | addtotals count* fieldname=total_count | addtotals process_micros* fieldname=total_process_micros 
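
The munge field carries client_ip_address, _time and duration through the chart so exactly one copy of each survives per transaction row, and the rex afterwards splits them back out. The two addtotals at the end build the row totals; on throwaway data that part behaves roughly like this (column names invented to stand in for the chart output):

   | makeresults
   | eval count_search=3, count_menu=1, process_micros_search=4200, process_micros_menu=900
   | addtotals count* fieldname=total_count
   | addtotals process_micros* fieldname=total_process_micros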

David_Hodgson
Engager

Perfect, apart from the rex needing "\d+\.\d+" as the match for time and duration.

Thanks

David_Hodgson
Engager

Minor correction: the duration match needs to be \d+(\.\d+)? as singletons have a duration of 0, with no decimal part.
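
So the extraction line ends up as something like:

   | rex field=munge "(?<client_ip_address>.+)\|(?<time>\d+\.\d+)\|(?<duration>\d+(\.\d+)?)"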

somesoni2
Revered Legend

Can you provide your full query, and the current and expected output fields?

David_Hodgson
Engager

The information in each log record is: timestamp, client_ip_address, api, process_micros

sourcetype=requests source="*STATUS*"
  | transaction client_ip_address maxspan=7m maxpause=1m keeporphans mvlist=t
  | eval client_ip_address=mvindex(client_ip_address,0)
  | eval munge=mvzip(api, process_micros, "|") | fields - api, process_micros
  | mvexpand munge
  | eval api=replace(munge, "\|.*$", "")
  | eval process_micros=replace(munge, "^.*\|", "")
  | eval munge=client_ip_address. "|" . _time
  | chart count sum(process_micros) AS process_micros OVER munge BY api
  | eval client_ip_address=replace(munge, "\|.*$", "")
  | eval _time=replace(munge, "^.*\|", "")
  | table _time, client_ip_address, duration, eventcount, count*, process_micros*, total_count, total_process_micros

Missing and required are: total_count = sum(count*) for the transaction, total_process_micros = sum(process_micros*), and the duration from the transaction.

woodcock
Esteemed Legend

I hate to be so unimaginative, but will you provide a sample of the output as it exists after this command is run?
