Splunk Search

totals for a transaction

David_Hodgson
Engager

I have a system with customers interacting with a catalogue, stepping through the menus, searching etc. I can chunk these into transactions using user ID & time period (max 7 mins, max pause 1 min).

I've got as far as combining the numbers into a count of each type of system request per transaction, plus the server resources used by each request type, all on a single row (using transaction with mvlist=t, then mvzip -> mvexpand as explained in the mvexpand docs at http://docs.splunk.com/Documentation/Splunk/6.3.3/SearchReference/Mvexpand, then chart over _time by system request type).
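
Roughly what I have so far (a simplified sketch of that step; api is the request type and process_micros the resource cost):

sourcetype=requests source="*STATUS*"
   | transaction client_ip_address maxspan=7m maxpause=1m mvlist=t
   | eval munge=mvzip(api, process_micros, "|") | fields - api, process_micros
   | mvexpand munge
   | eval api=replace(munge, "\|.*$", "")
   | eval process_micros=replace(munge, "^.*\|", "")
   | chart count sum(process_micros) AS process_micros over _time by api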

I'm stuck on 2 final steps:
- how to carry exactly one copy of the duration & eventcount from the transaction through to the final row
- how to create total_count & total_process_time from the count* & process_time* fields across the row

Can anyone point me at the right answer?

Thanks
David

somesoni2
Revered Legend

Give this a try

sourcetype=requests source="*STATUS*"
   | transaction client_ip_address maxspan=7m maxpause=1m keeporphans mvlist=t
   | eval client_ip_address=mvindex(client_ip_address,0)
   | eval munge=mvzip(api, process_micros, "|") | fields - api, process_micros
   | mvexpand munge
   | rex field=munge "(?<api>.+)\|(?<process_micros>.+)"
   | eval munge=client_ip_address."|"._time."|".duration
   | chart count sum(process_micros) AS process_micros OVER munge BY api
   | rex field=munge "(?<client_ip_address>.+)\|(?<time>\d+)\|(?<duration>\d+)"
   | table _time, client_ip_address, duration, eventcount, count*, process_micros*
   | addtotals count* fieldname=total_count | addtotals process_micros* fieldname=total_process_micros 
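
The trick here is packing client_ip_address, _time and duration into the munge key before the chart, so exactly one copy of the transaction-level fields survives the aggregation; the rex afterwards splits them back out into their own fields. The two addtotals at the end then produce the row-wise total_count and total_process_micros across the count* and process_micros* columns.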

David_Hodgson
Engager

Perfect, apart from the rex needing "\d+\.\d+" as the match for time and duration.

Thanks

David_Hodgson
Engager

Minor correction: the duration match needs to be \d+(\.\d+)? (with the fractional part optional), as singletons have a duration of 0.
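
Putting both fixes together, the rex line becomes something like:

   | rex field=munge "(?<client_ip_address>.+)\|(?<time>\d+\.\d+)\|(?<duration>\d+(\.\d+)?)"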

somesoni2
Revered Legend

Can you provide your full query, plus the current and expected output fields?

David_Hodgson
Engager

The information in each log record is: timestamp, client_ip_address, api, process_micros.

sourcetype=requests source="*STATUS*"
  | transaction client_ip_address maxspan=7m maxpause=1m keeporphans mvlist=t
  | eval client_ip_address=mvindex(client_ip_address,0)
  | eval munge=mvzip(api, process_micros, "|") | fields - api, process_micros
  | mvexpand munge
  | eval api=replace(munge, "\|.*$", "")
  | eval process_micros=replace(munge, "^.*\|", "")
  | eval munge=client_ip_address. "|" . _time
  | chart count sum(process_micros) AS process_micros OVER munge BY api
  | eval client_ip_address=replace(munge, "\|.*$", "")
  | eval _time=replace(munge, "^.*\|", "")
  | table _time, client_ip_address, duration, eventcount, count*, process_micros*, total_count, total_process_micros

Missing and required are: total_count = sum(count*) for the transaction, total_process_micros = sum(process_micros*), and the duration from the transaction.
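
For the two totals, something like addtotals over the wildcarded columns (as in the accepted answer above) should do it:

   | addtotals count* fieldname=total_count
   | addtotals process_micros* fieldname=total_process_micros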

woodcock
Esteemed Legend

I hate to be so unimaginative, but will you provide a sample of the output as it exists after this command is run?
