Splunk Search

Combining And Analyzing Stats Across Events by Other Fields

duggym122
Loves-to-Learn

tl;dr I want to take a list of events and, for each "transaction_id", separately sum the fields "message_accounts" (accounts processed in the event) and "message_processing" (time taken to process). That gives two composite values per transaction_id across however many chunks the message was split into, so that I can bucket/bin the summed message_accounts and chart the corresponding average of the summed message_processing across these families of events.

I have messages that show sub-totals of processing time for split-off chunks of a larger message, identified by a field called "transaction_id" 

For example, our service accepts consolidated messages from another service (anywhere from 1 to thousands of combined message units) and splits them into chunks no larger than 100. Each chunk retains the "transaction_id" of the source message, so the id stays unique to the original message even after it is split into more manageable pieces for parallel processing.


richgalloway
SplunkTrust

This use case is not clear.  Please share some sample (sanitized) data, the SPL you've tried, the actual results, and the desired results.


duggym122
Loves-to-Learn

The source system produces messages that contain a field "transaction_id", which is a UUID. Each message contains data about some unknown number of accounts (that data and those accounts are not involved here, so I will exclude them from further discussion).

Our service reads messages from a producer, and is optimized to multithread the processing of these larger messages in increments of 100 accounts. So, any inbound message is "split" into blocks that each generate log messages containing three major pieces of data:

  • The source message "transaction_id" value (extracted via regex to a field called "transaction_id"; see the rex sketch after this list)
    • There will be at least one event per transaction_id, but there are often more (especially large messages can contain thousands of accounts)
  • The number of accounts represented by the event, expressed in the message body (again, extracted via regex to a field "message_accounts")
  • How long the block of accounts took to process (again, extracted via regex to a field "message_processing")
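
For context, the extractions look something like the following. This is a minimal sketch with a hypothetical raw log layout; the real field extractions depend on our actual log format, so treat these patterns as stand-ins:

| rex field=_raw "transaction_id=(?<transaction_id>[0-9a-f-]+)"
| rex field=_raw "accounts=(?<message_accounts>\d+)"
| rex field=_raw "processing_ms=(?<message_processing>\d+)"

The point is just the shape of the data: every chunk-level event carries all three fields.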

I can get this working, and it gives me a table like the following:

Side note - the important commands are:

| bin message_accounts span=30
| stats avg(message_processing) by message_accounts
message_accounts    avg(message_processing)
0-30                184
30-60               966
60-90               1610
90-120              2096

However, because we split any large message into chunks of at most 100 accounts, and nothing in the search currently aggregates these chunk-level stats by their shared "transaction_id" values, the chart only analyzes the individual chunks. Instead, I want to combine the stats, summing all "message_accounts" and "message_processing" values across events that share a common "transaction_id", to reconstruct the total accounts per original message.
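
A two-stage stats seems like the natural shape for this, something like the sketch below (untested; the aliases total_accounts and total_processing are just illustrative names):

| stats sum(message_accounts) as total_accounts sum(message_processing) as total_processing by transaction_id
| bin total_accounts span=30
| stats avg(total_processing) by total_accounts

The first stats rolls the chunk-level events up to one row per transaction_id; the second repeats the bin/avg analysis on those per-transaction totals. One open question is whether message_processing should be summed or averaged per transaction: summing gives total compute time across chunks, while max would better approximate wall-clock time if the chunks run in parallel.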
