The source system produces messages that contain a field "transaction_id" (a UUID), and each message contains data about some unknown number of accounts (the account data itself is not relevant here, so I will exclude any further discussion of it). Our service reads messages from a producer and is optimized to multithread the processing of these larger messages in increments of 100 accounts. So, any inbound message is "split" into blocks, and each block generates a log event containing three major pieces of data:

- The source message's "transaction_id" value (extracted to a field called "transaction_id" via regex). There will be at least one event per transaction_id, but there are often more (there can be thousands of accounts in especially large messages).
- The number of accounts represented by the event, stated in the message body (again, extracted to a field "message_accounts" via regex).
- How long that block of accounts took to process (again, extracted to a field "message_processing" via regex).

I can get this working, and it gives me a table like the one below. Side note - the important commands are:

| bin message_accounts span=30
| stats avg(message_processing) by message_accounts

message_accounts    avg(message_processing)
0-30                184
30-60               966
60-90               1610
90-120              2096

However, because we split any large message into blocks of at most 100 accounts, and nothing currently aggregates these block-level stats by their shared "transaction_id" values, the chart only analyzes the individual chunks. Instead, I want to sum the "message_accounts" and "message_processing" values across all events that share a common "transaction_id", reconstructing the total accounts (and total processing time) per transaction, and then run the stats on those totals.
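If I understand the goal correctly, one way to sketch this is a two-stage stats: first roll the chunk events up by transaction_id, then bin and average the per-transaction totals. (Field names "total_accounts" and "total_processing" are names I'm inventing for illustration, and span=30 is kept from the original search; a larger span may make more sense once the x-axis is total accounts per transaction.)

| stats sum(message_accounts) as total_accounts sum(message_processing) as total_processing by transaction_id
| bin total_accounts span=30
| stats avg(total_processing) by total_accounts

The first stats collapses all chunk events that share a transaction_id into one row of per-transaction totals; the second stats then behaves exactly like the original search, but over transactions instead of chunks.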