I'm guessing now that the key piece I was missing was the transaction. I'm trying it now. Unfortunately, it seems to really slow down the searches: after configuring transactiontypes.conf, I'm still waiting for a search to come back, and the "scanned events" count is climbing much more slowly with the transaction as part of the search. Without knowing much about the underlying index structure, or how a search operates mechanically in terms of I/O, I can still sense that this collation is more expensive.
I'd restart the search with a narrower time window, except I seem to be in some kind of license Apollo 13 right now: the search is running, but I can't start a new one due to a license violation... I need to re-up my trial, I guess, or maybe go Free. In any case, I'll check back in the morning and let you know how it goes.
I guess my next puzzle to solve, once I get this rig working, is how to distribute this kind of work.
Something that has been helpful to me when dealing with DS logs in Splunk is defining transactions, so I can search on something like a search filter, error code, or etime and get back all the matching operation + result pairings grouped together, or all the operations for a given connection.
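For reference, here's a minimal sketch of what that setup might look like. The stanza names, sourcetype, and span limits are assumptions; the conn/op fields are based on the usual DS access-log format (each operation line and its RESULT line share conn and op numbers), so adjust to whatever your field extractions actually produce:

```
# transactiontypes.conf (sketch -- stanza names and spans are made up)

# Pair each operation line with its RESULT line: conn + op together
# uniquely identify one operation on one connection.
[ds_operation]
fields = conn, op
maxspan = 5m

# Group everything that happened on a single connection.
[ds_connection]
fields = conn
maxspan = 30m
maxpause = 5m
```

A search can then reference the type by name and filter on fields that only exist on one side of the pairing, e.g. pulling err=32 (no such object) results together with the operations that produced them:

```
sourcetype=ds_access | transaction name=ds_operation | search err=32
```

Note the err filter goes after the transaction command; filtering before it would discard the operation lines the RESULT needs to be paired with.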