In theory you could paint a little index number onto every transaction row to mark it, split the whole beast apart into individual events again, filter out the rows you don't want, and then use stats to piece the survivors back together by that index number.
Like so — you would tack something like this onto the end:
... | streamstats count as transactionRowIndex | eval _raw=split(_raw,"\n") | mvexpand _raw | search foo!="BAR" | stats list(_raw) as _raw values(*) as * by transactionRowIndex | eval _raw=mvjoin(_raw,"\n") | sort transactionRowIndex
BONUS:
If you have any huge transactions with tons of rows, they might get truncated as they pass through this pipeline as multivalue fields (stats list(), for instance, keeps at most 100 values by default). You can check for that by capturing | eval old_eventcount=eventcount right after the transaction but before the manipulation, and then comparing it later against the mvcount of _raw just before it's re-joined back into a giant string...
i.e., this search should return zero results; any rows it does return are the transactions whose text and fields are being truncated by the search.
... | streamstats count as transactionRowIndex | eval old_eventcount=eventcount | eval _raw=split(_raw,"\n") | mvexpand _raw | search foo!="BAR" | stats list(_raw) as _raw values(*) as * by transactionRowIndex | eval new_eventcount=mvcount(_raw) | eval _raw=mvjoin(_raw,"\n") | sort transactionRowIndex | where new_eventcount!=old_eventcount
UPDATE: Of course, another option is to use the rex command in sed mode to just strip out parts of the transaction without blowing it apart...
See "mode=sed" in this page: http://docs.splunk.com/Documentation/Splunk/5.0.3/SearchReference/Rex
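For example, something like the following would delete whole matching lines from _raw in place, without the split/mvexpand/stats round trip. (The pattern here is just an illustration for the foo=BAR rows used above; adjust the regex to whatever your unwanted rows actually look like.)

... | rex mode=sed field=_raw "s/(?m)^.*foo=BAR.*\n?//g"

This keeps the transaction intact as a single event, so none of the truncation checks above are needed.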