I have Splunk poll a database and pipe the results into a transaction command. The transaction command groups the rows by key (which is whited-out below) with maxevents=2. However, the results of that transaction appear disjointed. Below, the column containing the values 0.886, 0.88695, etc. is a historic column showing the previous values of the field. So at one point field 'rate' had the value '0.92695', but it was deleted and the value '0.9314' was inserted instead.
As you can see, the values don't appear in a logical order - the 3rd row from the bottom breaks the 'previous value on top, new value on the bottom' pattern. I want this ordering to be consistent across the whole column, so the user knows that the previous value will always be on top. How do I do this?
I think what's going on is that Splunk takes the values of the transaction, combines them together, and then sorts each column in ascending order individually, breaking the row mappings. I want the rows to keep their mappings.
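For reference, the search is shaped roughly like the following (the connection name, table, and field names here are placeholders standing in for the whited-out ones, not the actual search):

```
| dbxquery connection="my_connection" query="SELECT KEY, VERSION, rate FROM rates"
| transaction KEY maxevents=2
| table KEY, VERSION, rate
```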
Hi, the solution works fine, but is it possible for the 'VERSION' column to be sorted in descending order (i.e. the bigger value always on top) and the rows reordered to match? Currently the row mappings are correct, but the ordering is still quite random.
I'm sorry, I don't think this is possible. Once the events are bundled into a transaction, your ability to order them as individual rows effectively disappears.
So your solution might be instead to order your events by
VERSION first, and then apply the transaction command to them.
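A minimal sketch of that approach, assuming KEY stands in for the whited-out grouping field (`sort 0` removes the default result limit; the `-` makes the sort descending, so the newer VERSION arrives first):

```
... | sort 0 - VERSION
| transaction KEY maxevents=2
```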
Again, according to the docs, you can pass a list of fields to the mvlist parameter instead of a t/f value. So in your case, maybe something like:
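Something along these lines, with KEY again standing in for the whited-out grouping field. mvlist=VERSION tells transaction to keep that field's values as an ordered multivalue list instead of the default deduplicated, sorted set, which is what was breaking the row mappings:

```
... | sort 0 - VERSION
| transaction KEY maxevents=2 mvlist=VERSION
```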
Ah, I saw (bool) next to the mvlist parameter in the list and didn't bother reading the rest of the line... whoops!
In the meantime I created a macro which combines the rows when they share the same key. Interestingly, manually combining each row makes the search 0.8 seconds quicker (5.9 s -> 5.1 s)!
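The macro itself isn't shown, but one way to do that manual combining - purely a sketch with the same placeholder field names as above - is to skip transaction entirely and use stats list(), which preserves the incoming event order per group:

```
... | sort 0 - VERSION
| stats list(VERSION) as VERSION, list(rate) as rate by KEY
```

Because stats is a streaming-friendly reporting command while transaction has to buffer and bundle events, a stats-based rewrite often being faster is consistent with the timing difference observed here.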