I'm working on a custom search command which will take the results of a search and create an XML output file. As a very simplified example, the search might look like this:
source=a OR source=b | fields host, source, some_field | outputxml
Within my search command, I read the results and aggregate them into Python dicts (e.g. source[type]['total'] += 1, source[type][value] += 1, etc.), and then attempt to write the aggregates to a randomly named output file, where the XML would look something like:
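To make the aggregate-then-write flow concrete, here is a minimal sketch of that step outside of Splunk. The field names mirror the example search above, but the CSV sample and the XML element names are hypothetical; the actual XML sample is omitted here.

```python
import csv
import io
import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical CSV results, shaped like what Splunk pipes to the command
# after "fields host, source, some_field".
RESULTS_CSV = """host,source,some_field
web01,a,ok
web02,a,fail
web01,b,ok
"""

def aggregate(rows):
    """Tally per-source totals and per-value counts, as in the
    source[type]['total'] / source[type][value] dicts described above."""
    counts = defaultdict(lambda: defaultdict(int))
    for row in rows:
        src = row["source"]
        counts[src]["total"] += 1
        counts[src][row["some_field"]] += 1
    return counts

def to_xml(counts):
    """Serialize the aggregates; element and attribute names are
    illustrative only, not the poster's actual schema."""
    root = ET.Element("results")
    for src in sorted(counts):
        node = ET.SubElement(root, "source", name=src)
        for key in sorted(counts[src]):
            ET.SubElement(node, "count", key=key).text = str(counts[src][key])
    return ET.tostring(root, encoding="unicode")

rows = csv.DictReader(io.StringIO(RESULTS_CSV))
xml_out = to_xml(aggregate(rows))
print(xml_out)
```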
However, multiple output files are created, with the results spread among them. I suppose this is a function of map/reduce, since the command is presumably being run on more than one node, and it is actually rather cool to see in action.
Is my analysis correct? If so, what is the best practice for handling this merging of results into a single, highly structured output file where order matters?
There is also generally no need for you to worry about map/reduce; Splunk will take care of that. (It's possible to write map-reduceable search commands by declaring them streaming, but converting CSV to XML and then attempting to merge the pieces in the reduce step gains nothing from what Splunk already does with the results.) So you can just worry about converting the CSV input to XML on a single node.
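If you take the single-node route, the relevant switch is the streaming attribute in commands.conf: a non-streaming command receives the full result set once instead of being run per chunk on the indexers. A minimal stanza might look like the following, where the command and script names are taken from the example search and any other attributes depend on your command:

```ini
[outputxml]
filename = outputxml.py
streaming = false
```

With streaming = false, Splunk collects the results centrally before invoking the script, so a single invocation sees everything and can write one ordered XML file.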