<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Output XML via a custom search command in Splunk Dev</title>
    <link>https://community.splunk.com/t5/Splunk-Dev/Output-XML-via-a-custom-search-command/m-p/78626#M1118</link>
    <description>&lt;P&gt;I'm working on a custom search command which will take the results of a search and create an XML output file.  As a very simplified example, the search might look like this:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;source=a OR source=b | fields host, source, some_field | outputxml
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Within my search command, I read the results and aggregate them into Python dicts (e.g. &lt;CODE&gt;source[type]['total'] += 1&lt;/CODE&gt;, &lt;CODE&gt;source[type][value] += 1&lt;/CODE&gt;, etc.), and then attempt to write the results to a randomly named output file, where the XML would look something like:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;&amp;lt;xml&amp;gt;
  &amp;lt;source type="syslog" total="2"&amp;gt;
    &amp;lt;some_field value="1" count="1"/&amp;gt;
    &amp;lt;some_field value="0" count="1"/&amp;gt;
  &amp;lt;/source&amp;gt;
  &amp;lt;source type="dhcp" total="1"&amp;gt;
    &amp;lt;some_field value="1" count="1"/&amp;gt;
  &amp;lt;/source&amp;gt;
&amp;lt;/xml&amp;gt;
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;However, multiple output files are created, with the results spread among them.  I suppose this is a function of map/reduce, which would make sense, and is actually rather cool to see in action.&lt;/P&gt;

&lt;P&gt;Is my analysis correct?  If so, what is the best practice for handling this merging of results into a single, highly structured output file where order matters?&lt;/P&gt;</description>
    <pubDate>Sun, 10 Apr 2011 23:25:57 GMT</pubDate>
    <dc:creator>mw</dc:creator>
    <dc:date>2011-04-10T23:25:57Z</dc:date>
    <item>
      <title>Output XML via a custom search command</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Output-XML-via-a-custom-search-command/m-p/78626#M1118</link>
      <description>&lt;P&gt;I'm working on a custom search command which will take the results of a search and create an XML output file.  As a very simplified example, the search might look like this:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;source=a OR source=b | fields host, source, some_field | outputxml
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Within my search command, I read the results and aggregate them into Python dicts (e.g. &lt;CODE&gt;source[type]['total'] += 1&lt;/CODE&gt;, &lt;CODE&gt;source[type][value] += 1&lt;/CODE&gt;, etc.), and then attempt to write the results to a randomly named output file, where the XML would look something like:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;&amp;lt;xml&amp;gt;
  &amp;lt;source type="syslog" total="2"&amp;gt;
    &amp;lt;some_field value="1" count="1"/&amp;gt;
    &amp;lt;some_field value="0" count="1"/&amp;gt;
  &amp;lt;/source&amp;gt;
  &amp;lt;source type="dhcp" total="1"&amp;gt;
    &amp;lt;some_field value="1" count="1"/&amp;gt;
  &amp;lt;/source&amp;gt;
&amp;lt;/xml&amp;gt;
&lt;/CODE&gt;&lt;/PRE&gt;
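
&lt;P&gt;The aggregation described above could be sketched roughly as follows. This is an illustrative reconstruction, not the actual command: the field names come from the example search, and the CSV input stands in for however the command actually receives its results.&lt;/P&gt;

```python
# Sketch of the per-source aggregation and XML output described above.
# Field names ("source", "some_field") are taken from the example search.
import csv
import io
import xml.etree.ElementTree as ET
from collections import defaultdict

def results_to_xml(csv_text):
    """Aggregate per-source counts from CSV-style results into an XML tree."""
    totals = defaultdict(int)                       # source type to total events
    values = defaultdict(lambda: defaultdict(int))  # source type to value counts
    for row in csv.DictReader(io.StringIO(csv_text)):
        src = row["source"]
        totals[src] += 1
        values[src][row["some_field"]] += 1

    root = ET.Element("xml")
    for src in sorted(totals, reverse=True):
        node = ET.SubElement(root, "source", type=src, total=str(totals[src]))
        for val in sorted(values[src], reverse=True):
            ET.SubElement(node, "some_field",
                          value=val, count=str(values[src][val]))
    return root

rows = "source,some_field\nsyslog,1\nsyslog,0\ndhcp,1\n"
print(ET.tostring(results_to_xml(rows)).decode())
```

&lt;P&gt;Writing the whole tree once at the end, from a single process, sidesteps any question of partial files; the merging question below is about what happens when this runs in more than one place.&lt;/P&gt;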

&lt;P&gt;However, multiple output files are created, with the results spread among them.  I suppose this is a function of map/reduce, which would make sense, and is actually rather cool to see in action.&lt;/P&gt;

&lt;P&gt;Is my analysis correct?  If so, what is the best practice for handling this merging of results into a single, highly structured output file where order matters?&lt;/P&gt;</description>
      <pubDate>Sun, 10 Apr 2011 23:25:57 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Output-XML-via-a-custom-search-command/m-p/78626#M1118</guid>
      <dc:creator>mw</dc:creator>
      <dc:date>2011-04-10T23:25:57Z</dc:date>
    </item>
    <item>
      <title>Re: Output XML via a custom search command</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/Output-XML-via-a-custom-search-command/m-p/78627#M1119</link>
      <description>&lt;P&gt;I would suggest that it might be easier to get what you want by calling the Splunk API:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;wget --no-check-certificate --user=admin --password=changeme -O - --post-data='search sourcetype%3Dmysourcetype | head 2&amp;amp;exec_mode=oneshot' &lt;A href="https://localhost:8089/services/search/jobs" target="test_blank"&gt;https://localhost:8089/services/search/jobs&lt;/A&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
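
&lt;P&gt;The same oneshot call can be made from Python. The sketch below only builds the request object rather than sending it, so it runs without a Splunk instance; the endpoint and &lt;CODE&gt;exec_mode=oneshot&lt;/CODE&gt; parameter are taken from the wget line above, and the host, port, and credentials are the same placeholders.&lt;/P&gt;

```python
# Build (but do not send) the oneshot search request shown above.
# Host, port, and credentials are placeholders from the wget example.
import base64
import urllib.parse
import urllib.request

def oneshot_request(search, user="admin", password="changeme",
                    base="https://localhost:8089"):
    """Return a POST request for the /services/search/jobs endpoint."""
    data = urllib.parse.urlencode({
        "search": search,        # the SPL string, including the leading "search"
        "exec_mode": "oneshot",  # run synchronously, return results directly
    }).encode()
    req = urllib.request.Request(base + "/services/search/jobs", data=data)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

req = oneshot_request("search sourcetype=mysourcetype | head 2")
print(req.full_url)
```

&lt;P&gt;Sending it with &lt;CODE&gt;urllib.request.urlopen(req)&lt;/CODE&gt; would return the search results in one response, which can then be transformed to XML in one place.&lt;/P&gt;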

&lt;P&gt;There is also generally no need for you to worry about map-reduce; Splunk will take care of that. (It's possible to write map-reduceable search commands if you specify them as &lt;CODE&gt;streaming&lt;/CODE&gt;, but converting CSV to XML and attempting to merge the pieces in the reduce step is not an operation that will gain from what Splunk already does with the results.) So you can just worry about converting the CSV input to XML on a single node.&lt;/P&gt;
      <pubDate>Mon, 11 Apr 2011 11:41:25 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/Output-XML-via-a-custom-search-command/m-p/78627#M1119</guid>
      <dc:creator>gkanapathy</dc:creator>
      <dc:date>2011-04-11T11:41:25Z</dc:date>
    </item>
  </channel>
</rss>

