<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Are there any issues with Splunk reading and indexing gzip files via a universal forwarder? in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Are-there-any-issues-with-Splunk-reading-and-indexing-gzip-files/m-p/189581#M37766</link>
    <description>&lt;P&gt;That all sounds reasonable, as long as it is reliable here. These are daily batch files, so a manageable delay isn't really a problem, and it's done overnight when things are relatively sleepy. Where would the files be decompressed to by default?&lt;/P&gt;

&lt;P&gt;Ultimately this is a temporary hack before we get a real-time stream of equivalent data, so this looks good all round to me. Thanks.&lt;/P&gt;</description>
    <pubDate>Thu, 19 Mar 2015 13:52:57 GMT</pubDate>
    <dc:creator>acidkewpie</dc:creator>
    <dc:date>2015-03-19T13:52:57Z</dc:date>
    <item>
      <title>Are there any issues with Splunk reading and indexing gzip files via a universal forwarder?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Are-there-any-issues-with-Splunk-reading-and-indexing-gzip-files/m-p/189578#M37763</link>
      <description>&lt;P&gt;Hi, &lt;/P&gt;

&lt;P&gt;I've heard comments against configuring Splunk to read gzipped files: horror stories of it not always noticing that the file was indeed a gz and indexing the compressed raw data instead. I'm looking to piggyback on an existing process that drops a pile of gzipped logs onto a server with a universal forwarder already installed, and I don't want to have to delve into custom scripts to first decompress the files to a temp location if there are no genuine, known concerns about Splunk's reliability when indexing gzipped files.&lt;/P&gt;</description>
      <pubDate>Thu, 19 Mar 2015 12:13:58 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Are-there-any-issues-with-Splunk-reading-and-indexing-gzip-files/m-p/189578#M37763</guid>
      <dc:creator>acidkewpie</dc:creator>
      <dc:date>2015-03-19T12:13:58Z</dc:date>
    </item>
    <item>
      <title>Re: Are there any issues with Splunk reading and indexing gzip files via a universal forwarder?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Are-there-any-issues-with-Splunk-reading-and-indexing-gzip-files/m-p/189579#M37764</link>
      <description>&lt;P&gt;As I understand it, there is no need to decompress gzip files before indexing them.&lt;/P&gt;</description>
      <pubDate>Thu, 19 Mar 2015 13:19:42 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Are-there-any-issues-with-Splunk-reading-and-indexing-gzip-files/m-p/189579#M37764</guid>
      <dc:creator>btt</dc:creator>
      <dc:date>2015-03-19T13:19:42Z</dc:date>
    </item>
    <item>
      <title>Re: Are there any issues with Splunk reading and indexing gzip files via a universal forwarder?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Are-there-any-issues-with-Splunk-reading-and-indexing-gzip-files/m-p/189580#M37765</link>
      <description>&lt;P&gt;Splunk can read zip/gzip files. Do understand that what Splunk does on the back end is:&lt;/P&gt;

&lt;P&gt;1) Unarchives &lt;BR /&gt;
2) Reads the Files&lt;BR /&gt;
3) Indexes&lt;BR /&gt;
4) Deletes the unarchived pieces&lt;/P&gt;
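&lt;P&gt;As a rough sketch, pointing the forwarder at the gzipped files takes only an ordinary monitor stanza; the path, index, and sourcetype below are placeholders, not values from this thread:&lt;/P&gt;

&lt;PRE&gt;# inputs.conf on the universal forwarder (hypothetical path and names)
[monitor:///var/log/batch/*.gz]
index = main
sourcetype = batch_logs
disabled = false&lt;/PRE&gt;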

&lt;P&gt;Additionally, the unzip process is not multithreaded, so you can see a fair amount of latency and CPU time used when this is done, especially if you are trying to monitor a large number of zipped files. You also have to be careful about free disk space.&lt;/P&gt;</description>
      <pubDate>Thu, 19 Mar 2015 13:33:40 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Are-there-any-issues-with-Splunk-reading-and-indexing-gzip-files/m-p/189580#M37765</guid>
      <dc:creator>esix_splunk</dc:creator>
      <dc:date>2015-03-19T13:33:40Z</dc:date>
    </item>
    <item>
      <title>Re: Are there any issues with Splunk reading and indexing gzip files via a universal forwarder?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Are-there-any-issues-with-Splunk-reading-and-indexing-gzip-files/m-p/189581#M37766</link>
      <description>&lt;P&gt;That all sounds reasonable, as long as it is reliable here. These are daily batch files, so a manageable delay isn't really a problem, and it's done overnight when things are relatively sleepy. Where would the files be decompressed to by default?&lt;/P&gt;

&lt;P&gt;Ultimately this is a temporary hack before we get a real-time stream of equivalent data, so this looks good all round to me. Thanks.&lt;/P&gt;</description>
      <pubDate>Thu, 19 Mar 2015 13:52:57 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Are-there-any-issues-with-Splunk-reading-and-indexing-gzip-files/m-p/189581#M37766</guid>
      <dc:creator>acidkewpie</dc:creator>
      <dc:date>2015-03-19T13:52:57Z</dc:date>
    </item>
  </channel>
</rss>

