<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Batching gzipped files residing in 4 directories into Splunk, is there a way to run parallel batches on a Splunk 6.2.6 Linux universal forwarder? in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Batching-gzipped-files-residing-in-4-directories-into-Splunk-is/m-p/202000#M39980</link>
    <description>&lt;P&gt;I am batching gzipped files into Splunk. The files reside in 4 directories. Splunk, per splunkd.log, appears to be reading only the files in the first batch statement. Is there a way to run parallel batches? &lt;/P&gt;

&lt;P&gt;I have a Linux 64-bit universal forwarder with Splunk 6.2.6. I have set maxKBps to 0 in limits.conf, and I have reniced the Splunk UF processes to a priority of -20 on the Linux VM. &lt;/P&gt;
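&lt;P&gt;For reference, the throughput cap lives in the [thruput] stanza of limits.conf on the forwarder, and 0 removes the limit:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[thruput]
maxKBps = 0
&lt;/CODE&gt;&lt;/PRE&gt;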

&lt;P&gt;I have batch statements listed as follows:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[batch:///leroylogs2/multicast/archive/data11/*PROD*.gz]
[batch:///leroylogs2/multicast/archive/data12/*PROD*.gz]
[batch:///leroylogs2/multicast/archive/data21/*PROD*.gz]
[batch:///leroylogs2/multicast/archive/data22/*PROD*.gz]
[batch:///leroylogs2/multicast/archive/data11/*CAS*.gz]
[batch:///leroylogs2/multicast/archive/data12/*CAS*.gz]
[batch:///leroylogs2/multicast/archive/data21/*CAS*.gz]
[batch:///leroylogs2/multicast/archive/data22/*CAS*.gz]
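# Note: each [batch] stanza also needs move_policy = sinkhole (not shown
# here); with sinkhole, Splunk deletes each file after indexing it.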
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;splunkd.log shows:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;
10-28-2015 09:37:37.372 -0700 INFO  ArchiveProcessor - Finished processing file '/leroylogs2/multicast/archive/data11/2015-08-29-08_30-PRODtrans_svs.log.gz', removing from stats
10-28-2015 09:37:37.433 -0700 INFO  ArchiveProcessor - handling file=/leroylogs2/multicast/archive/data11/2015-08-29-07_10-CAStrans_svs.log.gz
10-28-2015 09:37:37.434 -0700 INFO  ArchiveProcessor - reading path=/leroylogs2/multicast/archive/data11/2015-08-29-07_10-CAStrans_svs.log.gz (seek=0 len=32496625)
10-28-2015 09:37:37.551 -0700 WARN  TcpOutputProc - The event is missing source information. Event :
10-28-2015 09:37:38.655 -0700 ERROR ArchiveContext - From archive='/leroylogs2/multicast/archive/data11/2015-08-29-07_10-CAStrans_svs.log.gz':  gzip: stdout: Broken pipe
&lt;/CODE&gt;&lt;/PRE&gt;</description>
    <pubDate>Wed, 28 Oct 2015 16:39:42 GMT</pubDate>
    <dc:creator>lisaac</dc:creator>
    <dc:date>2015-10-28T16:39:42Z</dc:date>
    <item>
      <title>Batching gzipped files residing in 4 directories into Splunk, is there a way to run parallel batches on a Splunk 6.2.6 Linux universal forwarder?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Batching-gzipped-files-residing-in-4-directories-into-Splunk-is/m-p/202000#M39980</link>
      <description>&lt;P&gt;I am batching gzipped files into Splunk. The files reside in 4 directories. Splunk, per splunkd.log, appears to be reading only the files in the first batch statement. Is there a way to run parallel batches? &lt;/P&gt;

&lt;P&gt;I have a Linux 64-bit universal forwarder with Splunk 6.2.6. I have set maxKBps to 0 in limits.conf, and I have reniced the Splunk UF processes to a priority of -20 on the Linux VM. &lt;/P&gt;
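&lt;P&gt;For reference, the throughput cap lives in the [thruput] stanza of limits.conf on the forwarder, and 0 removes the limit:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[thruput]
maxKBps = 0
&lt;/CODE&gt;&lt;/PRE&gt;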

&lt;P&gt;I have batch statements listed as follows:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[batch:///leroylogs2/multicast/archive/data11/*PROD*.gz]
[batch:///leroylogs2/multicast/archive/data12/*PROD*.gz]
[batch:///leroylogs2/multicast/archive/data21/*PROD*.gz]
[batch:///leroylogs2/multicast/archive/data22/*PROD*.gz]
[batch:///leroylogs2/multicast/archive/data11/*CAS*.gz]
[batch:///leroylogs2/multicast/archive/data12/*CAS*.gz]
[batch:///leroylogs2/multicast/archive/data21/*CAS*.gz]
[batch:///leroylogs2/multicast/archive/data22/*CAS*.gz]
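# Note: each [batch] stanza also needs move_policy = sinkhole (not shown
# here); with sinkhole, Splunk deletes each file after indexing it.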
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;splunkd.log shows:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;
10-28-2015 09:37:37.372 -0700 INFO  ArchiveProcessor - Finished processing file '/leroylogs2/multicast/archive/data11/2015-08-29-08_30-PRODtrans_svs.log.gz', removing from stats
10-28-2015 09:37:37.433 -0700 INFO  ArchiveProcessor - handling file=/leroylogs2/multicast/archive/data11/2015-08-29-07_10-CAStrans_svs.log.gz
10-28-2015 09:37:37.434 -0700 INFO  ArchiveProcessor - reading path=/leroylogs2/multicast/archive/data11/2015-08-29-07_10-CAStrans_svs.log.gz (seek=0 len=32496625)
10-28-2015 09:37:37.551 -0700 WARN  TcpOutputProc - The event is missing source information. Event :
10-28-2015 09:37:38.655 -0700 ERROR ArchiveContext - From archive='/leroylogs2/multicast/archive/data11/2015-08-29-07_10-CAStrans_svs.log.gz':  gzip: stdout: Broken pipe
&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Wed, 28 Oct 2015 16:39:42 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Batching-gzipped-files-residing-in-4-directories-into-Splunk-is/m-p/202000#M39980</guid>
      <dc:creator>lisaac</dc:creator>
      <dc:date>2015-10-28T16:39:42Z</dc:date>
    </item>
    <item>
      <title>Re: Batching gzipped files residing in 4 directories into Splunk, is there a way to run parallel batches on a Splunk 6.2.6 Linux universal forwarder?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Batching-gzipped-files-residing-in-4-directories-into-Splunk-is/m-p/202001#M39981</link>
      <description>&lt;P&gt;I suppose that I could run 2 UFs on the same host, but I would prefer to skip this approach. &lt;/P&gt;</description>
      <pubDate>Wed, 28 Oct 2015 17:30:49 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Batching-gzipped-files-residing-in-4-directories-into-Splunk-is/m-p/202001#M39981</guid>
      <dc:creator>lisaac</dc:creator>
      <dc:date>2015-10-28T17:30:49Z</dc:date>
    </item>
    <item>
      <title>Re: Batching gzipped files residing in 4 directories into Splunk, is there a way to run parallel batches on a Splunk 6.2.6 Linux universal forwarder?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Batching-gzipped-files-residing-in-4-directories-into-Splunk-is/m-p/202002#M39982</link>
      <description>&lt;P&gt;In Pre 6.3: The only way to read archives/files in parallel is by spawning multiple instances of splunk on the forwarder. &lt;BR /&gt;
Splunk 6.3 release has a new feature where you can spawn multiple ingestion pipelines, where each pipeline can read one archive/file independently. So essentially with multiple ingestion pipelines, splunk will read multiple archives/files in parallel.&lt;BR /&gt;
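As a sketch (assuming a 6.3 or later forwarder), pipeline sets are enabled in server.conf; the useful maximum is bounded by available CPU cores:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[general]
parallelIngestionPipelines = 2
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;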
Documentation: &lt;A href="http://docs.splunk.com/Documentation/Splunk/6.3.0/Indexer/Pipelinesets"&gt;http://docs.splunk.com/Documentation/Splunk/6.3.0/Indexer/Pipelinesets&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 30 Oct 2015 18:47:43 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Batching-gzipped-files-residing-in-4-directories-into-Splunk-is/m-p/202002#M39982</guid>
      <dc:creator>anekkanti_splun</dc:creator>
      <dc:date>2015-10-30T18:47:43Z</dc:date>
    </item>
  </channel>
</rss>