<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Splunk forwarder scalability in Knowledge Management</title>
    <link>https://community.splunk.com/t5/Knowledge-Management/Spunk-forwarder-scalability/m-p/202746#M6908</link>
    <description>&lt;P&gt;I'm considering using the Splunk forwarder to integrate a system that generates many small files containing log messages, at times more than a thousand per second. Once the files reach Splunk, they can be deleted.&lt;/P&gt;

&lt;P&gt;I wonder how the forwarder will handle this situation. I've read that it can comfortably monitor only about 100 files. Should I implement separate jobs to move the processed files, and how would I know when a file has been processed?&lt;/P&gt;

&lt;P&gt;The other approach I could take is to change the system to log to a rotating file. Which one do you think is better?&lt;/P&gt;</description>
    <pubDate>Wed, 21 Sep 2016 07:33:45 GMT</pubDate>
    <dc:creator>dimitarvalov</dc:creator>
    <dc:date>2016-09-21T07:33:45Z</dc:date>
    <item>
      <title>Splunk forwarder scalability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Spunk-forwarder-scalability/m-p/202746#M6908</link>
      <description>&lt;P&gt;I'm considering using the Splunk forwarder to integrate a system that generates many small files containing log messages, at times more than a thousand per second. Once the files reach Splunk, they can be deleted.&lt;/P&gt;

&lt;P&gt;I wonder how the forwarder will handle this situation. I've read that it can comfortably monitor only about 100 files. Should I implement separate jobs to move the processed files, and how would I know when a file has been processed?&lt;/P&gt;
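
&lt;P&gt;For reference, I understand Splunk's batch input reads each file once and then deletes it (the sinkhole move policy), which would remove the need for a separate cleanup job. A sketch, assuming the files land in a single directory (the path below is a placeholder):&lt;/P&gt;

&lt;PRE&gt;# inputs.conf on the forwarder (sketch; /var/log/myapp is a placeholder path)
[batch:///var/log/myapp]
move_policy = sinkhole
disabled = false&lt;/PRE&gt;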

&lt;P&gt;The other approach I could take is to change the system to log to a rotating file. Which one do you think is better?&lt;/P&gt;</description>
      <pubDate>Wed, 21 Sep 2016 07:33:45 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Spunk-forwarder-scalability/m-p/202746#M6908</guid>
      <dc:creator>dimitarvalov</dc:creator>
      <dc:date>2016-09-21T07:33:45Z</dc:date>
    </item>
    <item>
      <title>Re: Splunk forwarder scalability</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Spunk-forwarder-scalability/m-p/202747#M6909</link>
      <description>&lt;P&gt;I would recommend logging to a rotating file. I have experience Splunking an application that produced several thousand files an hour, and it was not pretty. Monitoring that many files causes a performance hit disproportionate to the amount of data being ingested.&lt;/P&gt;
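
&lt;P&gt;A minimal sketch of that setup, pointing the forwarder at one rotating log instead of thousands of small files (path and sourcetype are placeholders):&lt;/P&gt;

&lt;PRE&gt;# inputs.conf (sketch): monitor a single rotating log file
[monitor:///var/log/myapp/app.log]
sourcetype = myapp
disabled = false&lt;/PRE&gt;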

&lt;P&gt;In short: prefer fewer log files.&lt;/P&gt;</description>
      <pubDate>Wed, 21 Sep 2016 13:37:45 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Spunk-forwarder-scalability/m-p/202747#M6909</guid>
      <dc:creator>echalex</dc:creator>
      <dc:date>2016-09-21T13:37:45Z</dc:date>
    </item>
  </channel>
</rss>

