<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How to monitor multiple unrelated directories in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/How-to-monitor-multiple-unrelated-directories/m-p/460150#M79444</link>
    <description>&lt;P&gt;Hi michaellightfoot,&lt;BR /&gt;
the best approach to monitoring many different folders is to plan the ingestion before you start; in other words, use an Excel file to define the monitoring perimeter: all the hosts to monitor and, for each server, the folders to ingest.&lt;BR /&gt;
That way you can create an inputs.conf that ingests all the logs you want.&lt;/P&gt;

&lt;P&gt;To debug logs that are not being ingested, map the monitoring perimeter against the logs you are actually receiving, so you can identify which folders are missing.&lt;BR /&gt;
Then check each of those folders one by one to see whether there are logs to ingest.&lt;BR /&gt;
This limits the number of folders you need to check.&lt;/P&gt;

&lt;P&gt;First, check file permissions: the user running the Splunk Universal Forwarder may not have read permission on those folders or files.&lt;BR /&gt;
Then check the time format of your logs: if the logs use the format dd/mm/yyyy while Splunk defaults to mm/dd/yyyy, the logs are ingested, but an event from the 3rd of February ends up timestamped as the 2nd of March.&lt;/P&gt;

&lt;P&gt;Let me know if this solves your problem.&lt;/P&gt;

&lt;P&gt;Ciao.&lt;BR /&gt;
Giuseppe&lt;/P&gt;</description>
    <pubDate>Mon, 03 Feb 2020 08:10:09 GMT</pubDate>
    <dc:creator>gcusello</dc:creator>
    <dc:date>2020-02-03T08:10:09Z</dc:date>
    <item>
      <title>How to monitor multiple unrelated directories</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-monitor-multiple-unrelated-directories/m-p/460149#M79443</link>
      <description>&lt;P&gt;Using the universal forwarder, I need to monitor multiple directories in separate parts of the filesystem.&lt;/P&gt;

&lt;P&gt;Specifically (obfuscated so as not to identify our customer):&lt;BR /&gt;
[monitor:///var/log]&lt;BR /&gt;
[monitor:///home//logs]&lt;/P&gt;

&lt;P&gt;It seems that multiple monitor stanzas are not working (at least, our customer reports that the second monitor stanza is not forwarding any files to their Splunk instance).&lt;/P&gt;

&lt;P&gt;Is there a workable solution?&lt;/P&gt;</description>
      <pubDate>Mon, 03 Feb 2020 04:49:02 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-monitor-multiple-unrelated-directories/m-p/460149#M79443</guid>
      <dc:creator>michaellightfoo</dc:creator>
      <dc:date>2020-02-03T04:49:02Z</dc:date>
    </item>
    <item>
      <title>Re: How to monitor multiple unrelated directories</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-monitor-multiple-unrelated-directories/m-p/460150#M79444</link>
      <description>&lt;P&gt;Hi michaellightfoot,&lt;BR /&gt;
the best approach to monitoring many different folders is to plan the ingestion before you start; in other words, use an Excel file to define the monitoring perimeter: all the hosts to monitor and, for each server, the folders to ingest.&lt;BR /&gt;
That way you can create an inputs.conf that ingests all the logs you want.&lt;/P&gt;

&lt;P&gt;To debug logs that are not being ingested, map the monitoring perimeter against the logs you are actually receiving, so you can identify which folders are missing.&lt;BR /&gt;
Then check each of those folders one by one to see whether there are logs to ingest.&lt;BR /&gt;
This limits the number of folders you need to check.&lt;/P&gt;

&lt;P&gt;First, check file permissions: the user running the Splunk Universal Forwarder may not have read permission on those folders or files.&lt;BR /&gt;
Then check the time format of your logs: if the logs use the format dd/mm/yyyy while Splunk defaults to mm/dd/yyyy, the logs are ingested, but an event from the 3rd of February ends up timestamped as the 2nd of March.&lt;/P&gt;

&lt;P&gt;Let me know if this solves your problem.&lt;/P&gt;

&lt;P&gt;Ciao.&lt;BR /&gt;
Giuseppe&lt;/P&gt;</description>
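
&lt;P&gt;For example, a minimal inputs.conf sketch with two independent monitor stanzas (the paths, index, and sourcetype values here are placeholders, not taken from the original post):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# Each [monitor://...] stanza is independent; the Universal Forwarder
# tails each path on its own, so any number of unrelated directories works.
[monitor:///var/log]
index = main
sourcetype = syslog
disabled = 0

[monitor:///opt/app/logs]
index = main
sourcetype = app_logs
disabled = 0&lt;/CODE&gt;&lt;/PRE&gt;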
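
&lt;P&gt;As a sketch, a props.conf entry that forces a day-first timestamp for a given sourcetype (the sourcetype name and exact strptime string are hypothetical and must match your logs):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# props.conf (applied at parsing time, i.e. on the indexer or heavy forwarder)
[app_logs]
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20&lt;/CODE&gt;&lt;/PRE&gt;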
      <pubDate>Mon, 03 Feb 2020 08:10:09 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-monitor-multiple-unrelated-directories/m-p/460150#M79444</guid>
      <dc:creator>gcusello</dc:creator>
      <dc:date>2020-02-03T08:10:09Z</dc:date>
    </item>
    <item>
      <title>Re: How to monitor multiple unrelated directories</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-monitor-multiple-unrelated-directories/m-p/460151#M79445</link>
      <description>&lt;P&gt;I suspect that the problem is at the customer's Splunk end, as I have run a tcpdump and can see the data from both monitors being sent to their instance. Unfortunately, I do not have access to that Splunk instance, so I cannot verify anything.&lt;/P&gt;

&lt;P&gt;I will mention to them that they might need to check that they are ingesting the timestamps correctly.&lt;/P&gt;</description>
      <pubDate>Wed, 05 Feb 2020 23:22:24 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-monitor-multiple-unrelated-directories/m-p/460151#M79445</guid>
      <dc:creator>michaellightfoo</dc:creator>
      <dc:date>2020-02-05T23:22:24Z</dc:date>
    </item>
  </channel>
</rss>

