<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Issue Forwarding Cloud Custodian Logs Using a Splunk Universal Forwarder in Deployment Architecture</title>
    <link>https://community.splunk.com/t5/Deployment-Architecture/Issue-Forwarding-Cloud-Custodian-Logs-Using-a-Splunk-Universal/m-p/295383#M19517</link>
    <description>&lt;P&gt;I tried what has been suggested but I still have two issues. The first is that I am getting each line of a JSON file as an individual log event. The second is that I am not getting most of the files. I suspect the UF thinks it has indexed the files already. That is why I had been trying things like "crcSalt = " and initCrcLength = 1048576. Is there anything else you would suggest I try?&lt;/P&gt;</description>
    <pubDate>Tue, 28 Mar 2017 11:25:42 GMT</pubDate>
    <dc:creator>gerrykahn</dc:creator>
    <dc:date>2017-03-28T11:25:42Z</dc:date>
    <item>
      <title>Issue Forwarding Cloud Custodian Logs Using a Splunk Universal Forwarder</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Issue-Forwarding-Cloud-Custodian-Logs-Using-a-Splunk-Universal/m-p/295380#M19514</link>
      <description>&lt;P&gt;I have a developer running Cloud Custodian scans in AWS and dropping the JSON results on a Linux box running a Splunk Universal Forwarder. The results go into a file hierarchy: Out/BU_name/ORG_name/TypeOfScan_name/Results &lt;/P&gt;

&lt;P&gt;I installed a Splunk UF on the box and set it up to monitor the Out directory and all of its sub-directories.&lt;/P&gt;

&lt;P&gt;The problem is that there are many BUs, each with several ORGs, all running 5 different types of scans, so I end up with several hundred files with exactly the same name in hundreds of sub-directories. And to make matters worse, the scan reruns every 10 minutes and the output file goes in the same location with the same name; only the time stamp is updated.&lt;BR /&gt;
I have tried many configurations and none have worked. &lt;/P&gt;

&lt;P&gt;My latest attempted inputs.conf:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[monitor:///home/cloud-user/out/]
disabled = false
index = aws_scan
sourcetype = cloudcustodian
recursive = true
crcSalt = 
initCrcLength = 1048576
&lt;/CODE&gt;&lt;/PRE&gt;

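&lt;P&gt;One way the "hundreds of identically named files" part is commonly addressed (a sketch based on the inputs.conf documentation, not a tested configuration): crcSalt's documented special value &amp;lt;SOURCE&amp;gt; mixes the full path into the initial CRC, so same-named files in different directories are tracked as distinct sources:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[monitor:///home/cloud-user/out/]
disabled = false
index = aws_scan
sourcetype = cloudcustodian
crcSalt = &amp;lt;SOURCE&amp;gt;
&lt;/CODE&gt;&lt;/PRE&gt;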
&lt;P&gt;Has anyone faced a similar issue and found a solution?&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 13:24:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Issue-Forwarding-Cloud-Custodian-Logs-Using-a-Splunk-Universal/m-p/295380#M19514</guid>
      <dc:creator>gerrykahn</dc:creator>
      <dc:date>2020-09-29T13:24:33Z</dc:date>
    </item>
    <item>
      <title>Re: Issue Forwarding Cloud Custodian Logs Using a Splunk Universal Forwarder</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Issue-Forwarding-Cloud-Custodian-Logs-Using-a-Splunk-Universal/m-p/295381#M19515</link>
      <description>&lt;P&gt;You can have Splunk recurse through directories by using "..." in the stanza, e.g.:&lt;/P&gt;</description>

&lt;PRE&gt;&lt;CODE&gt;[monitor:///home/cloud-user/out/.../nameoflogfile.log]
disabled = false
index = aws_scan
sourcetype = cloudcustodian
recursive = true
&lt;/CODE&gt;&lt;/PRE&gt;

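&lt;P&gt;A variation on the same idea, if the literal file name varies: whitelist is a regex matched against the full path, so a stanza like this (an untested sketch; the pattern is illustrative) would pick up only JSON files under the tree:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[monitor:///home/cloud-user/out/]
disabled = false
index = aws_scan
sourcetype = cloudcustodian
whitelist = \.json$
&lt;/CODE&gt;&lt;/PRE&gt;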
&lt;P&gt;Make sure that whatever user splunkd is running as has permission to read those files. You might also want to look at how many file descriptors are in use and ensure that enough are configured to monitor all of those files.&lt;/P&gt;</description>
      <pubDate>Tue, 28 Mar 2017 00:01:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Issue-Forwarding-Cloud-Custodian-Logs-Using-a-Splunk-Universal/m-p/295381#M19515</guid>
      <dc:creator>masonmorales</dc:creator>
      <dc:date>2017-03-28T00:01:47Z</dc:date>
    </item>
    <item>
      <title>Re: Issue Forwarding Cloud Custodian Logs Using a Splunk Universal Forwarder</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Issue-Forwarding-Cloud-Custodian-Logs-Using-a-Splunk-Universal/m-p/295382#M19516</link>
      <description>&lt;P&gt;BTW, why do you have initCrcLength set? Do the files have very long headers?&lt;/P&gt;</description>
      <pubDate>Tue, 28 Mar 2017 00:03:03 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Issue-Forwarding-Cloud-Custodian-Logs-Using-a-Splunk-Universal/m-p/295382#M19516</guid>
      <dc:creator>masonmorales</dc:creator>
      <dc:date>2017-03-28T00:03:03Z</dc:date>
    </item>
    <item>
      <title>Re: Issue Forwarding Cloud Custodian Logs Using a Splunk Universal Forwarder</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Issue-Forwarding-Cloud-Custodian-Logs-Using-a-Splunk-Universal/m-p/295383#M19517</link>
      <description>&lt;P&gt;I tried what has been suggested but I still have two issues. The first is that I am getting each line of a JSON file as an individual log event. The second is that I am not getting most of the files. I suspect the UF thinks it has indexed the files already. That is why I had been trying things like "crcSalt = " and initCrcLength = 1048576. Is there anything else you would suggest I try?&lt;/P&gt;</description>
      <pubDate>Tue, 28 Mar 2017 11:25:42 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Issue-Forwarding-Cloud-Custodian-Logs-Using-a-Splunk-Universal/m-p/295383#M19517</guid>
      <dc:creator>gerrykahn</dc:creator>
      <dc:date>2017-03-28T11:25:42Z</dc:date>
    </item>
    <item>
      <title>Re: Issue Forwarding Cloud Custodian Logs Using a Splunk Universal Forwarder</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Issue-Forwarding-Cloud-Custodian-Logs-Using-a-Splunk-Universal/m-p/295384#M19518</link>
      <description>&lt;P&gt;Some reference material:&lt;/P&gt;

&lt;P&gt;&lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf"&gt;https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf&lt;/A&gt;&lt;BR /&gt;
&lt;A href="http://docs.splunk.com/Documentation/Splunk/latest/Data/Monitorfilesanddirectorieswithinputs.conf"&gt;http://docs.splunk.com/Documentation/Splunk/latest/Data/Monitorfilesanddirectorieswithinputs.conf&lt;/A&gt;&lt;BR /&gt;
&lt;A href="http://docs.splunk.com/Documentation/Splunk/latest/Data/Specifyinputpathswithwildcards"&gt;http://docs.splunk.com/Documentation/Splunk/latest/Data/Specifyinputpathswithwildcards&lt;/A&gt;&lt;/P&gt;

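&lt;P&gt;As a quick sketch of what those pages cover (paths and file name here are illustrative): "..." matches any number of directory levels, while "*" matches within a single path segment. So&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[monitor:///home/cloud-user/out/.../results.json]
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;matches results.json at any depth under out/, whereas&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[monitor:///home/cloud-user/out/*/results.json]
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;matches it only one directory level down.&lt;/P&gt;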
&lt;P&gt;The last link explores how wildcards are used for recursion.&lt;/P&gt;</description>
      <pubDate>Tue, 28 Mar 2017 12:57:18 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Issue-Forwarding-Cloud-Custodian-Logs-Using-a-Splunk-Universal/m-p/295384#M19518</guid>
      <dc:creator>sloshburch</dc:creator>
      <dc:date>2017-03-28T12:57:18Z</dc:date>
    </item>
    <item>
      <title>Re: Issue Forwarding Cloud Custodian Logs Using a Splunk Universal Forwarder</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Issue-Forwarding-Cloud-Custodian-Logs-Using-a-Splunk-Universal/m-p/295385#M19519</link>
      <description>&lt;P&gt;Would you elaborate on this:&lt;/P&gt;

&lt;BLOCKQUOTE&gt;
&lt;P&gt;I end up with several hundred files with exactly the same name in hundreds of sub-directories. And to make matters worse, the scan reruns every 10 minutes and the output file goes in the same location with the same name; only the time stamp is updated.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;

&lt;P&gt;It could mean a few different things. Do the files show up in Splunk with the same value for source, or is just the filename part of the source the same? What specifically does "exactly the same name" mean?&lt;BR /&gt;
Is the scan that runs every 10 minutes the process that produces the outputs in these locations? What happens to the files that were already there after the scan runs? Does the scan append to, replace, or roll the existing logs? It sounds like it replaces the file, in which case you have a log file whose read cursor points at an offset that no longer exists, because Splunk didn't realize the file is actually new (it assumes files are appended to).&lt;/P&gt;

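&lt;P&gt;Separately, on the "each JSON line becomes its own event" symptom: one option worth trying (an untested sketch, reusing the sourcetype name from the inputs.conf above) is to declare the sourcetype as structured JSON in props.conf:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[cloudcustodian]
INDEXED_EXTRACTIONS = json
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Note that INDEXED_EXTRACTIONS parsing happens on the universal forwarder itself, so this props.conf change has to live on the forwarder rather than on the indexers.&lt;/P&gt;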
&lt;P&gt;Clarify those and we'll see where to go next.&lt;/P&gt;</description>
      <pubDate>Tue, 28 Mar 2017 13:00:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Issue-Forwarding-Cloud-Custodian-Logs-Using-a-Splunk-Universal/m-p/295385#M19519</guid>
      <dc:creator>sloshburch</dc:creator>
      <dc:date>2017-03-28T13:00:33Z</dc:date>
    </item>
  </channel>
</rss>

