<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events? in All Apps and Add-ons</title>
    <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338115#M40677</link>
    <description>&lt;P&gt;Thanks for the suggestion; this also doesn't work (events still break and end randomly). &lt;/P&gt;

&lt;P&gt;I'm thinking this is a bug in the monitoring feature on the heavy forwarder, but I'm open to more suggestions. &lt;/P&gt;

&lt;P&gt;As a side note, Elasticsearch and Filebeat sending the events from the same box don't have this problem at all, which seems indicative of a Splunk bug. Either way, once I find something that works through support, the community, or just banging my head against the Splunk wall until it works, I'll post it here. &lt;/P&gt;</description>
    <pubDate>Wed, 07 Jun 2017 15:01:57 GMT</pubDate>
    <dc:creator>JSkier</dc:creator>
    <dc:date>2017-06-07T15:01:57Z</dc:date>
    <item>
      <title>Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338106#M40668</link>
      <description>&lt;P&gt;I'm using a heavy forwarder to take in logs from a cloud ESA appliance. The logs are sent over every 5 minutes via scp (old files are deleted every 2 hours after the modtime stops changing) and mostly work fine: line by line, timestamps all good. For some reason, though, Splunk randomly ingests some events by grabbing text from a random place in a file, giving it a timestamp, and calling it an event. &lt;/P&gt;

&lt;P&gt;For example, I most commonly get 1-3 characters, like "ile" or "id". Sometimes I get everything from the middle to the end of an event. I don't understand why it's doing this: there is no line merging (it's disabled) and I have enabled crcSalt by source. &lt;/P&gt;

&lt;P&gt;I'm using Splunk 6.4.6 on the indexers and 6.6.1 on the heavy forwarder (started with 6.4.6). If I upload the file to my dev box, it's fine; for some reason the monitor feature of Splunk is having issues. I also ingest WSA logs (but with a universal forwarder) over scp, and I don't have these issues. &lt;/P&gt;

&lt;P&gt;I have put the ESA app on the heavy forwarder and search heads. I also tried just the indexers and search heads, with only the input on the heavy forwarder. None of this changed anything. &lt;/P&gt;

&lt;P&gt;This is all virtualized Linux: Ubuntu 64-bit LTS servers. &lt;/P&gt;

&lt;P&gt;props.conf:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;CHARSET = utf-8
MAX_TIMESTAMP_LOOKAHEAD = 24
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TRUNCATE = 250000
TIME_PREFIX = \w{3}\s
TRANSFORMS-drop=esa_header_drop
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;transforms.conf&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[esa_header_drop]
REGEX=^.*Info:\s(Begin\sLogfile|Logfile\srolled\sover|End\sLogfile|Version\:\s|Time\soffset\sfrom\sUTC\:).*
DEST_KEY=queue
FORMAT=nullQueue
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;I log amp, mail, gui, auth, and http via scp. They all have the props above, configured individually. &lt;/P&gt;

&lt;P&gt;Sample events (good and bad) from one full auth file:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;le
Tue Jun  6 15:12:22 2017 Info: A publickey authentication attempt by the user ***** from 0.0.0.0 failed using an SSH connection.
Tue Jun  6 15:10:40 2017 Info: logout:10.0.0.1 user:- session:blahgfregre 
Tue Jun  6 15:09:28 2017 Info: logout:10.0.0.25 user:- blaggj4iogjio3 
Tue Jun  6 15:09:20 2017 Info: A publickey authentication attempt by the user ***** from 0.0.0.0 failed using an SSH connection.
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Below is a screenshot of a search finding one event that gets mangled and duplicated (using the suggested break-before regex). Checking the file, there is only one event on one line; no idea why Splunk is doing this. &lt;/P&gt;

&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="alt text"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/3041i8923A35B1C0F4866/image-size/large?v=v2&amp;amp;px=999" role="button" title="alt text" alt="alt text" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 06 Jun 2017 16:27:31 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338106#M40668</guid>
      <dc:creator>JSkier</dc:creator>
      <dc:date>2017-06-06T16:27:31Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338107#M40669</link>
      <description>&lt;P&gt;One thing I can think of is that the files come in slowly and Splunk breaks events too early, so you get garbage lines. Have a look at the &lt;CODE&gt;inputs.conf&lt;/CODE&gt; docs at &lt;A href="http://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf"&gt;http://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf&lt;/A&gt; for these options:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;time_before_close = &amp;lt;integer&amp;gt;
* Modification time delta required before the file monitor can close a file on
  EOF.
* Tells the system not to close files that have been updated in past &amp;lt;integer&amp;gt;
  seconds.
* Defaults to 3.

multiline_event_extra_waittime = [true|false]
* By default, the file monitor sends an event delimiter when:
  * It reaches EOF of a file it monitors and
  * The last character it reads is a newline.
* In some cases, it takes time for all lines of a multiple-line event to
  arrive.
* Set to true to delay sending an event delimiter until the time that the
  file monitor closes the file, as defined by the 'time_before_close' setting,
  to allow all event lines to arrive.
* Defaults to false.
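
# For example, a hypothetical monitor stanza applying both settings
# (the path below is a placeholder, not from the original post):
[monitor:///path/to/esa/landing]
time_before_close = 120
multiline_event_extra_waittime = true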
&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Tue, 06 Jun 2017 16:33:04 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338107#M40669</guid>
      <dc:creator>MuS</dc:creator>
      <dc:date>2017-06-06T16:33:04Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338108#M40670</link>
      <description>&lt;P&gt;I've tried time_before_close = 120, which didn't do anything.&lt;/P&gt;

&lt;P&gt;I'll try the multiline option; however, all events are one line and I have line merging disabled (as it should be). &lt;/P&gt;

&lt;P&gt;I turned time_before_close back on and added the multiline option; I'll see what happens. &lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 14:23:16 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338108#M40670</guid>
      <dc:creator>JSkier</dc:creator>
      <dc:date>2020-09-29T14:23:16Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338109#M40671</link>
      <description>&lt;P&gt;Nope, still happening. &lt;/P&gt;</description>
      <pubDate>Tue, 06 Jun 2017 17:02:53 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338109#M40671</guid>
      <dc:creator>JSkier</dc:creator>
      <dc:date>2017-06-06T17:02:53Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338110#M40672</link>
      <description>&lt;P&gt;Can you post the &lt;CODE&gt;props.conf&lt;/CODE&gt; settings for the sourcetype in use, along with some samples of good and bad events? &lt;/P&gt;</description>
      <pubDate>Tue, 06 Jun 2017 18:09:19 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338110#M40672</guid>
      <dc:creator>MuS</dc:creator>
      <dc:date>2017-06-06T18:09:19Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338111#M40673</link>
      <description>&lt;P&gt;Changed in edit above. &lt;/P&gt;</description>
      <pubDate>Tue, 06 Jun 2017 19:19:46 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338111#M40673</guid>
      <dc:creator>JSkier</dc:creator>
      <dc:date>2017-06-06T19:19:46Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338112#M40674</link>
      <description>&lt;P&gt;Why do you say there is no line breaking? The default line breaking is still happening.&lt;BR /&gt;
Can you please add the raw event for one or two bad examples? There must be something triggering the line break or truncation.&lt;/P&gt;</description>
      <pubDate>Tue, 06 Jun 2017 19:53:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338112#M40674</guid>
      <dc:creator>MuS</dc:creator>
      <dc:date>2017-06-06T19:53:59Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338113#M40675</link>
      <description>&lt;P&gt;I added one full file parsed by Splunk as an example above. The top entry is bad (there is also an "unable to parse timestamp" error in splunkd.log). &lt;/P&gt;

&lt;P&gt;When I say no line breaking, I mean all events are always one line; Splunk is doing the line breaking (why, I don't know; that's why I'm here). Support is scratching their heads on this so far, too. &lt;/P&gt;</description>
      <pubDate>Tue, 06 Jun 2017 20:28:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338113#M40675</guid>
      <dc:creator>JSkier</dc:creator>
      <dc:date>2017-06-06T20:28:33Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338114#M40676</link>
      <description>&lt;P&gt;I would try using &lt;CODE&gt;BREAK_ONLY_BEFORE&lt;/CODE&gt;.&lt;/P&gt;

&lt;P&gt;Use a RegEx matching your timestamp (write it yourself), so your events only get split when a new timestamp occurs.&lt;/P&gt;
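
&lt;P&gt;Based on the sample events posted above, the timestamp pattern would be roughly this (a sketch, untested):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;BREAK_ONLY_BEFORE = ^\w{3}\s\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s\d{4}
&lt;/CODE&gt;&lt;/PRE&gt;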

&lt;P&gt;Skalli&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 14:19:32 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338114#M40676</guid>
      <dc:creator>skalliger</dc:creator>
      <dc:date>2020-09-29T14:19:32Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338115#M40677</link>
      <description>&lt;P&gt;Thanks for the suggestion; this also doesn't work (events still break and end randomly). &lt;/P&gt;

&lt;P&gt;I'm thinking this is a bug in the monitoring feature on the heavy forwarder, but I'm open to more suggestions. &lt;/P&gt;

&lt;P&gt;As a side note, Elasticsearch and Filebeat sending the events from the same box don't have this problem at all, which seems indicative of a Splunk bug. Either way, once I find something that works through support, the community, or just banging my head against the Splunk wall until it works, I'll post it here. &lt;/P&gt;</description>
      <pubDate>Wed, 07 Jun 2017 15:01:57 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338115#M40677</guid>
      <dc:creator>JSkier</dc:creator>
      <dc:date>2017-06-07T15:01:57Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338116#M40678</link>
      <description>&lt;P&gt;I don't think it's a bug; I think it's the write buffering on your file. If you tail the file, you might even visually see the break. If the breaks happen often, fire up a tail -f and grab a coffee. &lt;/P&gt;

&lt;P&gt;You can confirm this by manually indexing the file AFTER it has been completely copied to the dir (oneshot), or just scp it to your desktop and upload the data via the Add Data wizard to confirm your props are all good.&lt;/P&gt;

&lt;P&gt;If the line breaking is all good, then you have proven that all your config is fine and you are dealing with a timing issue. &lt;/P&gt;

&lt;P&gt;An easy way around this is to use a cron job to move the files from the landing dir to a different dir Splunk is monitoring; I bet you won't see this issue. &lt;/P&gt;
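
&lt;P&gt;Something along these lines (paths are placeholders; adjust the modtime threshold to taste):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# crontab entry, runs every minute; moves files untouched for 60+ seconds
# from the scp landing dir to the dir Splunk monitors:
* * * * * find /path/to/landing -type f -mmin +1 -exec mv {} /path/to/monitored/ \;
&lt;/CODE&gt;&lt;/PRE&gt;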

&lt;P&gt;I don't know enough (or care to investigate) about how Elastic and Filebeat monitor the files, but Splunk is doing a live tail of the file, so time_before_close and the multiline option should get you close. The main issue is that monitoring a file AS it is being written can be problematic. &lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 14:23:55 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338116#M40678</guid>
      <dc:creator>mattymo</dc:creator>
      <dc:date>2020-09-29T14:23:55Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338117#M40679</link>
      <description>&lt;P&gt;When I uploaded to the dev box, it worked fine. I think you're on to something with the ESA uploading several files via scp and Splunk ingesting them in near real time. I used time_before_close = 120 and it didn't do anything. &lt;/P&gt;

&lt;P&gt;I just had a short file (about 10 lines) come in while tailing, and Splunk mangled it twice. The tail -f output showed the file as it is on the filesystem (correctly), so tailing the file doesn't reveal a problem. &lt;/P&gt;

&lt;P&gt;What is odd is that the WSA (same Cisco appliance OS) sends its logs over in a similar fashion and doesn't have this problem at all (we've had that running for years now). Not sure why this is suddenly a problem. &lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 14:20:10 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338117#M40679</guid>
      <dc:creator>JSkier</dc:creator>
      <dc:date>2020-09-29T14:20:10Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338118#M40680</link>
      <description>&lt;P&gt;Out of curiosity: if you find a broken event and then go back to the HF and find the real event, you can search for those pieces in the index and examine _indextime to see what the gap was. It would be interesting to see whether the pieces land on more than one indexer.&lt;/P&gt;
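
&lt;P&gt;For example (index and sourcetype names are placeholders):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=esa sourcetype=cisco:esa "ile"
| eval index_lag = _indextime - _time
| table _time _indextime index_lag splunk_server source
&lt;/CODE&gt;&lt;/PRE&gt;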

&lt;P&gt;In the meantime, a quick cron job should solve this for you while you continue to poke around.&lt;/P&gt;</description>
      <pubDate>Wed, 07 Jun 2017 19:51:17 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338118#M40680</guid>
      <dc:creator>mattymo</dc:creator>
      <dc:date>2017-06-07T19:51:17Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338119#M40681</link>
      <description>&lt;P&gt;Sure thing. It's happening on both indexers, and the index time is always a few seconds after the original, full event. So Splunk queues the fragment up as an event without a timestamp; this shows up in splunkd.log, and Splunk assigns it a later timestamp based on the previous event because it couldn't find one. Hopefully that answers your question. &lt;/P&gt;

&lt;P&gt;I agree; I'll work on a script to move these over to another folder after the modtime hits a certain point. Sounds like a reasonable temporary workaround until a solution is found. &lt;/P&gt;</description>
      <pubDate>Wed, 07 Jun 2017 20:55:53 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338119#M40681</guid>
      <dc:creator>JSkier</dc:creator>
      <dc:date>2017-06-07T20:55:53Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338120#M40682</link>
      <description>&lt;P&gt;I added a script that concatenates the files into their own respective log files after they haven't been modified for a minute, then deletes the originals (only if the first step succeeds). This leaves Splunk following the new log files completely, and logrotate handles cleanup. This seems to have worked for several hours. &lt;/P&gt;
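
&lt;P&gt;Roughly this shape (a sketch, not the exact script; paths and file naming are guesses):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;#!/bin/sh
# Append each scp'd chunk untouched for 60+ seconds to a per-feed
# rolling log that Splunk monitors; delete the chunk only if the
# append succeeded.
for f in $(find /path/to/landing -type f -mmin +1); do
    feed=$(basename "$f" | cut -d. -f1)   # e.g. auth, mail, gui
    if cat "$f" &amp;gt;&amp;gt; "/var/log/esa/$feed.log"; then
        rm "$f"
    fi
done
&lt;/CODE&gt;&lt;/PRE&gt;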

&lt;P&gt;The fact that Elasticsearch can monitor the files just fine, and that Splunk works fine with this buffering in place, tells me there is some kind of bug in the heavy forwarder's monitoring feature when receiving these types of logs over scp. I will keep this open and also push to have my ticket resolved appropriately.&lt;/P&gt;</description>
      <pubDate>Fri, 09 Jun 2017 18:04:36 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338120#M40682</guid>
      <dc:creator>JSkier</dc:creator>
      <dc:date>2017-06-09T18:04:36Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338121#M40683</link>
      <description>&lt;P&gt;I'm not sure this is a valid comparison at all. You have already stated that other sources have no issue. Please do follow up with support (I have a similar ticket open), but monitoring live files depends heavily on the process writing the logs. I know we love to call all unknowns "bugs" around here, but let's see if we can get some better proof.&lt;/P&gt;

&lt;P&gt;Also, are you saying that Elastic is monitoring this exact same file?&lt;/P&gt;</description>
      <pubDate>Fri, 09 Jun 2017 21:30:45 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338121#M40683</guid>
      <dc:creator>mattymo</dc:creator>
      <dc:date>2017-06-09T21:30:45Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338122#M40684</link>
      <description>&lt;P&gt;SSH has pretty bad performance, and therefore so does scp. Once the network buffer is full, it's slow, slow, slow. There are a couple of ways around this: use the high-performance networking patches for SSH available here: &lt;A href="https://www.psc.edu/hpn-ssh"&gt;PSC HPN Patches&lt;/A&gt; (I use these myself on servers that do large file transfers), or switch to an FTP-over-SSL implementation, netcat, or similar.&lt;/P&gt;

&lt;P&gt;For example, I had a customer using straight ftp to constantly transfer high volume firewall logs like this without issue.&lt;/P&gt;

&lt;P&gt;I'm not even sure the HPN patches would do what is needed in this case, because of connection pauses. If logs are not coming continuously through that pipe, then you're going to have the connection restart overhead throwing off your &lt;CODE&gt;time_before_close&lt;/CODE&gt;. So maybe add something like the following to &lt;CODE&gt;/etc/ssh/sshd_config&lt;/CODE&gt;:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;ClientAliveInterval 120
ClientAliveCountMax 720
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;I presume there's some sort of security policy to prevent you from just using syslog here?&lt;/P&gt;</description>
      <pubDate>Fri, 09 Jun 2017 22:07:03 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338122#M40684</guid>
      <dc:creator>nnmiller</dc:creator>
      <dc:date>2017-06-09T22:07:03Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338123#M40685</link>
      <description>&lt;P&gt;Hey, side note: are you using forceTimebasedAutoLB on this HF?&lt;/P&gt;

&lt;P&gt;My client was, and I thought I had proved it was unrelated, but I am going to revisit it and turn it off. Technically it's useless on an HF anyway, as I understand it...&lt;/P&gt;</description>
      <pubDate>Sat, 10 Jun 2017 01:37:58 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338123#M40685</guid>
      <dc:creator>mattymo</dc:creator>
      <dc:date>2017-06-10T01:37:58Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338124#M40686</link>
      <description>&lt;P&gt;I do have "autoLB" set in outputs.conf, but it seems 6.6.1 doesn't really support that anymore... Is that what you're referring to? &lt;/P&gt;

&lt;P&gt;We have two indexers; when PS set this up, it was before 6.4 came along with better support. &lt;/P&gt;</description>
      <pubDate>Sat, 10 Jun 2017 01:49:53 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338124#M40686</guid>
      <dc:creator>JSkier</dc:creator>
      <dc:date>2017-06-10T01:49:53Z</dc:date>
    </item>
    <item>
      <title>Re: Sending logs over scp to heavy forwarder, why does splunk mangle, improperly break some of the events?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338125#M40687</link>
      <description>&lt;P&gt;Thanks, I hadn't thought of the HPN patches for Splunk, but I've used them before for other large transfers. I'll look at giving that, and the configuration suggestions you mentioned, a shot for sure. &lt;/P&gt;

&lt;P&gt;Yeah, it's a cloud appliance, so no dice sending that stuff in the clear. I use TLS for my syslog-ng receiver when I can, but Cisco doesn't support it for sending from their appliances. It's pretty well wrapped up as an appliance; otherwise I'd say TLS netcat would be worth a go, too. &lt;/P&gt;

&lt;P&gt;I understand the SSH performance side, but again, Elasticsearch's Beats handling it without issue really points me back to Splunk as the problem. On the other hand, the WSA is on-prem (also using scp, because of syslog message length limitations) and it doesn't have these issues. So it's possible Splunk just doesn't like how long it takes to fully copy the file via scp. &lt;/P&gt;</description>
      <pubDate>Sat, 10 Jun 2017 01:54:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Sending-logs-over-scp-to-heavy-forwarder-why-does-splunk-mangle/m-p/338125#M40687</guid>
      <dc:creator>JSkier</dc:creator>
      <dc:date>2017-06-10T01:54:50Z</dc:date>
    </item>
  </channel>
</rss>

