<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders? in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444370#M77355</link>
    <description>&lt;P&gt;We see this problem all the time, and it is usually due to there being far too many files co-resident with the files that you are monitoring. This typically happens because there is no housekeeping, or a very lax policy for deleting the files as they rotate. Yes, even if you are not monitoring the rotated files, they will eventually slow the forwarder to a crawl. It usually starts when you have hundreds of files, and you are crippled by the time you reach thousands. If you cannot delete the files that are old and done, then you can create soft links to the fresh files in another directory and monitor that directory instead. Let me know if you need details on how to do that.&lt;/P&gt;</description>
    <pubDate>Thu, 02 May 2019 03:41:21 GMT</pubDate>
    <dc:creator>woodcock</dc:creator>
    <dc:date>2019-05-02T03:41:21Z</dc:date>
    <item>
      <title>Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444361#M77346</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;
Looking for advice on troubleshooting the cause of an issue we are experiencing, and on how to solve it.&lt;/P&gt;

&lt;P&gt;We have a few Splunk UFs monitoring a large number of big files, forwarding to our 4 load-balanced Heavy Forwarders.&lt;BR /&gt;
The setup was working until last week, when files started to ingest with a big delay, 3-6 hrs depending on size. Previously ingestion took minutes.&lt;/P&gt;

&lt;P&gt;To the best of our knowledge, we had no network, OS, or Splunk-related changes on the day we started to experience the issue.&lt;/P&gt;

&lt;P&gt;We tried:&lt;BR /&gt;
1. Restarting the Splunk process on the Splunk UF servers&lt;BR /&gt;
2. Rebooting the servers running the Splunk UF&lt;BR /&gt;
3. Per Splunk support, changing server.conf on the Splunk UF servers&lt;BR /&gt;
by adding parallelIngestionPipelines and queue sizes:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# parallelIngestionPipelines belongs under the [general] stanza
[general]
parallelIngestionPipelines = 2
[queue]
maxSize = 1GB
[queue=aq]
maxSize = 20MB
[queue=aeq]
maxSize = 20MB
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;4. Per Splunk support, modifying limits.conf&lt;BR /&gt;
by adding max_fd (thruput was already set to unlimited):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[thruput]
maxKBps = 0
[inputproc]
max_fd = 200
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;None of the above fixed the issue.&lt;BR /&gt;
Maybe you have experienced a similar issue; it would be great to know how it was solved.&lt;BR /&gt;
Any advice will be appreciated!&lt;/P&gt;</description>
      <pubDate>Wed, 01 May 2019 12:10:15 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444361#M77346</guid>
      <dc:creator>mlevsh</dc:creator>
      <dc:date>2019-05-01T12:10:15Z</dc:date>
    </item>
    <item>
      <title>Re: Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444362#M77347</link>
      <description>&lt;P&gt;Since you have a support case with Splunk, it would be good to take their advice, as they can review your config and server setup.&lt;/P&gt;

&lt;P&gt;Having said that, large flat files go through the batch processor/pipeline, and it does take a while to see them at the indexer/search head. Any chance of creating smaller files, maybe at more frequent intervals, as opposed to one or two very large files a day? Smaller files get parsed and processed quickly, and you should still be able to achieve the same results.&lt;/P&gt;</description>
      <pubDate>Wed, 01 May 2019 12:20:19 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444362#M77347</guid>
      <dc:creator>lakshman239</dc:creator>
      <dc:date>2019-05-01T12:20:19Z</dc:date>
    </item>
    <item>
      <title>Re: Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444363#M77348</link>
      <description>&lt;P&gt;@lakshman239, to the best of my knowledge we cannot create smaller files, but I will verify that.&lt;/P&gt;</description>
      <pubDate>Wed, 01 May 2019 12:28:49 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444363#M77348</guid>
      <dc:creator>mlevsh</dc:creator>
      <dc:date>2019-05-01T12:28:49Z</dc:date>
    </item>
    <item>
      <title>Re: Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444364#M77349</link>
      <description>&lt;P&gt;When you added parallelIngestionPipelines to server.conf on the forwarders, did you also update the indexers? The default value is 1, so increasing the value on the forwarders without increasing it on the indexers will gain you no performance increase.&lt;/P&gt;
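For reference, the setting lives in the [general] stanza of server.conf on each instance; a minimal sketch, with the value 2 taken from the question above:

```
# server.conf -- set on forwarders AND indexers for it to have any effect
[general]
parallelIngestionPipelines = 2
```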

&lt;P&gt;Also, have you checked the ulimit settings for the Splunk user and/or daemon? If not, you may want to check those, especially the open files limit. The OS default is generally 1024, which is way too low for Splunk.&lt;/P&gt;</description>
      <pubDate>Wed, 01 May 2019 14:05:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444364#M77349</guid>
      <dc:creator>codebuilder</dc:creator>
      <dc:date>2019-05-01T14:05:20Z</dc:date>
    </item>
    <item>
      <title>Re: Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444365#M77350</link>
      <description>&lt;P&gt;@codebuilder, &lt;/P&gt;

&lt;P&gt;1) All our indexers are on Splunk Cloud, so we don't have access to them. We have to check what the parallelIngestionPipelines value is with Cloud Support.&lt;/P&gt;

&lt;P&gt;2) Regarding the ulimit value for open files: during the first few days of the issue, "ulimit -n" showed 64000 on the Splunk UF server.&lt;BR /&gt;
At some point we rebooted it, and for some reason it went down to 1024 after the reboot.&lt;BR /&gt;
Per our Unix sysadmin, it is set to 64000 at the system level in /etc/security/limits.conf for our splunk user.&lt;/P&gt;</description>
      <pubDate>Wed, 01 May 2019 16:17:48 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444365#M77350</guid>
      <dc:creator>mlevsh</dc:creator>
      <dc:date>2019-05-01T16:17:48Z</dc:date>
    </item>
    <item>
      <title>Re: Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444366#M77351</link>
      <description>&lt;P&gt;I suspected that may be the case. Your ulimit configuration was not honored and reverted to the OS defaults upon reboot (expected behavior).&lt;/P&gt;

&lt;P&gt;Depending on your OS flavor and version, there are a number of methods to resolve this.&lt;BR /&gt;
You can create a Splunk-specific config by creating a file under /etc/security/limits.d/ and naming it with a number higher than what exists there now, e.g. 90-splunk.conf.&lt;/P&gt;
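A minimal sketch of such a drop-in file (the filename and the 64000 value are assumptions; use whatever your site standard is):

```
# /etc/security/limits.d/90-splunk.conf -- hypothetical filename
# Raise the open-files limits for the splunk user only.
splunk soft nofile 64000
splunk hard nofile 64000
```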

&lt;P&gt;Or you can add the limits directly to the start function in the init.d script, like so:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;cat /etc/init.d/splunk

splunk_start() {
  ulimit -Sn 64000
  ulimit -Hn 100000
  ulimit -Su 8192
  ulimit -Hu 16000
  echo Starting Splunk...
  "/opt/splunk/bin/splunk" start --no-prompt --answer-yes
  RETVAL=$?
  [ $RETVAL -eq 0 ] &amp;amp;&amp;amp; touch /var/lock/subsys/splunk
}
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;In either case you'll need to cycle Splunk in order to pick up the "new" limits.&lt;BR /&gt;
You can also set them via systemd, but depending on your version of Splunk this can be a pain. I prefer to just drop them into the init.d script; it has proven the most reliable.&lt;/P&gt;</description>
      <pubDate>Wed, 01 May 2019 16:29:06 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444366#M77351</guid>
      <dc:creator>codebuilder</dc:creator>
      <dc:date>2019-05-01T16:29:06Z</dc:date>
    </item>
    <item>
      <title>Re: Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444367#M77352</link>
      <description>&lt;P&gt;There are a couple of methods to verify the ulimits took effect.&lt;/P&gt;

&lt;P&gt;Check ulimits as the splunk user:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;su - splunk
ulimit -a
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Check via PID:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;ps -ef | grep -i splunk   # copy any of the PIDs from the output
cat /proc/splunk_pid/limits   # substitute the PID you copied
&lt;/CODE&gt;&lt;/PRE&gt;
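If there are several splunkd processes, the two steps can be combined; in this sketch, the fallback to the current shell's PID is illustrative only (not part of the original advice), so the commands can be tried on a box without Splunk installed:

```shell
# Oldest matching splunkd PID, or the current shell's PID if none is running.
pid=$(pgrep -o splunkd || echo $$)
# Show only the open-files row of that process's effective limits.
grep 'open files' /proc/${pid}/limits
```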

&lt;P&gt;Worth noting for you or your admin: setting ulimits via /etc/security/limits.conf is generally considered deprecated on RHEL/CentOS 7.x (or any systemd-based OS).&lt;/P&gt;

&lt;P&gt;The preferred method is via conf files located at /etc/security/limits.d/&lt;BR /&gt;
When the OS boots, /etc/security/limits.conf is read first, then each file under /etc/security/limits.d/ is read sequentially, and each can override any previous file (with 99 being the highest).&lt;/P&gt;

&lt;P&gt;Meaning, any limits set in /etc/security/limits.d/99-mylimits.conf will override all previous settings. I suspect something similar happened in your case.&lt;/P&gt;</description>
      <pubDate>Wed, 01 May 2019 16:33:31 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444367#M77352</guid>
      <dc:creator>codebuilder</dc:creator>
      <dc:date>2019-05-01T16:33:31Z</dc:date>
    </item>
    <item>
      <title>Re: Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444368#M77353</link>
      <description>&lt;P&gt;@codebuilder, thank you for the detailed reply!&lt;/P&gt;

&lt;P&gt;"ulimit -a" from the command line as the splunk user shows the correct 64000 value, but splunkd.log on reboot shows that Splunk determined ulimit -n was set to the default 1024.&lt;/P&gt;

&lt;P&gt;We did use the "/etc/init.d/splunk" method previously.&lt;/P&gt;

&lt;P&gt;What bothers me is that the server was rebooted before as well, and the ulimit -n value was still 64000 according to splunkd.log.&lt;BR /&gt;
So why the sudden switch to the 1024 default this time?&lt;/P&gt;</description>
      <pubDate>Wed, 01 May 2019 18:48:52 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444368#M77353</guid>
      <dc:creator>mlevsh</dc:creator>
      <dc:date>2019-05-01T18:48:52Z</dc:date>
    </item>
    <item>
      <title>Re: Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444369#M77354</link>
      <description>&lt;P&gt;Glad to help, and I fought the same issue myself previously, and only on reboots.&lt;/P&gt;

&lt;P&gt;The underlying problem is that Splunk is running under init.d on a systemd system, and limits are applied differently under systemd than under the older init.d.&lt;/P&gt;

&lt;P&gt;The sequence in which limits are read and applied by the kernel and the process is out of sync on reboot, so the process falls back to the OS defaults.&lt;/P&gt;

&lt;P&gt;You can solve it properly by creating a systemd unit file for Splunk, as it should technically be configured, but placing the limits in the startup script solved the issue for me.&lt;/P&gt;
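For completeness, a hypothetical minimal sketch of such a unit (the paths, user name, and limit values are assumptions; the lines relevant to this thread are the Limit* directives, and newer Splunk versions can generate a proper unit for you via boot-start):

```
# /etc/systemd/system/splunk.service -- hypothetical sketch
[Unit]
Description=Splunk Enterprise
After=network.target

[Service]
Type=forking
User=splunk
ExecStart=/opt/splunk/bin/splunk start --no-prompt --answer-yes
LimitNOFILE=64000
LimitNPROC=16000

[Install]
WantedBy=multi-user.target
```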

&lt;P&gt;Also, I would consider the "cat /proc/splunk_pid/limits" method the definitive source of truth for which limits have been applied to the process. Hope this helps.&lt;/P&gt;</description>
      <pubDate>Wed, 01 May 2019 19:27:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444369#M77354</guid>
      <dc:creator>codebuilder</dc:creator>
      <dc:date>2019-05-01T19:27:37Z</dc:date>
    </item>
    <item>
      <title>Re: Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444370#M77355</link>
      <description>&lt;P&gt;We see this problem all the time, and it is usually due to there being far too many files co-resident with the files that you are monitoring. This typically happens because there is no housekeeping, or a very lax policy for deleting the files as they rotate. Yes, even if you are not monitoring the rotated files, they will eventually slow the forwarder to a crawl. It usually starts when you have hundreds of files, and you are crippled by the time you reach thousands. If you cannot delete the files that are old and done, then you can create soft links to the fresh files in another directory and monitor that directory instead. Let me know if you need details on how to do that.&lt;/P&gt;</description>
      <pubDate>Thu, 02 May 2019 03:41:21 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444370#M77355</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2019-05-02T03:41:21Z</dc:date>
    </item>
    <item>
      <title>Re: Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444371#M77356</link>
      <description>&lt;P&gt;@codebuilder, @woodcock, @lakshman239&lt;BR /&gt;
Just an update on how the issue was solved in our case.&lt;BR /&gt;
After ruling out any Splunk issue/configuration as the cause, our network engineer made some configuration changes to the replication between different data centers, in addition to changing the WAN routing preference.&lt;/P&gt;</description>
      <pubDate>Tue, 14 May 2019 14:11:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444371#M77356</guid>
      <dc:creator>mlevsh</dc:creator>
      <dc:date>2019-05-14T14:11:20Z</dc:date>
    </item>
    <item>
      <title>Re: Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444372#M77357</link>
      <description>&lt;P&gt;Ah, so slow network.  Yes, that will kill things.  Please do click &lt;CODE&gt;Accept&lt;/CODE&gt; on your answer here to close the question.&lt;/P&gt;</description>
      <pubDate>Tue, 14 May 2019 14:41:29 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/444372#M77357</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2019-05-14T14:41:29Z</dc:date>
    </item>
    <item>
      <title>Re: Why is there significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/669131#M112184</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/1406"&gt;@woodcock&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;Please tell me how to do this configuration.&lt;/P&gt;&lt;P&gt;How long is the log kept, and can we set how long it is kept?&lt;/P&gt;</description>
      <pubDate>Mon, 20 Nov 2023 03:06:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-there-significant-sudden-slowness-in-ingesting-between/m-p/669131#M112184</guid>
      <dc:creator>aldi_mukti</dc:creator>
      <dc:date>2023-11-20T03:06:41Z</dc:date>
    </item>
  </channel>
</rss>

