<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue. in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224413#M188288</link>
<description>&lt;P&gt;I was alluding to checking how many files your inputs are monitoring.&lt;/P&gt;

&lt;P&gt;If you are monitoring entire directories, for example /home/stats/servers/*, you can check under Settings &amp;gt; Inputs &amp;gt; Files and directories; Splunk will show you how many files are being monitored.&lt;/P&gt;

&lt;P&gt;Obviously this depends on what kind of inputs you are using.&lt;/P&gt;</description>
    <pubDate>Sat, 20 Aug 2016 15:41:27 GMT</pubDate>
    <dc:creator>mattymo</dc:creator>
    <dc:date>2016-08-20T15:41:27Z</dc:date>
    <item>
      <title>Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224402#M188277</link>
      <description>&lt;P&gt;Currently we have two heavy forwarders configured to forward data to the indexer. I just wanted to know which files are being captured from both servers, using the query below. We are using &lt;STRONG&gt;Splunk HF version 6.4.0&lt;/STRONG&gt;.&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;host =splunk01* sourcetype=splunkd index=_internal "*syslog*"
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;But I am getting "no results found"; when I checked splunkd.log I could see these errors:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;08-11-2016 07:06:58.118 -0400 INFO  HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_x.x.x.x_8089_splunk01.xxxx.com_splunk01.xxx.com_7xxxx1-XXXXX-XXX-XXX-XXXX
08-11-2016 07:06:58.128 -0400 INFO  HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_x.x.x.x_8089_splunk01.xxxx.com_splunk01.xxx.com_7xxxx1-XXXXX-XXX-XXX-XXXX
08-11-2016 07:06:58.156 -0400 INFO  HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_x.x.x.x_8089_splunk01.xxxx.com_splunk01.xxx.com_7xxxx1-XXXXX-XXX-XXX-XXXX
08-11-2016 07:07:45.496 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-11-2016 07:07:48.220 -0400 INFO  TcpOutputProc - Closing stream for idx=X.X.X.X:9997
08-11-2016 07:07:48.220 -0400 INFO  TcpOutputProc - Connected to idx=X.X.X.X:9997
08-11-2016 07:08:17.406 -0400 INFO  TcpOutputProc - Closing stream for idx=X.X.X.X:9997
08-11-2016 07:08:17.406 -0400 INFO  TcpOutputProc - Connected to idx=X.X.X.X:9997
08-11-2016 07:08:47.566 -0400 INFO  TcpOutputProc - Closing stream for idx=X.X.X.X:9997
08-11-2016 07:08:47.566 -0400 INFO  TcpOutputProc - Connected to idx=X.X.X.X:9997
08-11-2016 07:08:52.863 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-nessus/bin/nessus2splunk.py" usage: nessus2splunk.py [-h] [-s SRCDIR] [-t TGTDIR]
08-11-2016 07:08:52.863 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-nessus/bin/nessus2splunk.py" nessus2splunk.py: error: argument -s/--srcdir: Invalid path specified ($SPLUNK_HOME may not be set).
08-11-2016 07:09:17.565 -0400 INFO  TcpOutputProc - Closing stream for idx=X.X.X.45:9997
08-11-2016 07:09:17.565 -0400 INFO  TcpOutputProc - Connected to idx=X.X.X.X:9997
08-11-2016 07:09:47.859 -0400 INFO  TcpOutputProc - Closing stream for idx=X.X.X.X:9997
08-11-2016 07:09:47.958 -0400 INFO  TcpOutputProc - Connected to idx=X.X.X.X:9997
08-11-2016 07:10:18.029 -0400 INFO  TcpOutputProc - Closing stream for idx=X.X.X.X:9997
08-11-2016 07:10:18.029 -0400 INFO  TcpOutputProc - Connected to idx=X.X.X.X:9997
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;But after restarting the Splunk service I am able to get output using the above query; it lasts for a few minutes, and then again there is no data for &lt;CODE&gt;index=_internal&lt;/CODE&gt;.&lt;/P&gt;

&lt;P&gt;Kindly guide me on how to fix this issue.&lt;/P&gt;</description>
      <pubDate>Fri, 12 Aug 2016 18:53:04 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224402#M188277</guid>
      <dc:creator>Hemnaath</dc:creator>
      <dc:date>2016-08-12T18:53:04Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224403#M188278</link>
      <description>&lt;P&gt;Hey Hemnaath!&lt;/P&gt;

&lt;P&gt;You should look at increasing the ulimits on your server as described in the system requirements.&lt;/P&gt;

&lt;P&gt;See 'Considerations regarding file descriptor limits (FDs) on *nix systems' under supported OSes.&lt;/P&gt;

&lt;P&gt;&lt;A href="http://docs.splunk.com/Documentation/Splunk/6.4.2/Installation/Systemrequirements"&gt;http://docs.splunk.com/Documentation/Splunk/6.4.2/Installation/Systemrequirements&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;Also, you may as well ensure you have disabled THP; that is another best practice.&lt;/P&gt;

&lt;P&gt;The exact change required will differ depending on your system, but a quick Google search should lead you to the answer.&lt;/P&gt;

&lt;P&gt;Here are some good discussions of these items:&lt;/P&gt;

&lt;P&gt;&lt;A href="https://answers.splunk.com/answers/13313/how-to-tune-ulimit-on-my-server.html"&gt;https://answers.splunk.com/answers/13313/how-to-tune-ulimit-on-my-server.html&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;&lt;A href="https://answers.splunk.com/answers/188875/how-do-i-disable-transparent-huge-pages-thp-and-co.html"&gt;https://answers.splunk.com/answers/188875/how-do-i-disable-transparent-huge-pages-thp-and-co.html&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 12 Aug 2016 20:08:27 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224403#M188278</guid>
      <dc:creator>mattymo</dc:creator>
      <dc:date>2016-08-12T20:08:27Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224404#M188279</link>
      <description>&lt;P&gt;Thanks mmodestino for guiding us; below are the parameter values that are set.&lt;/P&gt;

&lt;P&gt;System details:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;Splunk version 6.4.0 (HF instance) 
OS - RedHat 6.6
Memory - 6GB
CPU - 3
VMware 
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;free -m&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;                         total     used    free    shared    buffers    cached
                Mem:     15947     8958    6988         0        732      3124
  -/+ buffers/cache:      5101    10846
               Swap:      3323       52    3271
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;I have changed the values under /etc/security/limits.conf and restarted the Splunk service, but the changes still did not take effect. Should I restart the server?&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4
root             soft   nofile        1024000
root             hard   nofile        1024000
root             soft   nproc         180000
root             hard   nproc         180000
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;ulimit -u&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;18000
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;ulimit -n&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;102400
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;ulimit -f&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;unlimited
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;I have also  disabled THP.&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;cat /sys/kernel/mm/redhat_transparent_hugepage/defrag
always madvise [never]
cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
always madvise [never]
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;But I can still see the same INFO messages in splunkd.log. Kindly guide me on how to fix this issue.&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;08-13-2016 01:27:04.217 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:27:04.354 -0400 INFO  TcpOutputProc - Closing stream for idx=x.x.x.x:9997
08-13-2016 01:27:04.354 -0400 INFO  TcpOutputProc - Connected to idx=x.x.x.x:9997
08-13-2016 01:27:09.990 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:27:13.983 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:27:17.972 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:27:21.381 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:27:24.766 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:27:28.250 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:27:33.473 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:27:34.355 -0400 INFO  TcpOutputProc - Closing stream for idx=x.x.x.x:9997
08-13-2016 01:27:34.355 -0400 INFO  TcpOutputProc - Connected to idx=x.x.x.x:9997
08-13-2016 01:27:38.629 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:27:42.389 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:27:46.801 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:27:51.482 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:27:54.734 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:27:58.412 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:28:02.596 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:28:04.353 -0400 INFO  TcpOutputProc - Closing stream for idx=x.x.x.x:9997
08-13-2016 01:28:04.353 -0400 INFO  TcpOutputProc - Connected to idx=x.x.x.x:9997
08-13-2016 01:28:06.651 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:28:10.857 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:28:15.696 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:28:19.354 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:28:23.987 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:28:28.085 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
08-13-2016 01:28:32.075 -0400 INFO  TailReader - File descriptor cache is full (100), trimming...
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Sat, 13 Aug 2016 05:30:36 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224404#M188279</guid>
      <dc:creator>Hemnaath</dc:creator>
      <dc:date>2016-08-13T05:30:36Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224405#M188280</link>
      <description>&lt;P&gt;If you run ulimit -n (or just ulimit -a) as the user you run Splunk as (normally the username splunk), has the number of open files reached an acceptable limit?&lt;/P&gt;

&lt;P&gt;If not, then log in/out of the server &lt;EM&gt;or&lt;/EM&gt; confirm your limits.conf has been set correctly for the user running Splunk (some of the mentioned settings appear to be for the root user).&lt;BR /&gt;
For a heavy forwarder, I would consider 8192 or above to be a minimum number of file descriptors on Linux; I have the indexers set much higher.&lt;/P&gt;</description>
      <pubDate>Sun, 14 Aug 2016 06:13:01 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224405#M188280</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2016-08-14T06:13:01Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224406#M188281</link>
      <description>&lt;P&gt;Are you running Splunk as root?&lt;/P&gt;

&lt;P&gt;Restarting the server is a good idea to ensure you have configured persistent changes. Some OSes require init scripts to ensure the changes remain in place at startup.&lt;/P&gt;

&lt;P&gt;Also, you can tail splunkd.log to confirm the changes are seen by Splunk when it starts up. &lt;/P&gt;

&lt;P&gt;[ PROTIP : index=_internal source=*splunkd.log ulimit ] &lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;08-15-2016 05:00:53.832 -0400 WARN  main - The hard limit of 'processes/threads' is lower than the recommended value. The hard limit is: 1064. The recommended value is: 16000.
08-15-2016 05:00:53.832 -0400 WARN  main - The system fd limit (OPEN_MAX) is lower than the recommended value. The system limit (OPEN_MAX) is '10240' The recommended value is '64000'.
08-15-2016 05:00:53.832 -0400 INFO  ulimit - Limit: virtual address space size: unlimited
08-15-2016 05:00:53.832 -0400 INFO  ulimit - Limit: data segment size: unlimited
08-15-2016 05:00:53.832 -0400 INFO  ulimit - Limit: resident memory size: unlimited
08-15-2016 05:00:53.832 -0400 INFO  ulimit - Limit: stack size: 8388608 bytes [hard maximum: 67104768 bytes]
08-15-2016 05:00:53.832 -0400 INFO  ulimit - Limit: core file size: 0 bytes [hard maximum: unlimited]
08-15-2016 05:00:53.832 -0400 WARN  ulimit - Core file generation disabled
08-15-2016 05:00:53.832 -0400 INFO  ulimit - Limit: data file size: unlimited
08-15-2016 05:00:53.832 -0400 INFO  ulimit - Limit: open files: 10240 files [hard maximum: unlimited]
08-15-2016 05:00:53.832 -0400 INFO  ulimit - Limit: user processes: 709 processes
08-15-2016 05:00:53.832 -0400 INFO  ulimit - Limit: cpu time: unlimited
08-15-2016 05:00:53.833 -0400 INFO  loader - Splunkd starting (build debde650d26e).
&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Mon, 15 Aug 2016 08:42:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224406#M188281</guid>
      <dc:creator>mattymo</dc:creator>
      <dc:date>2016-08-15T08:42:20Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224407#M188282</link>
      <description>&lt;P&gt;Hi mmodestino, thanks for putting effort into this issue.&lt;/P&gt;

&lt;P&gt;Yes, we are using the &lt;STRONG&gt;root user&lt;/STRONG&gt; to run Splunk.&lt;/P&gt;

&lt;P&gt;We have &lt;STRONG&gt;restarted only the Splunk service, not the server&lt;/STRONG&gt;. Should we restart the server for the changes to take effect? As it is a production environment, we need to get approval before executing init scripts or restarting the server.&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;grep ulimit splunkd.log.&lt;/STRONG&gt;*&lt;BR /&gt;
splunkd.log.1:08-12-2016 14:39:32.564 -0400 INFO  ulimit - Limit: virtual address space size: unlimited&lt;BR /&gt;
splunkd.log.1:08-12-2016 14:39:32.564 -0400 INFO  ulimit - Limit: data segment size: unlimited&lt;BR /&gt;
splunkd.log.1:08-12-2016 14:39:32.564 -0400 INFO  ulimit - Limit: resident memory size: unlimited&lt;BR /&gt;
splunkd.log.1:08-12-2016 14:39:32.564 -0400 INFO  ulimit - Limit: stack size: 10485760 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.1:08-12-2016 14:39:32.564 -0400 INFO  ulimit - Limit: core file size: 0 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.1:08-12-2016 14:39:32.564 -0400 WARN  ulimit - Core file generation disabled&lt;BR /&gt;
splunkd.log.1:08-12-2016 14:39:32.564 -0400 INFO  ulimit - Limit: data file size: unlimited&lt;BR /&gt;
splunkd.log.1:08-12-2016 14:39:32.564 -0400 INFO  ulimit - Limit: open files: 102400 files&lt;BR /&gt;
splunkd.log.1:08-12-2016 14:39:32.564 -0400 INFO  ulimit - Limit: user processes: 18000 processes&lt;BR /&gt;
splunkd.log.1:08-12-2016 14:39:32.564 -0400 INFO  ulimit - Limit: cpu time: unlimited&lt;BR /&gt;
splunkd.log.1:08-12-2016 14:39:32.564 -0400 INFO  ulimit - Linux transparent hugetables support, enabled="never" defrag="never"&lt;BR /&gt;
splunkd.log.2:08-12-2016 08:44:52.564 -0400 INFO  ulimit - Limit: virtual address space size: unlimited&lt;BR /&gt;
splunkd.log.2:08-12-2016 08:44:52.564 -0400 INFO  ulimit - Limit: data segment size: unlimited&lt;BR /&gt;
splunkd.log.2:08-12-2016 08:44:52.564 -0400 INFO  ulimit - Limit: resident memory size: unlimited&lt;BR /&gt;
splunkd.log.2:08-12-2016 08:44:52.564 -0400 INFO  ulimit - Limit: stack size: 10485760 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.2:08-12-2016 08:44:52.564 -0400 INFO  ulimit - Limit: core file size: 0 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.2:08-12-2016 08:44:52.564 -0400 WARN  ulimit - Core file generation disabled&lt;BR /&gt;
splunkd.log.2:08-12-2016 08:44:52.564 -0400 INFO  ulimit - Limit: data file size: unlimited&lt;BR /&gt;
splunkd.log.2:08-12-2016 08:44:52.564 -0400 INFO  ulimit - Limit: open files: 102400 files&lt;BR /&gt;
splunkd.log.2:08-12-2016 08:44:52.564 -0400 INFO  ulimit - Limit: user processes: 18000 processes&lt;BR /&gt;
splunkd.log.2:08-12-2016 08:44:52.564 -0400 INFO  ulimit - Limit: cpu time: unlimited&lt;BR /&gt;
splunkd.log.2:08-12-2016 08:44:52.564 -0400 INFO  ulimit - Linux transparent hugetables support, enabled="never" defrag="never"&lt;BR /&gt;
splunkd.log.3:08-11-2016 18:47:10.873 -0400 INFO  ulimit - Limit: virtual address space size: unlimited&lt;BR /&gt;
splunkd.log.3:08-11-2016 18:47:10.873 -0400 INFO  ulimit - Limit: data segment size: unlimited&lt;BR /&gt;
splunkd.log.3:08-11-2016 18:47:10.873 -0400 INFO  ulimit - Limit: resident memory size: unlimited&lt;BR /&gt;
splunkd.log.3:08-11-2016 18:47:10.873 -0400 INFO  ulimit - Limit: stack size: 10485760 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.3:08-11-2016 18:47:10.873 -0400 INFO  ulimit - Limit: core file size: 0 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.3:08-11-2016 18:47:10.873 -0400 WARN  ulimit - Core file generation disabled&lt;BR /&gt;
splunkd.log.3:08-11-2016 18:47:10.873 -0400 INFO  ulimit - Limit: data file size: unlimited&lt;BR /&gt;
splunkd.log.3:08-11-2016 18:47:10.873 -0400 INFO  ulimit - Limit: open files: 4096 files&lt;BR /&gt;
splunkd.log.3:08-11-2016 18:47:10.873 -0400 INFO  ulimit - Limit: user processes: 63681 processes&lt;BR /&gt;
splunkd.log.3:08-11-2016 18:47:10.873 -0400 INFO  ulimit - Limit: cpu time: unlimited&lt;BR /&gt;
splunkd.log.3:08-11-2016 18:47:10.873 -0400 INFO  ulimit - Linux transparent hugetables support, enabled="always" defrag="always"&lt;BR /&gt;
splunkd.log.3:08-11-2016 23:22:48.038 -0400 INFO  ulimit - Limit: virtual address space size: unlimited&lt;BR /&gt;
splunkd.log.3:08-11-2016 23:22:48.038 -0400 INFO  ulimit - Limit: data segment size: unlimited&lt;BR /&gt;
splunkd.log.3:08-11-2016 23:22:48.038 -0400 INFO  ulimit - Limit: resident memory size: unlimited&lt;BR /&gt;
splunkd.log.3:08-11-2016 23:22:48.038 -0400 INFO  ulimit - Limit: stack size: 10485760 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.3:08-11-2016 23:22:48.038 -0400 INFO  ulimit - Limit: core file size: 0 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.3:08-11-2016 23:22:48.038 -0400 WARN  ulimit - Core file generation disabled&lt;BR /&gt;
splunkd.log.3:08-11-2016 23:22:48.038 -0400 INFO  ulimit - Limit: data file size: unlimited&lt;BR /&gt;
splunkd.log.3:08-11-2016 23:22:48.038 -0400 INFO  ulimit - Limit: open files: 4096 files&lt;BR /&gt;
splunkd.log.3:08-11-2016 23:22:48.038 -0400 INFO  ulimit - Limit: user processes: 63681 processes&lt;BR /&gt;
splunkd.log.3:08-11-2016 23:22:48.038 -0400 INFO  ulimit - Limit: cpu time: unlimited&lt;BR /&gt;
splunkd.log.3:08-11-2016 23:22:48.038 -0400 INFO  ulimit - Linux transparent hugetables support, enabled="never" defrag="never"&lt;BR /&gt;
splunkd.log.4:08-11-2016 13:12:52.508 -0400 INFO  ulimit - Limit: virtual address space size: unlimited&lt;BR /&gt;
splunkd.log.4:08-11-2016 13:12:52.508 -0400 INFO  ulimit - Limit: data segment size: unlimited&lt;BR /&gt;
splunkd.log.4:08-11-2016 13:12:52.508 -0400 INFO  ulimit - Limit: resident memory size: unlimited&lt;BR /&gt;
splunkd.log.4:08-11-2016 13:12:52.508 -0400 INFO  ulimit - Limit: stack size: 10485760 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.4:08-11-2016 13:12:52.508 -0400 INFO  ulimit - Limit: core file size: 0 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.4:08-11-2016 13:12:52.508 -0400 WARN  ulimit - Core file generation disabled&lt;BR /&gt;
splunkd.log.4:08-11-2016 13:12:52.508 -0400 INFO  ulimit - Limit: data file size: unlimited&lt;BR /&gt;
splunkd.log.4:08-11-2016 13:12:52.508 -0400 INFO  ulimit - Limit: open files: 4096 files&lt;BR /&gt;
splunkd.log.4:08-11-2016 13:12:52.508 -0400 INFO  ulimit - Limit: user processes: 63681 processes&lt;BR /&gt;
splunkd.log.4:08-11-2016 13:12:52.508 -0400 INFO  ulimit - Limit: cpu time: unlimited&lt;BR /&gt;
splunkd.log.4:08-11-2016 13:12:52.508 -0400 INFO  ulimit - Linux transparent hugetables support, enabled="always" defrag="always"&lt;BR /&gt;
splunkd.log.5:08-10-2016 23:36:48.825 -0400 INFO  ulimit - Limit: virtual address space size: unlimited&lt;BR /&gt;
splunkd.log.5:08-10-2016 23:36:48.825 -0400 INFO  ulimit - Limit: data segment size: unlimited&lt;BR /&gt;
splunkd.log.5:08-10-2016 23:36:48.825 -0400 INFO  ulimit - Limit: resident memory size: unlimited&lt;BR /&gt;
splunkd.log.5:08-10-2016 23:36:48.825 -0400 INFO  ulimit - Limit: stack size: 10485760 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.5:08-10-2016 23:36:48.825 -0400 INFO  ulimit - Limit: core file size: 0 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.5:08-10-2016 23:36:48.825 -0400 WARN  ulimit - Core file generation disabled&lt;BR /&gt;
splunkd.log.5:08-10-2016 23:36:48.825 -0400 INFO  ulimit - Limit: data file size: unlimited&lt;BR /&gt;
splunkd.log.5:08-10-2016 23:36:48.825 -0400 INFO  ulimit - Limit: open files: 4096 files&lt;BR /&gt;
splunkd.log.5:08-10-2016 23:36:48.825 -0400 INFO  ulimit - Limit: user processes: 63681 processes&lt;BR /&gt;
splunkd.log.5:08-10-2016 23:36:48.825 -0400 INFO  ulimit - Limit: cpu time: unlimited&lt;BR /&gt;
splunkd.log.5:08-10-2016 23:36:48.825 -0400 INFO  ulimit - Linux transparent hugetables support, enabled="always" defrag="always"&lt;BR /&gt;
splunkd.log.5:08-11-2016 08:41:56.173 -0400 INFO  ulimit - Limit: virtual address space size: unlimited&lt;BR /&gt;
splunkd.log.5:08-11-2016 08:41:56.173 -0400 INFO  ulimit - Limit: data segment size: unlimited&lt;BR /&gt;
splunkd.log.5:08-11-2016 08:41:56.173 -0400 INFO  ulimit - Limit: resident memory size: unlimited&lt;BR /&gt;
splunkd.log.5:08-11-2016 08:41:56.173 -0400 INFO  ulimit - Limit: stack size: 10485760 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.5:08-11-2016 08:41:56.173 -0400 INFO  ulimit - Limit: core file size: 0 bytes [hard maximum: unlimited]&lt;BR /&gt;
splunkd.log.5:08-11-2016 08:41:56.173 -0400 WARN  ulimit - Core file generation disabled&lt;BR /&gt;
splunkd.log.5:08-11-2016 08:41:56.173 -0400 INFO  ulimit - Limit: data file size: unlimited&lt;BR /&gt;
splunkd.log.5:08-11-2016 08:41:56.173 -0400 INFO  ulimit - Limit: open files: 4096 files&lt;BR /&gt;
splunkd.log.5:08-11-2016 08:41:56.173 -0400 INFO  ulimit - Limit: user processes: 63681 processes&lt;BR /&gt;
splunkd.log.5:08-11-2016 08:41:56.173 -0400 INFO  ulimit - Limit: cpu time: unlimited&lt;BR /&gt;
splunkd.log.5:08-11-2016 08:41:56.174 -0400 INFO  ulimit - Linux transparent hugetables support, enabled="always" defrag="always"&lt;/P&gt;

&lt;P&gt;Kindly let me know: is there a fix without restarting the server? Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Aug 2016 12:47:40 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224407#M188282</guid>
      <dc:creator>Hemnaath</dc:creator>
      <dc:date>2016-08-18T12:47:40Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224408#M188283</link>
      <description>&lt;P&gt;Hey! It looks like Splunk took your changes, so there is no need to restart the server; I was just advising you to ensure they persist if it does restart.&lt;/P&gt;

&lt;P&gt;If you go to Settings &amp;gt; Inputs, did you set your number high enough? I quickly ended up needing a cron job to reap the old files I didn't want Splunk to monitor (one-time ingestion).&lt;/P&gt;</description>
      <pubDate>Thu, 18 Aug 2016 21:41:34 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224408#M188283</guid>
      <dc:creator>mattymo</dc:creator>
      <dc:date>2016-08-18T21:41:34Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224409#M188284</link>
      <description>&lt;P&gt;It is unlikely that you need to change any system settings. The default number of files held open at once by Splunk is 100. If you have many more files being monitored, you may need to increase this limit in Splunk.&lt;/P&gt;

&lt;P&gt;If you've just brought the forwarder online, this may be temporary while Splunk processes all the historical files, so consider raising this value depending on your situation.&lt;/P&gt;

&lt;P&gt;In limits.conf:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[inputproc]

max_fd = &amp;lt;integer&amp;gt;
* Maximum number of file descriptors that Splunk will keep open, to capture any
  trailing data from files that are written to very slowly.
* Defaults to 100.
&lt;/CODE&gt;&lt;/PRE&gt;
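
&lt;P&gt;For example, a minimal override in $SPLUNK_HOME/etc/system/local/limits.conf might look like this (the value 1000 is purely illustrative; size it to the number of files you actually monitor):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[inputproc]
# illustrative value; the default is 100
max_fd = 1000
&lt;/CODE&gt;&lt;/PRE&gt;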

&lt;P&gt;Yes, changing this limit will require a Splunk restart (not system restart).&lt;/P&gt;</description>
      <pubDate>Thu, 18 Aug 2016 23:10:29 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224409#M188284</guid>
      <dc:creator>the_wolverine</dc:creator>
      <dc:date>2016-08-18T23:10:29Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224410#M188285</link>
      <description>&lt;P&gt;Network connections also count as file descriptors at the Unix OS level, so your ulimits won't exactly line up with the max_fd within the limits.conf file.&lt;/P&gt;

&lt;P&gt;During previous support cases Splunk has advised there is no harm in raising the limit; we have the limit set to 1000 on many forwarders without an issue, and our OS-level limit for the universal forwarder is generally 8192 file descriptors.&lt;/P&gt;

&lt;P&gt;8192 is the recommended minimum for an enterprise installation, and we apply this setting to universal forwarders. FYI, we found turning on SSL drastically increased the OS-level file descriptor usage by universal forwarders.&lt;/P&gt;

&lt;P&gt;&lt;A href="http://docs.splunk.com/Documentation/Splunk/6.4.2/Installation/Systemrequirements#Considerations_regarding_file_descriptor_limits_.28FDs.29_on_.2Anix_systems"&gt;http://docs.splunk.com/Documentation/Splunk/6.4.2/Installation/Systemrequirements#Considerations_regarding_file_descriptor_limits_.28FDs.29_on_.2Anix_systems&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 19 Aug 2016 00:22:51 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224410#M188285</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2016-08-19T00:22:51Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224411#M188286</link>
      <description>&lt;P&gt;Thanks mmodestino, but I did not understand the sentence below from your comment.&lt;/P&gt;

&lt;P&gt;"If you go to settings &amp;gt; inputs, did you set your number high enough? I quickly ended up needing a cronjob to reap the old files I didn't want splunk to monitor. (one time ingestion)"&lt;/P&gt;

&lt;P&gt;Are you asking about the limits.conf file details?&lt;/P&gt;

&lt;P&gt;These are the cron jobs we have placed on the server to delete the old files:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;00,30 * * * * /bin/find /opt/syslogs/generic -mtime +1 -type f -delete &amp;gt; /dev/null 2&amp;gt;&amp;amp;1
00,30 * * * * /bin/find /opt/syslogs/web_access -mtime +1 -type f -delete &amp;gt; /dev/null 2&amp;gt;&amp;amp;1
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;So kindly guide us on how to fix this issue. Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Fri, 19 Aug 2016 13:59:15 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224411#M188286</guid>
      <dc:creator>Hemnaath</dc:creator>
      <dc:date>2016-08-19T13:59:15Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224412#M188287</link>
      <description>&lt;P&gt;Thanks wolverine for putting some effort into this issue. These are the limits.conf details on the HF server:&lt;/P&gt;

&lt;P&gt;/opt/splunk/etc/apps/all_indexer_base/local/limits.conf&lt;BR /&gt;
[search]&lt;BR /&gt;
max_rawsize_perchunk = 314572800&lt;/P&gt;

&lt;P&gt;/opt/splunk/etc/system/default/limits.conf&lt;/P&gt;

&lt;P&gt;[inputproc]&lt;/P&gt;

&lt;H1&gt;Threshold size (in mb) to trigger fishbucket rolling to a new db&lt;/H1&gt;

&lt;P&gt;file_tracking_db_threshold_mb = 500&lt;/P&gt;

&lt;H1&gt;Approximate ceiling on sourcetypes &amp;amp; fingerprints in learned app&lt;/H1&gt;

&lt;P&gt;learned_sourcetypes_limit = 1000&lt;/P&gt;

&lt;H1&gt;Maximum size (in mb) of heap allowed to be created by Splunk modular input Mon&lt;/H1&gt;

&lt;P&gt;itorNoHandle.&lt;/P&gt;

&lt;P&gt;monitornohandle_max_heap_mb = 0&lt;/P&gt;

&lt;P&gt;Do you want us to add the stanza below? And what value should be provided in it?&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[inputproc]
max_fd = &lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;thanks in advance.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 10:42:25 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224412#M188287</guid>
      <dc:creator>Hemnaath</dc:creator>
      <dc:date>2020-09-29T10:42:25Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224413#M188288</link>
      <description>&lt;P&gt;I was alluding to checking how many files your inputs are monitoring.&lt;/P&gt;

&lt;P&gt;If you are monitoring entire directories, for example /home/stats/servers/*, you can check under Settings &amp;gt; Inputs &amp;gt; Files and directories; Splunk will show you how many files are being monitored.&lt;/P&gt;
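&lt;P&gt;If you prefer the command line, the CLI can list the monitored inputs as well (a sketch; the path assumes a default install location, and the command will prompt for credentials):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;/opt/splunk/bin/splunk list monitor&lt;/CODE&gt;&lt;/PRE&gt;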

&lt;P&gt;Obviously, this depends on what kind of inputs you are using.&lt;/P&gt;</description>
      <pubDate>Sat, 20 Aug 2016 15:41:27 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224413#M188288</guid>
      <dc:creator>mattymo</dc:creator>
      <dc:date>2016-08-20T15:41:27Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224414#M188289</link>
      <description>&lt;P&gt;Also, re-reading your initial question, I wonder whether searching _internal is really what you want.&lt;/P&gt;

&lt;P&gt;There is a REST endpoint you can use to find out what the tailing processor is doing.&lt;/P&gt;

&lt;P&gt;Read about it here:&lt;/P&gt;

&lt;P&gt;&lt;A href="http://blogs.splunk.com/2011/01/02/did-i-miss-christmas-2/"&gt;http://blogs.splunk.com/2011/01/02/did-i-miss-christmas-2/&lt;/A&gt;&lt;/P&gt;
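&lt;P&gt;For example, the endpoint described in that post can be queried directly from a search (a sketch based on that blog post; run it on the forwarder or with a search head that can reach it):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| rest /services/admin/inputstatus/TailingProcessor:FileStatus&lt;/CODE&gt;&lt;/PRE&gt;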

&lt;P&gt;Are you actually missing any data, or are we just down a rabbit hole?&lt;/P&gt;</description>
      <pubDate>Sat, 20 Aug 2016 16:19:46 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224414#M188289</guid>
      <dc:creator>mattymo</dc:creator>
      <dc:date>2016-08-20T16:19:46Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224415#M188290</link>
      <description>&lt;P&gt;I'm using max_fd = 1000&lt;BR /&gt;
You can tune it based on your environment's requirements.&lt;/P&gt;</description>
      <pubDate>Sun, 21 Aug 2016 22:51:46 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224415#M188290</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2016-08-21T22:51:46Z</dc:date>
    </item>
    <item>
      <title>Re: Getting TailReader - File descriptor cache is full (100), trimming in one of the splunk heavyforwarder ? How to fix this issue.</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224416#M188291</link>
      <description>&lt;P&gt;Thanks to everyone who guided me on this issue. After &lt;STRONG&gt;changing the limits.conf&lt;/STRONG&gt; file and restarting the service, the issue was fixed.&lt;/P&gt;

&lt;P&gt;path = &lt;STRONG&gt;/opt/splunk/etc/apps/yourapp/local/limits.conf&lt;/STRONG&gt;&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;stanza:&lt;/STRONG&gt;&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[inputproc]
max_fd = 1000&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Tue, 23 Aug 2016 14:13:07 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Getting-TailReader-File-descriptor-cache-is-full-100-trimming-in/m-p/224416#M188291</guid>
      <dc:creator>Hemnaath</dc:creator>
      <dc:date>2016-08-23T14:13:07Z</dc:date>
    </item>
  </channel>
</rss>

