<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: The index process has paused data flow. Too many tsidx files in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/629804#M107989</link>
<description>&lt;P&gt;OK, you mentioned that in your other post.&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;We are still facing the following issue when we put in maintenance mode our Indexer Cluster and we stop one Indexer.&lt;/P&gt;&lt;P&gt;Basically all the Indexers stop ingesting data, increasing their queues, waiting for splunk-optimize to finish the job.&lt;/P&gt;&lt;P&gt;This usually happens when we stop the Indexer after a long time since last time.&lt;/P&gt;&lt;BR /&gt;&lt;P&gt;&lt;A href="https://community.splunk.com/t5/Splunk-Enterprise/The-index-processor-has-paused-data-flow-How-to-optimize/m-p/629520" target="_blank" rel="noopener"&gt;https://community.splunk.com/t5/Splunk-Enterprise/The-index-processor-has-paused-data-flow-How-to-optimize/m-p/629520&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Do you suggest increasing&amp;nbsp;&lt;SPAN&gt;maxRunningProcessGroups?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Stopping one or a few indexers can cause the indexqueue to block across several indexers. At the same time you see&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;throttled: The index processor has paused data flow. 
Too many tsidx files&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;across several indexers.&lt;BR /&gt;&lt;BR /&gt;If it takes a long time for the indexqueue to unblock and the indexing throttle to go away, try the following workaround to reduce the outage.&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN class=""&gt;In server.conf&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;[queue=indexQueue]&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;maxSize=500MB&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;In indexes.conf&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;[default]&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;throttleCheckPeriod=5&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;maxConcurrentOptimizes=1&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;maxRunningProcessGroups=32&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;processTrackerServiceInterval=0&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
    <pubDate>Mon, 06 Feb 2023 22:38:45 GMT</pubDate>
    <dc:creator>hrawat</dc:creator>
    <dc:date>2023-02-06T22:38:45Z</dc:date>
    <item>
      <title>Why has the index process paused data flow? How to handle too many tsidx files?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/524522#M88548</link>
      <description>&lt;P&gt;This issue happens when incoming throughput to a hot bucket is faster than splunk-optimize can merge tsidx files and keep the count &amp;lt; 100 (hardcoded). If the number of tsidx files in a hot bucket reaches 100 or more, the indexer pauses indexing to let splunk-optimize catch up.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 07 Feb 2023 02:07:27 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/524522#M88548</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2023-02-07T02:07:27Z</dc:date>
    </item>
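The mechanism described above (ingest adds tsidx slices to the hot bucket faster than splunk-optimize can merge them, and indexing pauses at the hardcoded 100-file ceiling) can be sketched as a toy simulation. The flush and merge rates below are illustrative values, not Splunk internals:

```python
# Toy model of the pause decision: each tick of indexing adds tsidx slices,
# while the optimizer merges some away. Once the per-bucket count reaches the
# hardcoded ceiling of 100, indexing pauses until the optimizer catches up.
TSIDX_PAUSE_LIMIT = 100  # hardcoded per-hot-bucket ceiling

def simulate(flushes_per_tick: int, merges_per_tick: int, ticks: int) -> list[bool]:
    """Return, per tick, whether indexing was paused."""
    tsidx_count = 0
    paused_history = []
    for _ in range(ticks):
        paused = tsidx_count >= TSIDX_PAUSE_LIMIT
        if not paused:
            tsidx_count += flushes_per_tick   # new slices from indexing
        # merging collapses slices, reducing the count
        tsidx_count = max(0, tsidx_count - merges_per_tick)
        paused_history.append(paused)
    return paused_history

# When flushes outpace merges, the count climbs and indexing eventually pauses.
fast_ingest = simulate(flushes_per_tick=10, merges_per_tick=2, ticks=20)
# When merges keep up, the pause never triggers.
balanced = simulate(flushes_per_tick=2, merges_per_tick=2, ticks=20)
```

This is only a model of the behavior the post describes; the real indexer's throttle check also depends on `throttleCheckPeriod` and I/O speed.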
    <item>
      <title>Re: The index process has paused data flow. Too many tsidx files</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/524523#M88549</link>
      <description>&lt;P&gt;From 7.2 onwards, the following config should fix the issue.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;In indexes.conf set&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;[default]&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;maxRunningProcessGroups=12&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;processTrackerServiceInterval=0&lt;BR /&gt;&lt;BR /&gt;Update (11/16/2022): If the issue is still not resolved, increase the&amp;nbsp;&lt;STRONG&gt;maxRunningProcessGroups&lt;/STRONG&gt; setting.&lt;BR /&gt;&lt;U&gt;&lt;STRONG&gt;For the upcoming Splunk 9.1 release and Splunk Cloud releases, the workaround is not needed, as the issue is fixed there.&lt;/STRONG&gt;&lt;/U&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 16 Nov 2022 20:48:56 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/524523#M88549</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2022-11-16T20:48:56Z</dc:date>
    </item>
    <item>
      <title>Re: The index process has paused data flow. Too many tsidx files</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/629797#M107986</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/118813"&gt;@hrawat&lt;/a&gt;&amp;nbsp;thanks for the update.&lt;/P&gt;&lt;P&gt;We have the same exact issue. I see you mentioned it has been fixed in 9.1; do you mean 9.0.1? The latest available is 9.0.3.&lt;/P&gt;&lt;P&gt;We are on-prem with 9.0.2 and still facing it even though we have already put the indicated setup in indexes.conf; see my question here:&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.splunk.com/t5/Splunk-Enterprise/The-index-processor-has-paused-data-flow-How-to-optimize/m-p/629520" target="_blank"&gt;https://community.splunk.com/t5/Splunk-Enterprise/The-index-processor-has-paused-data-flow-How-to-optimize/m-p/629520&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Do you suggest increasing&amp;nbsp;&lt;SPAN&gt;maxRunningProcessGroups?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 06 Feb 2023 21:35:57 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/629797#M107986</guid>
      <dc:creator>edoardo_vicendo</dc:creator>
      <dc:date>2023-02-06T21:35:57Z</dc:date>
    </item>
    <item>
      <title>Re: The index process has paused data flow. Too many tsidx files</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/629802#M107988</link>
      <description>&lt;P&gt;The fix will be in the next major release, 9.1.&amp;nbsp;&lt;BR /&gt;There are multiple reasons for an indexing pause.&lt;/P&gt;&lt;P&gt;Do you see this on all indexers all the time?&lt;/P&gt;&lt;P&gt;Do you see this on a few indexers at a time, but it moves around?&lt;BR /&gt;Do you see this issue only when a few indexers are restarted?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 06 Feb 2023 22:24:51 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/629802#M107988</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2023-02-06T22:24:51Z</dc:date>
    </item>
    <item>
      <title>Re: The index process has paused data flow. Too many tsidx files</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/629804#M107989</link>
      <description>&lt;P&gt;OK, you mentioned that in your other post.&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;We are still facing the following issue when we put in maintenance mode our Indexer Cluster and we stop one Indexer.&lt;/P&gt;&lt;P&gt;Basically all the Indexers stop ingesting data, increasing their queues, waiting for splunk-optimize to finish the job.&lt;/P&gt;&lt;P&gt;This usually happens when we stop the Indexer after a long time since last time.&lt;/P&gt;&lt;BR /&gt;&lt;P&gt;&lt;A href="https://community.splunk.com/t5/Splunk-Enterprise/The-index-processor-has-paused-data-flow-How-to-optimize/m-p/629520" target="_blank" rel="noopener"&gt;https://community.splunk.com/t5/Splunk-Enterprise/The-index-processor-has-paused-data-flow-How-to-optimize/m-p/629520&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Do you suggest increasing&amp;nbsp;&lt;SPAN&gt;maxRunningProcessGroups?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Stopping one or a few indexers can cause the indexqueue to block across several indexers. At the same time you see&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;throttled: The index processor has paused data flow. 
Too many tsidx files&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;across several indexers.&lt;BR /&gt;&lt;BR /&gt;If it takes a long time for the indexqueue to unblock and the indexing throttle to go away, try the following workaround to reduce the outage.&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN class=""&gt;In server.conf&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;[queue=indexQueue]&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;maxSize=500MB&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;In indexes.conf&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;[default]&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;throttleCheckPeriod=5&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;maxConcurrentOptimizes=1&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;maxRunningProcessGroups=32&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;processTrackerServiceInterval=0&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 06 Feb 2023 22:38:45 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/629804#M107989</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2023-02-06T22:38:45Z</dc:date>
    </item>
    <item>
      <title>Re: The index process has paused data flow. Too many tsidx files</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/631226#M108187</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/118813"&gt;@hrawat&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks a lot for your reply.&lt;/P&gt;&lt;P&gt;We applied the suggested configuration in both server.conf and indexes.conf.&lt;/P&gt;&lt;P&gt;So, from my understanding, the aim is to check for&amp;nbsp;index throttling more often (throttleCheckPeriod=5), with only 1 splunk-optimize running against a bucket (maxConcurrentOptimizes=1), spawning several child processes (maxRunningProcessGroups=32) and checking every second whether any other child process can be launched (processTrackerServiceInterval=0). The purpose, therefore, is to concentrate all the optimize resources on a single bucket at a time.&lt;/P&gt;&lt;P&gt;What we observed is the following:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;if we put the Cluster Master in maintenance mode and stop an Indexer, we no longer see the messages on the Monitoring Console. The Indexing Queue still fills to 100% on all remaining Indexers even though we have increased its size to 500MB, and the Indexing Rate drops from 15-20 MB/s to 1 MB/s, but the problem resolves in a short time (approximately a few minutes)&lt;/LI&gt;&lt;LI&gt;The side effect we observed is that, since we applied the modification, we see the following messages in the Indexers' _internal index (20-30 events per hour):&lt;/LI&gt;&lt;/UL&gt;&lt;LI-CODE lang="markup"&gt;02-16-2023 17:15:33.556 +0100 INFO  HealthChangeReporter - feature="Index Optimization" indicator="concurrent_optimize_processes_percent" previous_color=green color=yellow due_to_threshold_value=100 measured_value=1 reason="The number of splunk optimize processes is at 100% of the maximum. As a result, the index processor has paused data flow."

02-16-2023 17:15:47.753 +0100 INFO  PeriodicHealthReporter - feature="Index Optimization" color=yellow indicator="concurrent_optimize_processes_percent" due_to_threshold_value=100 measured_value=1 reason="The number of splunk optimize processes is at 100% of the maximum. As a result, the index processor has paused data flow." node_type=indicator node_path=splunkd.index_processor.index_optimization.concurrent_optimize_processes_percent&lt;/LI-CODE&gt;&lt;UL&gt;&lt;LI&gt;And we also see the following "original" message (1-2 events per day):&lt;/LI&gt;&lt;/UL&gt;&lt;LI-CODE lang="markup"&gt;02-16-2023 14:45:38.658 +0100 INFO  IndexWriter [12974 indexerPipe] - The index processor has paused data flow. Too many tsidx files in idx=_internal bucket="/xxxxxxx/xxxx/xxxxxxxxxx/splunk/db/_internaldb/db/hot_v1_1928" , waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It seems to me the direction you have given us is the correct one to solve the problem; it is fine now that the Cluster takes just a few minutes to recover, whereas before it was much longer. What we would like to improve is avoiding, during normal running, the&amp;nbsp;PeriodicHealthReporter and&amp;nbsp;HealthChangeReporter messages that inform us that indexing has paused.&lt;/P&gt;&lt;P&gt;Do you think we can increase the&amp;nbsp;maxConcurrentOptimizes value to avoid that?&lt;/P&gt;&lt;P&gt;I think in this way we could better balance the "brute force" across more buckets; we will probably lose something when an Indexer is stopped, but we gain during normal running.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For reference, here is the indexes.conf specification:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;throttleCheckPeriod = &amp;lt;positive integer&amp;gt;
* How frequently, in seconds, that splunkd checks for index throttling
  conditions.
* NOTE: Do not change this setting unless a Splunk Support
  professional asks you to.
* The highest legal value is 4294967295.
* Default: 15&lt;/LI-CODE&gt;&lt;LI-CODE lang="markup"&gt;maxConcurrentOptimizes = &amp;lt;nonnegative integer&amp;gt;
* The number of concurrent optimize processes that can run against a hot
  bucket.
* This number should be increased if:
  * There are always many small tsidx files in the hot bucket.
  * After rolling, there are many tsidx files in warm or cold buckets.
* You must restart splunkd after changing this setting. Reloading the
  configuration does not suffice.
* The highest legal value is 4294967295.
* Default: 6&lt;/LI-CODE&gt;&lt;LI-CODE lang="markup"&gt;maxRunningProcessGroups = &amp;lt;positive integer&amp;gt;
* splunkd runs helper child processes like "splunk-optimize",
  "recover-metadata", etc. This setting limits how many child processes
  can run at any given time.
* This maximum applies to all of splunkd, not per index. If you have N
  indexes, there will be at most 'maxRunningProcessGroups' child processes,
  not N * 'maxRunningProcessGroups' processes.
* Must maintain maxRunningProcessGroupsLowPriority &amp;lt; maxRunningProcessGroups
* This is an advanced setting; do NOT set unless instructed by Splunk
  Support.
* Highest legal value is 4294967295.
* Default: 8&lt;/LI-CODE&gt;&lt;LI-CODE lang="markup"&gt;processTrackerServiceInterval = &amp;lt;nonnegative integer&amp;gt;
* How often, in seconds, the indexer checks the status of the child OS
  processes it has launched to see if it can launch new processes for queued
  requests.
* If set to 0, the indexer checks child process status every second.
* Highest legal value is 4294967295.
* Default: 15&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks a lot,&lt;/P&gt;&lt;P&gt;Edoardo&lt;/P&gt;</description>
      <pubDate>Thu, 16 Feb 2023 17:34:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/631226#M108187</guid>
      <dc:creator>edoardo_vicendo</dc:creator>
      <dc:date>2023-02-16T17:34:39Z</dc:date>
    </item>
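Reading the spec excerpts in the previous post together: maxConcurrentOptimizes is a per-hot-bucket cap, while maxRunningProcessGroups is a global cap across all of splunkd, so the effective optimizer concurrency is bounded by both. A minimal sketch of that interaction (the helper function and the bucket counts are illustrative, not Splunk internals):

```python
# Upper bound on concurrently running splunk-optimize processes, per the
# indexes.conf spec text quoted above: per-bucket limit times busy hot
# buckets, capped by the global process-group limit.
def max_optimize_processes(hot_buckets: int,
                           max_concurrent_optimizes: int,
                           max_running_process_groups: int) -> int:
    per_bucket_total = hot_buckets * max_concurrent_optimizes
    return min(per_bucket_total, max_running_process_groups)

# The workaround in this thread (maxConcurrentOptimizes=1,
# maxRunningProcessGroups=32): at most one optimizer per bucket, but up to
# 32 buckets serviced at once.
print(max_optimize_processes(hot_buckets=50,
                             max_concurrent_optimizes=1,
                             max_running_process_groups=32))  # 32

# Defaults (6 per bucket, 8 global): two busy hot buckets already saturate
# the global cap.
print(max_optimize_processes(hot_buckets=2,
                             max_concurrent_optimizes=6,
                             max_running_process_groups=8))   # 8
```

This also explains the yellow "concurrent_optimize_processes_percent" health messages above: with maxConcurrentOptimizes=1, a single optimizer per bucket is already 100% of the per-bucket maximum.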
    <item>
      <title>Re: The index process has paused data flow. Too many tsidx files</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/633554#M108467</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/118813"&gt;@hrawat&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We ended up with this configuration:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;In server.conf
[queue=indexQueue]
maxSize=500MB

In indexes.conf
[default]
throttleCheckPeriod=5
maxConcurrentOptimizes=2
maxRunningProcessGroups=32 
processTrackerServiceInterval=0&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In this way we get both benefits:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;if we restart the cluster we no longer get the &lt;EM&gt;&lt;STRONG&gt;IndexWriter&lt;/STRONG&gt;&lt;/EM&gt; message&lt;/LI&gt;&lt;LI&gt;during normal running we no longer get the&amp;nbsp;&lt;EM&gt;&lt;STRONG&gt;HealthChangeReporter&lt;/STRONG&gt; &lt;/EM&gt;or &lt;EM&gt;&lt;STRONG&gt;PeriodicHealthReporter&lt;/STRONG&gt; &lt;/EM&gt;messages&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Thanks a lot for your suggestion!&lt;/P&gt;</description>
      <pubDate>Tue, 07 Mar 2023 12:47:07 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/633554#M108467</guid>
      <dc:creator>edoardo_vicendo</dc:creator>
      <dc:date>2023-03-07T12:47:07Z</dc:date>
    </item>
    <item>
      <title>Re: The index process has paused data flow. Too many tsidx files</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/690547#M114893</link>
      <description>&lt;P&gt;I have a new deployment of Splunk 9.2.1 Enterprise. We only have the Splunk servers running so far, other than one Universal Forwarder. I'm getting this error:&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;The index processor has paused data flow. Too many tsidx files in idx=_internal bucket="/opt/splunk/var/lib/splunk/_internaldb/db/hot_v1_57" , waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;I have 4TB of available disk space, so I have no idea what's going on. Any thoughts?&lt;/P&gt;</description>
      <pubDate>Wed, 12 Jun 2024 21:15:18 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/690547#M114893</guid>
      <dc:creator>mommyfixit</dc:creator>
      <dc:date>2024-06-12T21:15:18Z</dc:date>
    </item>
    <item>
      <title>Re: The index process has paused data flow. Too many tsidx files</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/690548#M114894</link>
      <description>&lt;P&gt;The log message is a bit generic.&amp;nbsp;&lt;BR /&gt;The reason for this message is that too many _internal index log events arrived on that indexer, and as a result there are already 100+ tsidx files for the hot bucket in question. Unless splunk-optimize brings the count below 100, the indexer will remain paused.&lt;BR /&gt;&lt;BR /&gt;On the forwarder side, make sure too many events do not hit the same indexer.&lt;BR /&gt;1. On SH/CM/UF you can enable &lt;A href="https://www.linkedin.com/pulse/splunk-asynchronous-forwarding-lightning-fast-data-ingestor-rawat/" target="_self"&gt;volume based forwarding&amp;nbsp;&lt;/A&gt;&lt;/P&gt;&lt;P&gt;2. On all instances (SH/CM/UF/IDX), reduce &lt;A href="https://www.linkedin.com/posts/harendra-rawat-b10b41_new-splunk-metrics-logging-interval-activity-7206272672643629056-64bJ?utm_source=share&amp;amp;utm_medium=member_desktop" target="_self"&gt;unwanted metrics.log events&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 12 Jun 2024 21:31:23 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/690548#M114894</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2024-06-12T21:31:23Z</dc:date>
    </item>
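Point 1 in the reply above refers to volume-based load balancing on the forwarder, which switches indexers after a byte threshold rather than purely on a timer, so one busy forwarder spreads events more evenly. A minimal outputs.conf sketch, where the group name, indexer hosts, and byte threshold are placeholder values; consult the linked article and the outputs.conf documentation before applying:

```ini
# outputs.conf on the forwarder (group name and indexer list are examples)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# rotate to the next indexer once this many bytes have been sent,
# in addition to the time-based rotation below
autoLBVolume = 1048576
autoLBFrequency = 30
```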
    <item>
      <title>Re: The index process has paused data flow. Too many tsidx files</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/690550#M114895</link>
      <description>&lt;P&gt;If you have only one UF and a few SHs and the _internal index is still pausing, it's likely that the system is running out of CPU due to high load/search activity, or that there is an I/O performance issue.&lt;/P&gt;</description>
      <pubDate>Wed, 12 Jun 2024 21:36:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/690550#M114895</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2024-06-12T21:36:50Z</dc:date>
    </item>
    <item>
      <title>Re: The index process has paused data flow. Too many tsidx files</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/690551#M114896</link>
      <description>&lt;P&gt;Or one of the log files under &lt;EM&gt;var/log/splunk &lt;/EM&gt;is flooding.&lt;/P&gt;</description>
      <pubDate>Wed, 12 Jun 2024 21:40:06 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/690551#M114896</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2024-06-12T21:40:06Z</dc:date>
    </item>
  </channel>
</rss>