<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Saturated Event-Processing Queues in Monitoring Splunk</title>
    <link>https://community.splunk.com/t5/Monitoring-Splunk/Saturated-Event-Processing-Queues/m-p/525782#M4592</link>
    <description>&lt;P&gt;A full queue is caused by a slow-down after the queue or a sudden increase before the queue.&lt;/P&gt;&lt;P&gt;Check your storage system to make sure there is nothing that is causing the I/O rate to drop significantly, like an AV scan.&amp;nbsp; Splunk should not be sharing storage with other high-I/O applications like a DB.&lt;/P&gt;&lt;P&gt;A periodic surge in incoming data can also lead to backed-up queues.&amp;nbsp; Use the monitoring console to see what sources contributed a lot of data during the period of the slowdown.&lt;/P&gt;</description>
    <pubDate>Wed, 21 Oct 2020 13:25:15 GMT</pubDate>
    <dc:creator>richgalloway</dc:creator>
    <dc:date>2020-10-21T13:25:15Z</dc:date>
    <item>
      <title>Saturated Event-Processing Queues</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Saturated-Event-Processing-Queues/m-p/525657#M4590</link>
      <description>&lt;P&gt;I am getting this error frequently and I can see the index queue is at 99% for many indexers in the cluster. During this period indexing is considerably slow and logs are not being ingested for many source types. I am not able to figure out what is causing this issue (which source). After some time it goes back to normal. I am worried this could cause issues in the future.&lt;/P&gt;</description>
      <pubDate>Wed, 21 Oct 2020 00:59:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Saturated-Event-Processing-Queues/m-p/525657#M4590</guid>
      <dc:creator>msplunk33</dc:creator>
      <dc:date>2020-10-21T00:59:47Z</dc:date>
    </item>
    <item>
      <title>Re: Saturated Event-Processing Queues</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Saturated-Event-Processing-Queues/m-p/525782#M4592</link>
      <description>&lt;P&gt;A full queue is caused by a slow-down after the queue or a sudden increase before the queue.&lt;/P&gt;&lt;P&gt;Check your storage system to make sure there is nothing that is causing the I/O rate to drop significantly, like an AV scan.&amp;nbsp; Splunk should not be sharing storage with other high-I/O applications like a DB.&lt;/P&gt;&lt;P&gt;A periodic surge in incoming data can also lead to backed-up queues.&amp;nbsp; Use the monitoring console to see what sources contributed a lot of data during the period of the slowdown.&lt;/P&gt;</description>
      <pubDate>Wed, 21 Oct 2020 13:25:15 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Saturated-Event-Processing-Queues/m-p/525782#M4592</guid>
      <dc:creator>richgalloway</dc:creator>
      <dc:date>2020-10-21T13:25:15Z</dc:date>
    </item>
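    <!-- A minimal SPL sketch (not part of the original thread) for quantifying the queue saturation
         described above, assuming the _internal index is searchable and still retains metrics.log
         data for the affected period. The group=queue events report each pipeline queue
         (parsingqueue, aggqueue, typingqueue, indexqueue) with its current and maximum size. -->
    <!--
      index=_internal source=*metrics.log* group=queue
      | eval fill_pct = round(current_size_kb / max_size_kb * 100, 2)
      | timechart span=1m max(fill_pct) by name
    -->
    <!-- A quick check for outright blocked queues over the same window, same assumptions:
      index=_internal source=*metrics.log* group=queue blocked=true
      | timechart span=1m count by name
    -->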
    <item>
      <title>Re: Saturated Event-Processing Queues</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Saturated-Event-Processing-Queues/m-p/525934#M4595</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/213957"&gt;@richgalloway&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Use the monitoring console to see what sources contributed a lot of data during the period of the slowdown.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I could not find the above option in the monitoring console. Could you give me the menu details from the monitoring console or a screenshot?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 21 Oct 2020 23:46:48 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Saturated-Event-Processing-Queues/m-p/525934#M4595</guid>
      <dc:creator>msplunk33</dc:creator>
      <dc:date>2020-10-21T23:46:48Z</dc:date>
    </item>
    <item>
      <title>Re: Saturated Event-Processing Queues</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Saturated-Event-Processing-Queues/m-p/526036#M4598</link>
      <description>&lt;P&gt;In the MC, select Indexing-&amp;gt;Indexing Performance: Instance.&amp;nbsp; Then scroll down to the "Estimated Indexing Rate Per Sourcetype" panel.&amp;nbsp; Use the dropdown menu to split the graph by various attributes until you find the source of the problem.&lt;/P&gt;</description>
      <pubDate>Thu, 22 Oct 2020 13:49:25 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Saturated-Event-Processing-Queues/m-p/526036#M4598</guid>
      <dc:creator>richgalloway</dc:creator>
      <dc:date>2020-10-22T13:49:25Z</dc:date>
    </item>
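    <!-- A sketch (not from the thread) of a search that roughly mirrors the MC panel mentioned
         above, for cases where the monitoring console is unavailable. It reads the
         per_sourcetype_thruput events in metrics.log, where the series field holds the
         sourcetype name and kb is the volume reported for each sampling interval. -->
    <!--
      index=_internal source=*metrics.log* group=per_sourcetype_thruput
      | timechart span=1m limit=10 useother=f sum(kb) by series
    -->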
  </channel>
</rss>

