<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Why are we receiving this ingestion latency error after updating to 8.2.1? in Splunk Enterprise</title>
    <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/700531#M20355</link>
    <description>&lt;P&gt;Hello&lt;SPAN class=""&gt;&lt;A class="" href="https://community.splunk.com/t5/user/viewprofilepage/user-id/48874" target="_self"&gt;&lt;SPAN class=""&gt;,&lt;/SPAN&gt;&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;could you tell me about a real workaround? This one only disables the report. Many thanks in advance...&lt;/P&gt;&lt;P&gt;Best regards&lt;/P&gt;&lt;P&gt;__&lt;/P&gt;&lt;P&gt;Philipp from France&lt;/P&gt;</description>
    <pubDate>Mon, 30 Sep 2024 12:03:34 GMT</pubDate>
    <dc:creator>frenchy35</dc:creator>
    <dc:date>2024-09-30T12:03:34Z</dc:date>
    <item>
      <title>Why are we receiving this ingestion latency error after updating to 8.2.1?</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/558579#M6329</link>
      <description>&lt;P&gt;So we just updated to 8.2.1 and we are now getting an Ingestion Latency error…&lt;/P&gt;
&lt;P&gt;How do we correct it? Here is what the link says and then we have an option to view the last 50 messages...&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;SPAN&gt;Ingestion Latency&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Root Cause(s):&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Events from tracker.log have not been seen for the last 6529 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Events from tracker.log are delayed for 9658 seconds, which is more than the red threshold (180 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="http://capstoneedm.com:8000/en-US/app/splunk_rapid_diag/task_template_wizard?feature=undefined" target="_blank" rel="noopener"&gt;Generate Diag&lt;/A&gt;&lt;SPAN&gt;&lt;SPAN&gt;?&lt;/SPAN&gt;If filing a support case, click here to generate a diag.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;Here are some examples of what is shown as the messages:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\spool\splunk.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\run\splunk\search_telemetry.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\watchdog.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\splunk.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\introspection.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.275 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\etc\splunk.version.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;07-01-2021 09:28:52.269 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\CrushFTP9\CrushFTP.log.&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;07-01-2021 09:28:52.268 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\watchdog\watchdog.log*.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk\splunk_instrumentation_cloud.log*.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk\license_usage_summary.log.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\introspection.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\etc\splunk.version.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\tracker.log*.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\...stash_new.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\...stash_hec.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.265 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\run\splunk\search_telemetry\*search_telemetry.json.&lt;/LI&gt;
&lt;LI&gt;07-01-2021 09:28:52.265 -0500 INFO TailingProcessor [66180 MainTailingThread] - TailWatcher initializing...&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 28 Jun 2022 23:41:52 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/558579#M6329</guid>
      <dc:creator>Marc_Williams</dc:creator>
      <dc:date>2022-06-28T23:41:52Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/563698#M9723</link>
      <description>&lt;P&gt;Hi Marc,&lt;/P&gt;&lt;P&gt;We are facing the same issue after 8.2.1 upgrade&lt;BR /&gt;Have you already found a solution?&lt;/P&gt;&lt;P&gt;Greetings,&lt;BR /&gt;Justyna&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 17 Aug 2021 17:49:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/563698#M9723</guid>
      <dc:creator>justynap_ldz</dc:creator>
      <dc:date>2021-08-17T17:49:37Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/563719#M9725</link>
      <description>&lt;P&gt;I am having this issue as well.&amp;nbsp; Would appreciate any information you've been able to dig up.&lt;/P&gt;</description>
      <pubDate>Tue, 17 Aug 2021 22:23:29 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/563719#M9725</guid>
      <dc:creator>JeLangley</dc:creator>
      <dc:date>2021-08-17T22:23:29Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/563782#M9734</link>
      <description>&lt;P&gt;No... I have not found a solution. However, it appears to have cleared itself.&lt;/P&gt;</description>
      <pubDate>Wed, 18 Aug 2021 13:20:16 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/563782#M9734</guid>
      <dc:creator>Marc_Williams</dc:creator>
      <dc:date>2021-08-18T13:20:16Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/566162#M9879</link>
      <description>&lt;P&gt;I am also having this issue, but on only one of 6 Splunk servers.&amp;nbsp; The other Splunk servers do not have a tracker.log.&amp;nbsp; This log is not listed in&amp;nbsp;&lt;A href="https://docs.splunk.com/Documentation/Splunk/8.2.2/Troubleshooting/Enabledebuglogging#log-local.cfg" target="_blank" rel="noopener"&gt;https://docs.splunk.com/Documentation/Splunk/8.2.2/Troubleshooting/Enabledebuglogging#log-local.cfg&lt;/A&gt;&amp;nbsp;as a Splunk log, so I wonder if it has something to do with the upgrade.&amp;nbsp;&lt;/P&gt;&lt;P&gt;It has been 1 week since my upgrade and this is the only server complaining.&amp;nbsp; I would really like to know what this log is and why it is having issues.&amp;nbsp; I checked file permissions and they are the same as for the other logs.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This log is in /var/spool/splunk and is monitored by default in /splunk/etc/system/default/inputs.conf, where it is listed as a latency tracker.&amp;nbsp; Of my 6 servers, only the search head running ES even has this log in the directory.&lt;/P&gt;</description>
      <pubDate>Tue, 07 Sep 2021 16:34:19 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/566162#M9879</guid>
      <dc:creator>Funderburg78</dc:creator>
      <dc:date>2021-09-07T16:34:19Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/569538#M10213</link>
      <description>&lt;P&gt;Same here, on Splunk Ent. v8.2.2&lt;/P&gt;</description>
      <pubDate>Mon, 04 Oct 2021 11:50:18 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/569538#M10213</guid>
      <dc:creator>apietersen</dc:creator>
      <dc:date>2021-10-04T11:50:18Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/569565#M10218</link>
      <description>&lt;P&gt;I am going to reach out to support when I get a chance and will update here when I have found a solution/workaround of some sort.&amp;nbsp; My OS is Linux and the log path/permission looks fine from my perspective as well.&amp;nbsp; We upgraded over a month ago and this issue persists but only on our indexer.&amp;nbsp; Our heavy forwarders are not affected by this.&lt;/P&gt;</description>
      <pubDate>Mon, 04 Oct 2021 14:56:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/569565#M10218</guid>
      <dc:creator>JeLangley</dc:creator>
      <dc:date>2021-10-04T14:56:41Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/571344#M10375</link>
      <description>&lt;P&gt;Have you heard back from support regarding this issue? &amp;nbsp;We have been running on 8.2.2 for several weeks without issue, but today noticed this on one of the search heads within the SHC.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 18 Oct 2021 14:22:02 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/571344#M10375</guid>
      <dc:creator>kisstian</dc:creator>
      <dc:date>2021-10-18T14:22:02Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/572090#M10443</link>
      <description>&lt;P&gt;Also seeing this issue after moving from 8.1.2 to 8.2.2. We are using older hardware, but this makes me think it is not necessarily related. It comes and goes throughout the day.&lt;/P&gt;</description>
      <pubDate>Fri, 22 Oct 2021 22:08:46 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/572090#M10443</guid>
      <dc:creator>salbro</dc:creator>
      <dc:date>2021-10-22T22:08:46Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/572533#M10472</link>
      <description>&lt;P&gt;So we thought we had it resolved. However it is back again.&lt;/P&gt;&lt;P&gt;We restart the services and we can watch it go from good to bad.&lt;/P&gt;&lt;P&gt;Anyone else had luck finding an answer?&lt;/P&gt;</description>
      <pubDate>Tue, 26 Oct 2021 19:18:13 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/572533#M10472</guid>
      <dc:creator>Marc_Williams</dc:creator>
      <dc:date>2021-10-26T19:18:13Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/572540#M10473</link>
      <description>&lt;P&gt;So we upgraded to 8.2.2.1 and are still getting the error. However it is a bit different than before.&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;SPAN&gt;Events from tracker.log have not been seen for the last 1395 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Tue, 26 Oct 2021 20:22:34 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/572540#M10473</guid>
      <dc:creator>Marc_Williams</dc:creator>
      <dc:date>2021-10-26T20:22:34Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/572623#M10477</link>
      <description>&lt;P&gt;me too looking for a solution to address this ingestion latency....&lt;/P&gt;</description>
      <pubDate>Wed, 27 Oct 2021 10:28:25 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/572623#M10477</guid>
      <dc:creator>yukiang</dc:creator>
      <dc:date>2021-10-27T10:28:25Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/578354#M10957</link>
      <description>&lt;P&gt;We had this problem after upgrading to v8.2.3 and have found a solution.&lt;/P&gt;&lt;P&gt;After disabling the SplunkUniversalForwarder, the SplunkLightForwarder and the SplunkForwarder apps on splunkdev01, the system returned to normal operation. These apps were enabled on the Indexer and should have been disabled by default. Also, trying to load a UniversalForwarder that is not compatible with v8.2.3 will cause ingestion latency and tailreader errors. We had some Solaris 5.1 servers (forwarders) that are no longer compatible with upgrades, so we just kept them on 8.0.5. The upgrade requires Solaris 11 or later.&lt;/P&gt;&lt;P&gt;The first thing I did was go to the web interface, Manage Apps, and search for *forward*.&lt;/P&gt;&lt;P&gt;This showed the three Forwarder apps that I needed to disable, and I disabled them in the interface.&lt;/P&gt;&lt;P&gt;I also typed these commands on the indexer (Unix):&lt;/P&gt;&lt;P&gt;splunk disable app SplunkForwarder -auth &amp;lt;username&amp;gt;:&amp;lt;password&amp;gt;&lt;BR /&gt;splunk disable app SplunkLight -auth &amp;lt;username&amp;gt;:&amp;lt;password&amp;gt;&lt;BR /&gt;splunk disable app SplunkUniversalForwarder -auth &amp;lt;username&amp;gt;:&amp;lt;password&amp;gt;&lt;BR /&gt;&lt;BR /&gt;After doing these things, the ingestion latency and tailreader errors stopped.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 14 Dec 2021 17:24:15 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/578354#M10957</guid>
      <dc:creator>PeteAve</dc:creator>
      <dc:date>2021-12-14T17:24:15Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/579286#M11001</link>
      <description>&lt;P&gt;I am also facing the same problem.&amp;nbsp; Server IOPS is 2000, yet I am still getting IOWAIT and ingestion latency errors very frequently, and then they go away.&lt;/P&gt;</description>
      <pubDate>Mon, 27 Dec 2021 09:02:55 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/579286#M11001</guid>
      <dc:creator>sombhtr239</dc:creator>
      <dc:date>2021-12-27T09:02:55Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/579287#M11002</link>
      <description>&lt;P&gt;If anyone has a solution, please help.&lt;/P&gt;</description>
      <pubDate>Mon, 27 Dec 2021 09:03:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/579287#M11002</guid>
      <dc:creator>sombhtr239</dc:creator>
      <dc:date>2021-12-27T09:03:37Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/591410#M11990</link>
      <description>&lt;P&gt;FWIW, we just upgraded from 8.1.3 to 8.2.5 tonight, and are facing exactly these same issues.&lt;/P&gt;&lt;P&gt;Only difference is that these forwarder apps are already disabled on our instance.&lt;/P&gt;&lt;P&gt;Is there any update from Splunk support on this issue?&lt;/P&gt;</description>
      <pubDate>Wed, 30 Mar 2022 06:58:17 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/591410#M11990</guid>
      <dc:creator>phil__tanner</dc:creator>
      <dc:date>2022-03-30T06:58:17Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/597476#M12507</link>
      <description>&lt;P&gt;We upgraded from 8.7.1 to 8.2.6 and we have the same tracker.log latency issue.&lt;/P&gt;&lt;P&gt;Please help us SPLUNK...&lt;/P&gt;</description>
      <pubDate>Wed, 11 May 2022 20:12:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/597476#M12507</guid>
      <dc:creator>dpalmer235</dc:creator>
      <dc:date>2022-05-11T20:12:59Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/600170#M12754</link>
      <description>&lt;P&gt;Commenting on this to be notified of the solution.&lt;/P&gt;</description>
      <pubDate>Wed, 01 Jun 2022 21:05:38 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/600170#M12754</guid>
      <dc:creator>andrew_burnett</dc:creator>
      <dc:date>2022-06-01T21:05:38Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/600179#M12755</link>
      <description>&lt;P&gt;My apologies, we actually redeployed for a separate issue we were facing so I never did contact them on this.&amp;nbsp;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 01 Jun 2022 21:57:28 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/600179#M12755</guid>
      <dc:creator>JeLangley</dc:creator>
      <dc:date>2022-06-01T21:57:28Z</dc:date>
    </item>
    <item>
      <title>Re: Ingestion Latency after updating to 8.2.1</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/600197#M12756</link>
      <description>&lt;P&gt;FWIW, my support case is still open. I still have no answers. Although I have many support people telling me the problem doesn't exist, so I reply with screenshots of the problem still existing.&lt;/P&gt;&lt;P&gt;The original resolution suggested was to disable the monitoring/alerting for this service. If anyone is interested in this solution, I'm happy to post it - but as it doesn't solve the underlying issue, and all it does is stop the alert telling you the issue exists, I haven't bothered testing/implementing it myself.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 02 Jun 2022 01:56:49 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-receiving-this-ingestion-latency-error-after-updating/m-p/600197#M12756</guid>
      <dc:creator>phil__tanner</dc:creator>
      <dc:date>2022-06-02T01:56:49Z</dc:date>
    </item>
  </channel>
</rss>

