<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How can we avoid indexing delays? in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485820#M83170</link>
    <description>&lt;P&gt;How can I check it?&lt;/P&gt;</description>
    <pubDate>Thu, 07 May 2020 13:26:25 GMT</pubDate>
    <dc:creator>danielbb</dc:creator>
    <dc:date>2020-05-07T13:26:25Z</dc:date>
    <item>
      <title>How can we avoid indexing delays?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485816#M83166</link>
      <description>&lt;P&gt;We have cases when the indexing delays are up to 15 minutes, it's rare but it happens. In these cases, we see that the indexing queues are at 80 – 100 percent capacity on three of the eight indexers. We see moderate bursts of data in these situations but not major bursts.&lt;/P&gt;

&lt;P&gt;These eight indexers use Hitachi G1500 arrays with FMD (flash memory drives). &lt;/P&gt;

&lt;P&gt;How can we better understand these situations and hopefully minimize the delays?&lt;/P&gt;</description>
      <pubDate>Fri, 01 May 2020 20:07:26 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485816#M83166</guid>
      <dc:creator>danielbb</dc:creator>
      <dc:date>2020-05-01T20:07:26Z</dc:date>
    </item>
    <item>
      <title>Re: How can we avoid indexing delays?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485817#M83167</link>
      <description>&lt;P&gt;the issue could be due to the soucetyping of the data.&lt;/P&gt;

&lt;P&gt;May be try going to the "Data Quality" dashboard in the Monitoring Console and check.&lt;BR /&gt;
It is available in Monitoring console --&amp;gt; Indexing --&amp;gt; Data Inputs --&amp;gt; Data quality&lt;BR /&gt;
It would should you the issues related to line breaking, time stamping or aggregation.&lt;/P&gt;
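
&lt;P&gt;If you prefer searching the underlying warnings directly, a search along these lines should surface the same issues (the component names below are as I recall them and may vary by Splunk version):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal sourcetype=splunkd log_level=WARN
    (component=LineBreakingProcessor OR component=DateParserVerbose OR component=AggregatorMiningProcessor)
| stats count by component, host
&lt;/CODE&gt;&lt;/PRE&gt;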

&lt;P&gt;Solve these issues and see if the indexing delays continue.&lt;/P&gt;</description>
      <pubDate>Fri, 01 May 2020 20:17:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485817#M83167</guid>
      <dc:creator>prachisaxena</dc:creator>
      <dc:date>2020-05-01T20:17:20Z</dc:date>
    </item>
    <item>
      <title>Re: How can we avoid indexing delays?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485818#M83168</link>
      <description>&lt;P&gt;We got the following query from our Sales Engineer -&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_introspection host=host_name earliest=-24h sourcetype="splunk_resource_usage" data.avg_total_ms&amp;gt;0 component=IOstats 
| timechart span=1m avg("data.avg_total_ms") by host 
| eval threshold = 10
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;We have eight indexers, and we can easily see that some of them have a delay of a couple of milliseconds throughout the day, some hover around the 10 ms threshold, and two consistently reach the 30 or 40 ms level during the day.&lt;/P&gt;

&lt;P&gt;Is 10 milliseconds the threshold? If we go over it consistently, does it mean that we have a hardware issue?&lt;/P&gt;</description>
      <pubDate>Wed, 06 May 2020 17:08:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485818#M83168</guid>
      <dc:creator>danielbb</dc:creator>
      <dc:date>2020-05-06T17:08:59Z</dc:date>
    </item>
    <item>
      <title>Re: How can we avoid indexing delays?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485819#M83169</link>
      <description>&lt;P&gt;How is the data distribution on your indexers also? If you have forwarders that are sticking to the three indexers, it could cause delays as the queues are filled.&lt;/P&gt;</description>
      <pubDate>Thu, 07 May 2020 04:27:27 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485819#M83169</guid>
      <dc:creator>esix_splunk</dc:creator>
      <dc:date>2020-05-07T04:27:27Z</dc:date>
    </item>
    <item>
      <title>Re: How can we avoid indexing delays?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485820#M83170</link>
      <description>&lt;P&gt;How can I check it?&lt;/P&gt;</description>
      <pubDate>Thu, 07 May 2020 13:26:25 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485820#M83170</guid>
      <dc:creator>danielbb</dc:creator>
      <dc:date>2020-05-07T13:26:25Z</dc:date>
    </item>
    <item>
      <title>Re: How can we avoid indexing delays?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485821#M83171</link>
      <description>&lt;P&gt;The issue is likely with the load balancing setting on your forwarders in outputs.conf&lt;/P&gt;

&lt;P&gt;If you have autoLBVolume set, try disabling it. That parameter causes a forwarder to switch to another indexer only after a certain amount of data has been sent.&lt;/P&gt;

&lt;P&gt;A more preferred method, especially when you see some indexers receiving more data than others as you describe, is to use autoLBFrequency. This setting will force the forwarder to switch to another indexer after a specified interval (in seconds), rather than amount of data sent. Typically you can get a better distribution with this setting in place, and a shorter interval setting.&lt;/P&gt;
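
&lt;P&gt;For example, a minimal outputs.conf sketch on the forwarder side (the group name and server list here are just placeholders, substitute your own):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# outputs.conf on the forwarder
[tcpout:my_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# switch to another indexer every 30 seconds
autoLBFrequency = 30
&lt;/CODE&gt;&lt;/PRE&gt;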

&lt;P&gt;See the documentation here for more info:&lt;BR /&gt;
&lt;A href="https://docs.splunk.com/Documentation/Splunk/8.0.3/Admin/Outputsconf"&gt;https://docs.splunk.com/Documentation/Splunk/8.0.3/Admin/Outputsconf&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 08 May 2020 01:52:22 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485821#M83171</guid>
      <dc:creator>codebuilder</dc:creator>
      <dc:date>2020-05-08T01:52:22Z</dc:date>
    </item>
    <item>
      <title>Re: How can we avoid indexing delays?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485822#M83172</link>
      <description>&lt;P&gt;monitoring console (it has a views for this) or there are some dashboards in &lt;A href="https://splunkbase.splunk.com/app/3796/"&gt;Alerts for Splunk Admins&lt;/A&gt; to assist in visualising data across the indexing tier via the metrics.log file&lt;/P&gt;</description>
      <pubDate>Fri, 08 May 2020 11:57:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485822#M83172</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2020-05-08T11:57:37Z</dc:date>
    </item>
    <item>
      <title>Re: How can we avoid indexing delays?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485823#M83173</link>
      <description>&lt;P&gt;The upgrade to Red Hat 7 seemed to cause deterioration in io response time. Are there any known issues with Red Hat 7 in this regard?&lt;/P&gt;

&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="alt text"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/8786iCE5D9E797BD0A490/image-size/large?v=v2&amp;amp;px=999" role="button" title="alt text" alt="alt text" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 09 May 2020 17:04:05 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-indexing-delays/m-p/485823#M83173</guid>
      <dc:creator>danielbb</dc:creator>
      <dc:date>2020-05-09T17:04:05Z</dc:date>
    </item>
  </channel>
</rss>

