<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Investigating high IO on indexers in Monitoring Splunk</title>
    <link>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491355#M4261</link>
    <description>&lt;P&gt;Glad it helped&lt;/P&gt;</description>
    <pubDate>Sat, 14 Mar 2020 21:59:43 GMT</pubDate>
    <dc:creator>gjanders</dc:creator>
    <dc:date>2020-03-14T21:59:43Z</dc:date>
    <item>
      <title>Investigating high IO on indexers</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491347#M4253</link>
      <description>&lt;P&gt;We've recently migrated from 12 indexers per site on a slower storage array to 24 indexers per site on much faster storage arrays. Since the move we have seen IO throughput on the indexer LUNs peak at around 6-8 GB/s per site, for anywhere between 5 and 30 minutes. When that happens we start getting throttled by the storage array and latency goes up (as expected). We'd like to dig into the queries that are running at this time and see if we can do something about them (delete them, rewrite them, add data models, etc.).&lt;/P&gt;
&lt;P&gt;It's pretty easy to query the _internal index for sourcetype=scheduler and look at runtimes, etc. However, that doesn't tell us how many buckets or slices the indexers had to examine to satisfy each search.&lt;/P&gt;
&lt;P&gt;Does anyone have recommendations, example searches, etc., that we can use to dig into this?&lt;/P&gt;
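&lt;P&gt;For context, this is roughly the kind of scheduler-runtime search we use today (a minimal sketch; run_time, app and savedsearch_name are standard fields on the scheduler sourcetype). It shows which scheduled searches run long, but nothing about buckets or IO:&lt;/P&gt;
&lt;PRE&gt;index=_internal sourcetype=scheduler
| stats count sum(run_time) as total_run_time avg(run_time) as avg_run_time by app, savedsearch_name
| sort - total_run_time&lt;/PRE&gt;</description>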
      <pubDate>Fri, 19 Jun 2020 01:16:32 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491347#M4253</guid>
      <dc:creator>jarush</dc:creator>
      <dc:date>2020-06-19T01:16:32Z</dc:date>
    </item>
    <item>
      <title>Re: Investigating high IO on indexers</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491348#M4254</link>
      <description>&lt;P&gt;Curious, what version of Splunk are you running?  We recently had I/O issues on 8.0.1; they started spontaneously a couple of weekends ago.&lt;BR /&gt;
 Restarting individual indexers resolved it.&lt;/P&gt;</description>
      <pubDate>Thu, 12 Mar 2020 17:45:10 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491348#M4254</guid>
      <dc:creator>satyenshahusda</dc:creator>
      <dc:date>2020-03-12T17:45:10Z</dc:date>
    </item>
    <item>
      <title>Re: Investigating high IO on indexers</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491349#M4255</link>
      <description>&lt;P&gt;7.3.3 across the board&lt;/P&gt;</description>
      <pubDate>Thu, 12 Mar 2020 18:33:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491349#M4255</guid>
      <dc:creator>jarush</dc:creator>
      <dc:date>2020-03-12T18:33:20Z</dc:date>
    </item>
    <item>
      <title>Re: Investigating high IO on indexers</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491350#M4256</link>
      <description>&lt;P&gt;Please verify the array IOPS specification; according to the Splunk docs, each drive should provide 200 average IOPS, and the disks should be configured in a RAID 1+0 fault-tolerance scheme.&lt;BR /&gt;
Here is the document with more information -&amp;gt; &lt;A href="https://docs.splunk.com/Documentation/Splunk/8.0.2/Capacity/Referencehardware#Disk_subsystem"&gt;https://docs.splunk.com/Documentation/Splunk/8.0.2/Capacity/Referencehardware#Disk_subsystem&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;Check these steps to troubleshoot indexing performance issues:&lt;BR /&gt;
&lt;A href="https://docs.splunk.com/Documentation/Splunk/8.0.2/Troubleshooting/Troubleshootindexingperformance"&gt;https://docs.splunk.com/Documentation/Splunk/8.0.2/Troubleshooting/Troubleshootindexingperformance&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;and&lt;/P&gt;

&lt;P&gt;&lt;A href="https://docs.splunk.com/Documentation/Splunk/8.0.2/Troubleshooting/Troubleshootingeventsindexingdelay"&gt;https://docs.splunk.com/Documentation/Splunk/8.0.2/Troubleshooting/Troubleshootingeventsindexingdelay&lt;/A&gt;&lt;/P&gt;
      <pubDate>Fri, 13 Mar 2020 04:04:22 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491350#M4256</guid>
      <dc:creator>ivanreis</dc:creator>
      <dc:date>2020-03-13T04:04:22Z</dc:date>
    </item>
    <item>
      <title>Re: Investigating high IO on indexers</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491351#M4257</link>
      <description>&lt;P&gt;In &lt;A href="https://splunkbase.splunk.com/app/3796/" target="_blank"&gt;Alerts for Splunk Admins&lt;/A&gt; or &lt;A href="https://github.com/gjanders/SplunkAdmins/tree/master/default/data/ui/views" target="_blank"&gt;github&lt;/A&gt; there are a few dashboards:&lt;BR /&gt;
troubleshooting_indexer_cpu.xml &lt;BR /&gt;
troubleshooting_resource_usage_per_user.xml&lt;/P&gt;

&lt;P&gt;Or, for report-style summary/metrics searches:&lt;BR /&gt;
SearchHeadLevel - platform_stats.user_stats.introspection metrics populating search&lt;/P&gt;

&lt;P&gt;That would give you similar info, but it's more designed to output to a metrics index for later use...&lt;/P&gt;
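
&lt;P&gt;As a rough sketch (not the exact search shipped in the app), the per-user view boils down to summing search read volume from the introspection data along these lines:&lt;/P&gt;
&lt;PRE&gt;index=_introspection source=*/resource_usage.log* component=PerProcess data.process_type="search"
| stats sum(data.read_mb) as read_mb by data.search_props.user, data.search_props.app
| sort - read_mb&lt;/PRE&gt;</description>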
      <pubDate>Wed, 30 Sep 2020 04:37:54 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491351#M4257</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2020-09-30T04:37:54Z</dc:date>
    </item>
    <item>
      <title>Re: Investigating high IO on indexers</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491352#M4258</link>
      <description>&lt;P&gt;We have the app installed - I'm not really seeing anything in there that would help me drill into the searches that are consuming the most IO.  Is there a particular one you are thinking of?&lt;/P&gt;</description>
      <pubDate>Fri, 13 Mar 2020 11:55:56 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491352#M4258</guid>
      <dc:creator>jarush</dc:creator>
      <dc:date>2020-03-13T11:55:56Z</dc:date>
    </item>
    <item>
      <title>Re: Investigating high IO on indexers</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491353#M4259</link>
      <description>&lt;P&gt;Total read MB, for example, in the mentioned dashboard comes from the introspection logs and is related to I/O, although it measures searches rather than ingestion.&lt;/P&gt;</description>
      <pubDate>Sat, 14 Mar 2020 04:08:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491353#M4259</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2020-03-14T04:08:50Z</dc:date>
    </item>
    <item>
      <title>Re: Investigating high IO on indexers</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491354#M4260</link>
      <description>&lt;P&gt;Thanks, this got me going in the right direction.  This ended up being the Splunk CS Toolkit app in the DMC.  There were two queries that were destroying our storage: Splunk_index_lookup_genator and sta_forwarder_inventory.  Using the below query we found they were doing two orders of magnitude more IO than all other queries:&lt;BR /&gt;
    index=_introspection host=* source=*/resource_usage.log* component=PerProcess data.process_type="search" &lt;BR /&gt;
    | stats sum(data.read_mb) by data.search_props.app, data.search_props.label&lt;/P&gt;
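
&lt;P&gt;If it helps anyone else, a time-bucketed variant of the same search (a sketch; same introspection fields, the 5 minute span is arbitrary) made it easy to line the heavy readers up against the throttling windows:&lt;/P&gt;
&lt;PRE&gt;index=_introspection host=* source=*/resource_usage.log* component=PerProcess data.process_type="search"
| timechart span=5m sum(data.read_mb) by data.search_props.label&lt;/PRE&gt;</description>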
      <pubDate>Wed, 30 Sep 2020 04:38:22 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491354#M4260</guid>
      <dc:creator>jarush</dc:creator>
      <dc:date>2020-09-30T04:38:22Z</dc:date>
    </item>
    <item>
      <title>Re: Investigating high IO on indexers</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491355#M4261</link>
      <description>&lt;P&gt;Glad it helped&lt;/P&gt;</description>
      <pubDate>Sat, 14 Mar 2020 21:59:43 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Investigating-high-IO-on-indexers/m-p/491355#M4261</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2020-03-14T21:59:43Z</dc:date>
    </item>
  </channel>
</rss>

