<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic "Troubleshooting High Storage I/O Saturation Spikes?" in Monitoring Splunk</title>
    <link>https://community.splunk.com/t5/Monitoring-Splunk/Troubleshooting-High-Storage-I-O-Saturation-Spikes/m-p/648059#M9676</link>
    <description>&lt;P&gt;We are periodically seeing spikes of Storage I/O Saturation (Monitoring Console &amp;gt; Resource Usage: Deployment). When split by host, we can see that this affects &lt;STRONG&gt;all&lt;/STRONG&gt; six indexers nearly simultaneously on their &lt;STRONG&gt;/opt/splunkdata&lt;/STRONG&gt; mount points. As expected, this triggers Health Status notifications (warning or alert) throughout the day.&lt;/P&gt;
&lt;P&gt;Of note, load averages are regularly above 5 while CPU usage normally stays under 10% on each indexer (24 cores each), and RAM usage is around 30% per indexer. We are wondering whether our physical storage and/or network might be a bottleneck, or whether it's something on the Splunk side.&lt;/P&gt;
&lt;P&gt;For a Splunk admin beginner, could someone please suggest where to start troubleshooting these spikes, or explain the specifics of Storage I/O Saturation in more detail?&lt;/P&gt;
&lt;P&gt;We are on Enterprise 9.0.4 across the board and are considering upgrading to the latest release sooner rather than later.&lt;/P&gt;
&lt;P&gt;Thank you!&lt;/P&gt;</description>
    <pubDate>Mon, 26 Jun 2023 14:45:52 GMT</pubDate>
    <dc:creator>tretrigh</dc:creator>
    <dc:date>2023-06-26T14:45:52Z</dc:date>
    <item>
      <title>Troubleshooting High Storage I/O Saturation Spikes?</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Troubleshooting-High-Storage-I-O-Saturation-Spikes/m-p/648059#M9676</link>
      <description>&lt;P&gt;We are periodically seeing spikes of Storage I/O Saturation (Monitoring Console &amp;gt; Resource Usage: Deployment). When split by host, we can see that this affects &lt;STRONG&gt;all&lt;/STRONG&gt; six indexers nearly simultaneously on their &lt;STRONG&gt;/opt/splunkdata&lt;/STRONG&gt; mount points. As expected, this triggers Health Status notifications (warning or alert) throughout the day.&lt;/P&gt;
&lt;P&gt;Of note, load averages are regularly above 5 while CPU usage normally stays under 10% on each indexer (24 cores each), and RAM usage is around 30% per indexer. We are wondering whether our physical storage and/or network might be a bottleneck, or whether it's something on the Splunk side.&lt;/P&gt;
&lt;P&gt;For a Splunk admin beginner, could someone please suggest where to start troubleshooting these spikes, or explain the specifics of Storage I/O Saturation in more detail?&lt;/P&gt;
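&lt;P&gt;For reference, the spikes are visible in the underlying introspection data; I have been charting them with a search roughly like the one below, adapted from the Monitoring Console panel (the introspection field names may vary by version):&lt;/P&gt;&lt;PRE&gt;index=_introspection sourcetype=splunk_resource_usage component=IOStats data.mount_point="/opt/splunkdata"
| timechart span=5m max(data.cpu_pct) by host&lt;/PRE&gt;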
&lt;P&gt;We are on Enterprise 9.0.4 across the board and are considering upgrading to the latest release sooner rather than later.&lt;/P&gt;
&lt;P&gt;Thank you!&lt;/P&gt;</description>
      <pubDate>Mon, 26 Jun 2023 14:45:52 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Troubleshooting-High-Storage-I-O-Saturation-Spikes/m-p/648059#M9676</guid>
      <dc:creator>tretrigh</dc:creator>
      <dc:date>2023-06-26T14:45:52Z</dc:date>
    </item>
    <item>
      <title>Re: High Storage I/O Saturation Spikes</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Troubleshooting-High-Storage-I-O-Saturation-Spikes/m-p/648061#M9677</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/252540"&gt;@tretrigh&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;Usually the issue in these situations is the storage:&lt;/P&gt;&lt;P&gt;Which kind of storage are you using?&lt;/P&gt;&lt;P&gt;Are you sure you have at least the required 800 IOPS from your storage?&lt;/P&gt;&lt;P&gt;You can measure your storage performance with a tool such as Bonnie++.&lt;/P&gt;
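&lt;P&gt;For example, something like this; the target directory is just an example, and the -s size should be roughly twice the indexer's RAM so the filesystem cache doesn't mask the results:&lt;/P&gt;&lt;PRE&gt;# run on one indexer against the Splunk data volume
bonnie++ -d /opt/splunkdata/bonnie_test -u splunk -s 128g -n 0&lt;/PRE&gt;&lt;P&gt;Ciao.&lt;/P&gt;&lt;P&gt;Giuseppe&lt;/P&gt;</description>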
      <pubDate>Fri, 23 Jun 2023 15:48:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Troubleshooting-High-Storage-I-O-Saturation-Spikes/m-p/648061#M9677</guid>
      <dc:creator>gcusello</dc:creator>
      <dc:date>2023-06-23T15:48:39Z</dc:date>
    </item>
    <item>
      <title>Re: High Storage I/O Saturation Spikes</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Troubleshooting-High-Storage-I-O-Saturation-Spikes/m-p/648070#M9678</link>
      <description>&lt;P&gt;Storage is all SSD on NetApp using RAID-DP, connected over a Fibre Channel backend. I'm working with the Infrastructure team to match up the times when we're seeing spikes. I'm unsure about the IOPS limits at this point.&lt;/P&gt;&lt;P&gt;Of note, I learned that the OS disk and the /splunkdata disk for each indexer are all on the same aggregate. As I am unfamiliar with NetApp, I don't know whether this matters (but I'm assuming it is okay)?&lt;/P&gt;
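&lt;P&gt;In the meantime, I plan to watch the mount point from the OS side during a spike with something like the command below, keeping an eye on %util and await for the device backing /opt/splunkdata:&lt;/P&gt;&lt;PRE&gt;# extended per-device I/O statistics, refreshed every 5 seconds
iostat -dxm 5&lt;/PRE&gt;</description>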
      <pubDate>Fri, 23 Jun 2023 17:31:43 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Troubleshooting-High-Storage-I-O-Saturation-Spikes/m-p/648070#M9678</guid>
      <dc:creator>tretrigh</dc:creator>
      <dc:date>2023-06-23T17:31:43Z</dc:date>
    </item>
    <item>
      <title>Re: High Storage I/O Saturation Spikes</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Troubleshooting-High-Storage-I-O-Saturation-Spikes/m-p/648090#M9679</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/252540"&gt;@tretrigh&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;Storage on SSD should deliver the required performance.&lt;/P&gt;&lt;P&gt;Are all the indexers on the same node or on different ones?&lt;/P&gt;&lt;P&gt;Are resources shared or dedicated? They should be dedicated.&lt;/P&gt;&lt;P&gt;Maybe there's a momentary issue on the NetApp side.&lt;/P&gt;
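&lt;P&gt;You can also check the OS-side history on the indexers to see whether the spikes line up with NetApp events, for example with sysstat (the sa file path may vary by distribution):&lt;/P&gt;&lt;PRE&gt;# historical per-device activity for today
sar -d -p
# or for a previous day, e.g. the 23rd
sar -d -p -f /var/log/sa/sa23&lt;/PRE&gt;&lt;P&gt;Ciao.&lt;/P&gt;&lt;P&gt;Giuseppe&lt;/P&gt;</description>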
      <pubDate>Sat, 24 Jun 2023 05:17:46 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Troubleshooting-High-Storage-I-O-Saturation-Spikes/m-p/648090#M9679</guid>
      <dc:creator>gcusello</dc:creator>
      <dc:date>2023-06-24T05:17:46Z</dc:date>
    </item>
  </channel>
</rss>

