<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>considerations on using SSD for hot\cold indexes in Splunk Dev</title>
    <link>https://community.splunk.com/t5/Splunk-Dev/considerations-on-using-SSD-for-hot-cold-indexes/m-p/307037#M4053</link>
    <description>&lt;P&gt;For a small-scale distributed Splunk instance (30 GB/day) with all indexes currently on one disk.&lt;/P&gt;

&lt;P&gt;Planning to introduce an SSD for the hot\warm tier.&lt;/P&gt;

&lt;P&gt;I have read various posts on the subject.&lt;/P&gt;

&lt;P&gt;If we were to configure the indexes for, say, 30-60 days of hot\warm data before rolling to the slower disks, would there be anything to consider, such as:&lt;/P&gt;

&lt;P&gt;When a premium app such as ES also comes into play and the data model summary ranges are longer than the hot\warm retention.&lt;BR /&gt;
E.g.: the hot\warm index on SSD is kept for 30 days and then moved to slower disk, but the authentication data model is configured for 1 year - would that be a factor to consider or not?&lt;/P&gt;

&lt;P&gt;Anything else to consider?&lt;/P&gt;

&lt;P&gt;gratzi.&lt;/P&gt;</description>
    <pubDate>Sun, 27 Aug 2017 13:59:12 GMT</pubDate>
    <dc:creator>Skins</dc:creator>
    <dc:date>2017-08-27T13:59:12Z</dc:date>
    <item>
      <title>considerations on using SSD for hot\cold indexes</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/considerations-on-using-SSD-for-hot-cold-indexes/m-p/307037#M4053</link>
      <description>&lt;P&gt;For a small-scale distributed Splunk instance (30 GB/day) with all indexes currently on one disk.&lt;/P&gt;

&lt;P&gt;Planning to introduce an SSD for the hot\warm tier.&lt;/P&gt;

&lt;P&gt;I have read various posts on the subject.&lt;/P&gt;

&lt;P&gt;If we were to configure the indexes for, say, 30-60 days of hot\warm data before rolling to the slower disks, would there be anything to consider, such as:&lt;/P&gt;

&lt;P&gt;When a premium app such as ES also comes into play and the data model summary ranges are longer than the hot\warm retention.&lt;BR /&gt;
E.g.: the hot\warm index on SSD is kept for 30 days and then moved to slower disk, but the authentication data model is configured for 1 year - would that be a factor to consider or not?&lt;/P&gt;

&lt;P&gt;Anything else to consider?&lt;/P&gt;

&lt;P&gt;gratzi.&lt;/P&gt;</description>
      <pubDate>Sun, 27 Aug 2017 13:59:12 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/considerations-on-using-SSD-for-hot-cold-indexes/m-p/307037#M4053</guid>
      <dc:creator>Skins</dc:creator>
      <dc:date>2017-08-27T13:59:12Z</dc:date>
    </item>
    <item>
      <title>Re: considerations on using SSD for hot\cold indexes</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/considerations-on-using-SSD-for-hot-cold-indexes/m-p/307038#M4054</link>
      <description>&lt;P&gt;You can configure the storage location for DMA summaries separately; see tstatsHomePath &lt;A href="http://docs.splunk.com/Documentation/Splunk/latest/Admin/indexesconf"&gt;here&lt;/A&gt;.&lt;BR /&gt;
Switching to SSD will greatly improve performance for sparse and rare-term searches, where random-access speed matters most.&lt;BR /&gt;
Dense searches, by contrast, will become CPU-bound: once the I/O constraint is removed, the server will spend most of its time decompressing buckets.&lt;BR /&gt;
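&lt;/P&gt;

&lt;P&gt;As a rough sketch of that layout (the volume names, paths, and sizes here are hypothetical - verify every setting against the indexes.conf documentation linked above):&lt;/P&gt;

```ini
# Hypothetical indexes.conf sketch: hot/warm buckets and DMA summaries
# on an SSD volume, cold buckets on slower disk. Illustrative values only.
[volume:ssd]
path = /mnt/ssd/splunk
maxVolumeDataSizeMB = 900000

[volume:slow]
path = /mnt/hdd/splunk

[main]
homePath   = volume:ssd/main/db
coldPath   = volume:slow/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb
# tstatsHomePath must be specified relative to a volume definition
tstatsHomePath = volume:ssd/main/datamodel_summary
```

&lt;P&gt;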
Hope that helps. &lt;/P&gt;</description>
      <pubDate>Sun, 27 Aug 2017 19:08:48 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/considerations-on-using-SSD-for-hot-cold-indexes/m-p/307038#M4054</guid>
      <dc:creator>s2_splunk</dc:creator>
      <dc:date>2017-08-27T19:08:48Z</dc:date>
    </item>
    <item>
      <title>Re: considerations on using SSD for hot\cold indexes</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/considerations-on-using-SSD-for-hot-cold-indexes/m-p/307039#M4055</link>
      <description>&lt;P&gt;gratzi,&lt;/P&gt;

&lt;P&gt;Would it be best practice to host the tstatsHomePath on the SSD as well?&lt;/P&gt;</description>
      <pubDate>Mon, 28 Aug 2017 01:51:05 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/considerations-on-using-SSD-for-hot-cold-indexes/m-p/307039#M4055</guid>
      <dc:creator>Skins</dc:creator>
      <dc:date>2017-08-28T01:51:05Z</dc:date>
    </item>
    <item>
      <title>Re: considerations on using SSD for hot\cold indexes</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/considerations-on-using-SSD-for-hot-cold-indexes/m-p/307040#M4056</link>
      <description>&lt;P&gt;If you have sufficient space, yes, absolutely. &lt;/P&gt;</description>
      <pubDate>Mon, 28 Aug 2017 15:29:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/considerations-on-using-SSD-for-hot-cold-indexes/m-p/307040#M4056</guid>
      <dc:creator>s2_splunk</dc:creator>
      <dc:date>2017-08-28T15:29:47Z</dc:date>
    </item>
    <item>
      <title>Re: considerations on using SSD for hot\cold indexes</title>
      <link>https://community.splunk.com/t5/Splunk-Dev/considerations-on-using-SSD-for-hot-cold-indexes/m-p/307041#M4057</link>
      <description>&lt;P&gt;thx squire&lt;/P&gt;

&lt;P&gt;So, using the calculations from the following search:&lt;/P&gt;

&lt;P&gt;| dbinspect index=*&lt;BR /&gt;
| search tsidxState="full"&lt;BR /&gt;
| stats min(startEpoch) as MinStartTime max(startEpoch) as MaxStartTime min(endEpoch) as MinEndTime max(endEpoch) as MaxEndTime max(hostCount) as MaxHosts max(sourceTypeCount) as MaxSourceTypes sum(eventCount) as TotalEvents sum(rawSize) as TotalRawDataSizeMB sum(sizeOnDiskMB) as TotalDiskDataSizeMB by state&lt;BR /&gt;
| eval TotalRawDataSizeMB = round((TotalRawDataSizeMB/1024/1024),6)&lt;BR /&gt;
| eval MinStartTime=strftime(MinStartTime,"%Y/%m/%d %H:%M:%S")&lt;BR /&gt;
| eval MaxStartTime=strftime(MaxStartTime,"%Y/%m/%d %H:%M:%S")&lt;BR /&gt;
| eval MinEndTime=strftime(MinEndTime,"%Y/%m/%d %H:%M:%S")&lt;BR /&gt;
| eval MaxEndTime=strftime(MaxEndTime,"%Y/%m/%d %H:%M:%S")&lt;BR /&gt;
| eval PercentSizeReduction=round(((TotalRawDataSizeMB-TotalDiskDataSizeMB)/TotalRawDataSizeMB)*100,2)&lt;/P&gt;

&lt;P&gt;Run over a 90-day period&lt;BR /&gt;
(if that was how long I wanted to keep my hot\warm data before rolling to cold):&lt;/P&gt;

&lt;P&gt;state       TotalRawDataSizeMB  TotalDiskDataSizeMB PercentSizeReduction&lt;BR /&gt;
cold        27315.003618        8304.898440         69.60&lt;BR /&gt;
hot         49257.884926        15460.234388        68.61&lt;BR /&gt;
warm        1569389.609292      599056.425956       61.83&lt;/P&gt;

&lt;P&gt;Total hot &amp;amp; warm usage on disk = roughly 600GB&lt;/P&gt;
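&lt;P&gt;(Sanity-checking that figure from the hot and warm rows above - this is plain arithmetic, shown here in Python for clarity:)&lt;/P&gt;

```python
# Sum the hot and warm sizeOnDiskMB values from the dbinspect output
# above and convert MB to GB (1 GB = 1024 MB).
hot_mb = 15460.234388
warm_mb = 599056.425956
total_gb = (hot_mb + warm_mb) / 1024
print(round(total_gb, 1))  # -> 600.1
```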

&lt;P&gt;So a 1 TB SSD would suffice in this instance?&lt;/P&gt;

&lt;P&gt;If a disk of that size was unavailable, could we split the indexes - putting the ones we use most on the SSD and leaving the others where they are?&lt;/P&gt;

&lt;P&gt;How would you make the same calculation for the DMA summaries?&lt;/P&gt;</description>
      <pubDate>Tue, 29 Aug 2017 06:12:15 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Dev/considerations-on-using-SSD-for-hot-cold-indexes/m-p/307041#M4057</guid>
      <dc:creator>Skins</dc:creator>
      <dc:date>2017-08-29T06:12:15Z</dc:date>
    </item>
  </channel>
</rss>

