<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Attempt to thaw data slow and ended unexpectedly in Deployment Architecture</title>
    <link>https://community.splunk.com/t5/Deployment-Architecture/Attempt-to-thaw-data-slow-and-ended-unexpectedly/m-p/215535#M21613</link>
    <description>&lt;P&gt;The solution we used was to break the work into several PowerShell scripts (5-10) and run them all concurrently. This didn't noticeably impact performance on the indexer, and each script ran at the same speed as a single one would. So if you need to thaw a few thousand buckets, split the work across several concurrent scripts, unless you want to leave a single script running for several days.&lt;/P&gt;</description>
    <pubDate>Thu, 10 Nov 2016 15:16:56 GMT</pubDate>
    <dc:creator>michael_sleep</dc:creator>
    <dc:date>2016-11-10T15:16:56Z</dc:date>
    <item>
      <title>Attempt to thaw data slow and ended unexpectedly</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Attempt-to-thaw-data-slow-and-ended-unexpectedly/m-p/215534#M21612</link>
      <description>&lt;P&gt;Hello, we have an indexer cluster with one master node and two peer indexers. Our data moves to a frozen state after 3 months, and we needed to run a report on 6 months' worth of data. I moved the buckets for the necessary index from the "frozendb" folder to the "thaweddb" folder and created a script to issue the following command for each bucket (substituting each bucket's actual name):&lt;/P&gt;

&lt;P&gt;splunk rebuild R:\splunkdb\sp_72_logs\thaweddb\rb_1469724228_1469715286_114_CACEB811-4B3C-4B60-AE46-A061185F4F10&lt;/P&gt;
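
&lt;P&gt;A minimal sketch of such a rebuild loop in PowerShell (the Splunk install path is an assumption, and the index path is just the example above):&lt;/P&gt;

&lt;PRE&gt;# Rebuild every bucket directory found under thaweddb (paths are examples)
$thawed = "R:\splunkdb\sp_72_logs\thaweddb"
Get-ChildItem -Path $thawed -Directory | ForEach-Object {
    &amp;amp; "C:\Program Files\Splunk\bin\splunk.exe" rebuild $_.FullName
}&lt;/PRE&gt;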

&lt;P&gt;This process took over 2 days and was still running when my PowerShell session ended abruptly. I'm curious what people would recommend in this case, and whether anyone else has found the thawing process to be extremely slow. There doesn't appear to be a way to tell which buckets have and haven't been rebuilt, or how far the process got, so I can sort it out. Should I just run the process again and hope for the best? Will that duplicate the data? Why are rebuilds so slow?&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 11:43:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Attempt-to-thaw-data-slow-and-ended-unexpectedly/m-p/215534#M21612</guid>
      <dc:creator>michael_sleep</dc:creator>
      <dc:date>2020-09-29T11:43:41Z</dc:date>
    </item>
    <item>
      <title>Re: Attempt to thaw data slow and ended unexpectedly</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Attempt-to-thaw-data-slow-and-ended-unexpectedly/m-p/215535#M21613</link>
      <description>&lt;P&gt;The solution we used was to break the work into several PowerShell scripts (5-10) and run them all concurrently. This didn't noticeably impact performance on the indexer, and each script ran at the same speed as a single one would. So if you need to thaw a few thousand buckets, split the work across several concurrent scripts, unless you want to leave a single script running for several days.&lt;/P&gt;</description>
      <pubDate>Thu, 10 Nov 2016 15:16:56 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Attempt-to-thaw-data-slow-and-ended-unexpectedly/m-p/215535#M21613</guid>
      <dc:creator>michael_sleep</dc:creator>
      <dc:date>2016-11-10T15:16:56Z</dc:date>
    </item>
  </channel>
</rss>