<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Question about compression in Knowledge Management</title>
    <link>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98965#M7385</link>
    <description>&lt;P&gt;Hello&lt;/P&gt;

&lt;P&gt;It would be more like the second option. As the data arrives it is compressed, and it then rolls through the other bucket states at the same compression rate.&lt;/P&gt;

&lt;P&gt;Regards&lt;/P&gt;</description>
    <pubDate>Tue, 15 Oct 2013 08:55:44 GMT</pubDate>
    <dc:creator>gfuente</dc:creator>
    <dc:date>2013-10-15T08:55:44Z</dc:date>
    <item>
      <title>Question about compression</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98962#M7382</link>
      <description>&lt;P&gt;Hi!&lt;/P&gt;

&lt;P&gt;I would like to quickly confirm something about compression.&lt;/P&gt;

&lt;P&gt;In the documentation, compression is roughly described here:&lt;/P&gt;

&lt;P&gt;&lt;A href="http://docs.splunk.com/Documentation/Splunk/5.0.5/Indexer/Systemrequirements#Storage_considerations"&gt;http://docs.splunk.com/Documentation/Splunk/5.0.5/Indexer/Systemrequirements#Storage_considerations&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;Is it correct to understand that the 50% figure applies when the data is stored in the hot bucket?&lt;/P&gt;

&lt;P&gt;Thanks,&lt;BR /&gt;
yu&lt;/P&gt;</description>
      <pubDate>Tue, 15 Oct 2013 07:15:21 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98962#M7382</guid>
      <dc:creator>yuwtennis</dc:creator>
      <dc:date>2013-10-15T07:15:21Z</dc:date>
    </item>
    <item>
      <title>Re: Question about compression</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98963#M7383</link>
      <description>&lt;P&gt;Hello&lt;/P&gt;

&lt;P&gt;All bucket types (hot, warm, cold) should have the same compression ratio. When the data is indexed, the index files come to around 35% of the original data size and the compressed raw data adds roughly another 15%, which sums to about 50%. This applies to every kind of bucket.&lt;/P&gt;
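
&lt;P&gt;To make the arithmetic concrete, here is a minimal SPL sketch you can paste into the search bar. It applies the rough 35% (index files) and 15% (compressed raw data) estimates above to 1 GB of raw data; those percentages are rules of thumb, not exact values for any given dataset:&lt;/P&gt;

```spl
| makeresults
| eval raw_mb=1024
| eval index_mb=raw_mb*0.35, rawdata_mb=raw_mb*0.15
| eval est_total_mb=index_mb+rawdata_mb
| table raw_mb index_mb rawdata_mb est_total_mb
```

&lt;P&gt;With these figures, est_total_mb comes out at 512 MB, i.e. half the raw size, which is where the 50% rule of thumb comes from.&lt;/P&gt;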

&lt;P&gt;Regards&lt;/P&gt;</description>
      <pubDate>Tue, 15 Oct 2013 08:14:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98963#M7383</guid>
      <dc:creator>gfuente</dc:creator>
      <dc:date>2013-10-15T08:14:37Z</dc:date>
    </item>
    <item>
      <title>Re: Question about compression</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98964#M7384</link>
      <description>&lt;P&gt;Hi gfuente!&lt;/P&gt;

&lt;P&gt;Thank you for the reply.&lt;/P&gt;

&lt;P&gt;What I was trying to ask is: would the compression rate work as follows?&lt;/P&gt;

&lt;P&gt;Original data : 1   GB&lt;BR /&gt;
Hot           : 500 MB&lt;BR /&gt;
Warm          : 250 MB&lt;BR /&gt;
Cold          : 125 MB&lt;/P&gt;

&lt;P&gt;or&lt;/P&gt;

&lt;P&gt;Original data : 1   GB&lt;BR /&gt;
Hot           : 500 MB (the compression rate applies only at the first stage)&lt;BR /&gt;
Warm          : 500 MB&lt;BR /&gt;
Cold          : 500 MB&lt;/P&gt;

&lt;P&gt;Thanks,&lt;BR /&gt;
Yu&lt;/P&gt;</description>
      <pubDate>Tue, 15 Oct 2013 08:39:08 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98964#M7384</guid>
      <dc:creator>yuwtennis</dc:creator>
      <dc:date>2013-10-15T08:39:08Z</dc:date>
    </item>
    <item>
      <title>Re: Question about compression</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98965#M7385</link>
      <description>&lt;P&gt;Hello&lt;/P&gt;

&lt;P&gt;It would be more like the second option. As the data arrives it is compressed, and it then rolls through the other bucket states at the same compression rate.&lt;/P&gt;

&lt;P&gt;Regards&lt;/P&gt;</description>
      <pubDate>Tue, 15 Oct 2013 08:55:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98965#M7385</guid>
      <dc:creator>gfuente</dc:creator>
      <dc:date>2013-10-15T08:55:44Z</dc:date>
    </item>
    <item>
      <title>Re: Question about compression</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98966#M7386</link>
      <description>&lt;P&gt;In addition to gfuente's answer, see this one &lt;A href="http://answers.splunk.com/answers/57248/compression-rate-of-indexed-data-50gigday-in-3-weeks-uses-100gig-hdd-space"&gt;http://answers.splunk.com/answers/57248/compression-rate-of-indexed-data-50gigday-in-3-weeks-uses-100gig-hdd-space&lt;/A&gt; if you're interested in how to get the real compression rate of your indexed data.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Oct 2013 10:44:57 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98966#M7386</guid>
      <dc:creator>MuS</dc:creator>
      <dc:date>2013-10-15T10:44:57Z</dc:date>
    </item>
    <item>
      <title>Re: Question about compression</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98967#M7387</link>
      <description>&lt;P&gt;The Fire Brigade application has the calculation for "actual" compression built-in.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Oct 2013 13:48:07 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98967#M7387</guid>
      <dc:creator>sowings</dc:creator>
      <dc:date>2013-10-15T13:48:07Z</dc:date>
    </item>
    <item>
      <title>Re: Question about compression</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98968#M7388</link>
      <description>&lt;P&gt;and here is the link to it &lt;A href="http://apps.splunk.com/app/1581/"&gt;http://apps.splunk.com/app/1581/&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 15 Oct 2013 13:51:02 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98968#M7388</guid>
      <dc:creator>MuS</dc:creator>
      <dc:date>2013-10-15T13:51:02Z</dc:date>
    </item>
    <item>
      <title>Re: Question about compression</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98969#M7389</link>
      <description>&lt;P&gt;Just so you know, compression may not always work out in your favor. If you are dealing with highly structured, dense, variable data you may encounter situations where the "compressed" data is significantly larger than the raw data. In our case, we end up with data which is about 114-140% the original size because of the size of our index files. We are consuming CSV files with 300+ fields. The best way to tell is use fire brigade and see what the data turns into.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Oct 2013 14:00:02 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98969#M7389</guid>
      <dc:creator>msarro</dc:creator>
      <dc:date>2013-10-15T14:00:02Z</dc:date>
    </item>
    <item>
      <title>Re: Question about compression</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98970#M7390</link>
      <description>&lt;P&gt;Check this answer, if you want to test your compression rate.&lt;/P&gt;

&lt;P&gt;&lt;A href="http://answers.splunk.com/answers/52075/compression-rate-for-indexes-hot-warm-cold-frozen"&gt;http://answers.splunk.com/answers/52075/compression-rate-for-indexes-hot-warm-cold-frozen&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 15 Oct 2013 15:27:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98970#M7390</guid>
      <dc:creator>yannK</dc:creator>
      <dc:date>2013-10-15T15:27:33Z</dc:date>
    </item>
    <item>
      <title>Re: Question about compression</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98971#M7391</link>
      <description>&lt;P&gt;Hi sowings and MuS.&lt;/P&gt;

&lt;P&gt;Thanks for the introduction.&lt;/P&gt;

&lt;P&gt;I am running Fire Brigade in a distributed environment, but I am getting this error on the index server.&lt;/P&gt;

&lt;P&gt;[map]: Could not find an index named "$summary$". err='index=$summary$ Could not load configuration'&lt;/P&gt;

&lt;P&gt;Have you ever experienced this?&lt;/P&gt;

&lt;P&gt;Thanks,&lt;BR /&gt;
Yu&lt;/P&gt;</description>
      <pubDate>Wed, 16 Oct 2013 07:54:09 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98971#M7391</guid>
      <dc:creator>yuwtennis</dc:creator>
      <dc:date>2013-10-16T07:54:09Z</dc:date>
    </item>
    <item>
      <title>Re: Question about compression</title>
      <link>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98972#M7392</link>
      <description>&lt;P&gt;I figured it out.&lt;/P&gt;

&lt;P&gt;There were two '$' marks in the DB Inspect search.&lt;/P&gt;

&lt;P&gt;Once I deleted the '$' mark on each side, it worked.&lt;/P&gt;</description>
      <pubDate>Wed, 16 Oct 2013 09:51:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Knowledge-Management/Question-about-compression/m-p/98972#M7392</guid>
      <dc:creator>yuwtennis</dc:creator>
      <dc:date>2013-10-16T09:51:44Z</dc:date>
    </item>
  </channel>
</rss>

