Hi!
I would like to quickly confirm something about compression.
Compression is described roughly in this document:
http://docs.splunk.com/Documentation/Splunk/5.0.5/Indexer/Systemrequirements#Storage_considerations
Is it correct to understand that the ~50% compression is applied when the data is stored into the hot bucket?
Thanks,
yu
The Fire Brigade application has the calculation for "actual" compression built-in.
Check this answer if you want to test your compression rate.
http://answers.splunk.com/answers/52075/compression-rate-for-indexes-hot-warm-cold-frozen
I figured it out.
There were two '$' marks in the DB Inspect search.
Once I deleted the '$' mark on each side, it worked.
Hi sowings and MuS.
Thanks for the introduction.
I am running Fire Brigade in a distributed environment, but I am getting this error on the index server:
[map]: Could not find an index named "$summary$". err='index=$summary$ Could not load configuration'
Have you ever experienced this?
Thanks,
Yu
Just so you know, compression may not always work out in your favor. If you are dealing with highly structured, dense, variable data, you may encounter situations where the "compressed" data is significantly larger than the raw data. In our case, we end up with data that is about 114-140% of the original size because of the size of our index files. We are consuming CSV files with 300+ fields. The best way to tell is to use Fire Brigade and see what the data turns into.
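To make the point above concrete, here is a rough back-of-the-envelope sketch (the percentages are assumptions: the ~15%/~35% split comes from the Splunk docs' guidance, and the dense-CSV index overhead is a hypothetical figure consistent with the 114-140% reported above):

```python
# Rough sketch (hypothetical numbers): why wide, dense CSV data can
# exceed 100% of raw size on disk.
raw_gb = 100.0  # raw data ingested

# Typical case per the Splunk docs: rawdata compresses to ~15% of raw,
# index (tsidx) files add ~35%, for ~50% total on disk.
typical_on_disk = raw_gb * (0.15 + 0.35)

# Dense CSV with 300+ fields: rawdata still compresses well, but the
# index files balloon because nearly every value becomes an indexed term.
dense_on_disk = raw_gb * (0.15 + 1.05)  # ~120% of raw, hypothetical

print(f"typical:   {typical_on_disk:.0f} GB ({typical_on_disk / raw_gb:.0%} of raw)")
print(f"dense CSV: {dense_on_disk:.0f} GB ({dense_on_disk / raw_gb:.0%} of raw)")
```

The takeaway is that the index files, not the compressed rawdata, dominate in the wide-CSV case, which is why measuring with Fire Brigade beats assuming the 50% rule.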
and here is the link to it http://apps.splunk.com/app/1581/
Hello
All bucket types (hot, warm, cold) should have the same compression ratio. When the data is indexed, the index files should be around 35% of the original data, and the compressed raw data an additional 15%, which sums to 50%. This applies to any kind of bucket.
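As a quick sketch of that arithmetic (the 35%/15% figures are the rough guidance quoted above, not guarantees):

```python
# The ~50% rule: on-disk size = compressed rawdata + index files,
# applied once at indexing time, the same for hot/warm/cold buckets.
original_gb = 1.0
rawdata = original_gb * 0.15      # compressed raw data, ~15%
index_files = original_gb * 0.35  # tsidx/index files, ~35%
on_disk = rawdata + index_files

print(on_disk)  # 0.5 GB, regardless of bucket state
```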
Regards
In addition to gfuente's answer, see this one http://answers.splunk.com/answers/57248/compression-rate-of-indexed-data-50gigday-in-3-weeks-uses-10... if you're interested in how to get the real compression rate of your indexed data.
Hello
It would be more like the second option. As the data arrives, it is compressed, and it then rolls through the other bucket states at the same compression rate.
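In other words, rolling a bucket from hot to warm to cold is essentially a move, not a recompression. A minimal sketch of that model, using the 50% figure from the docs:

```python
# Sketch: bucket size through its lifecycle, assuming compression is
# applied once at indexing time and rolling does not recompress.
original_mb = 1000
on_disk_mb = original_mb * 0.5  # ~50% after indexing (rawdata + index)

sizes = {stage: on_disk_mb for stage in ("hot", "warm", "cold")}
print(sizes)  # {'hot': 500.0, 'warm': 500.0, 'cold': 500.0}
```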
regards
Hi gfuente!
Thank you for the reply.
What I was trying to ask is: which of the following would the compression rate follow?
Original data : 1 GB
Hot : 500 MB
Warm : 250 MB
Cold : 125 MB
or
Original data : 1 GB
Hot : 500 MB (the compression rate applies only at the first stage)
Warm : 500 MB
Cold : 500 MB
Thanks,
Yu