I'm looking to index and store a ton of data (syslog). My question is: once Splunk has indexed the data and moved it to the various buckets, is there any dedup or compression that happens? Is there a document someplace that explains the process in more detail?
Thanks
Hello ihingos,
To answer your question: Splunk does not dedup raw events, but it does compress them. However, Splunk allows you to dedup events in the search query language (yoursearch | dedup _raw …). Depending on the cardinality of your data you can get fairly high compression ratios. Compression will also vary depending on bucket and index sizes.
In general, the formula for estimating disk usage is: ( daily average indexing rate ) x ( retention policy ) x 1/2
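That formula can be sketched as a small helper. This is just a rough sizing estimate, not an official Splunk calculation; the function name and the 0.5 compression factor (the "x 1/2" above) are illustrative assumptions, and actual ratios will vary with your data.

```python
def estimated_disk_usage_gb(daily_rate_gb, retention_days, compression=0.5):
    # Rough Splunk storage estimate:
    # daily average indexing rate x retention period x ~50% compression.
    # The 0.5 factor is an assumption; real ratios depend on data cardinality.
    return daily_rate_gb * retention_days * compression

# e.g. 100 GB/day of syslog kept for 90 days:
print(estimated_disk_usage_gb(100, 90))  # 4500.0 GB
```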
Additional Reading: