Getting Data In

What is the compression ratio between the forwarders and indexers?

kreng
New Member

I need the approximate compression ratio of the data forwarded to indexers.

0 Karma

adonio
Ultra Champion

Hello there,
Splunk estimates an average of about 50% compression; read the details here:
http://docs.splunk.com/Documentation/Splunk/6.6.2/Indexer/Systemrequirements
The link provided by @lfedak_splunk is also good.
You can always check for yourself: bring the data into Splunk and then run the following search:

    | dbinspect index=*
    | fields state,id,rawSize,sizeOnDiskMB,index
    | stats sum(rawSize) AS rawTotal, sum(sizeOnDiskMB) AS diskTotalinMB by index
    | eval rawTotalinMB=(rawTotal / 1024 / 1024) | fields - rawTotal
    | eval compression=tostring(round(100 - diskTotalinMB / rawTotalinMB * 100, 2)) + "%"

I would suggest ignoring indexes with very little data, since the calculation includes the metadata files within each index, so you might see a huge negative compression figure on those tiny indexes; one way to filter them out is shown in the sketch below.
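For example, here is a sketch of the same search with a filter added so that only indexes above a hypothetical 100 MB of raw data are counted (the 100 MB cutoff is just an assumption; adjust it to your environment):

    | dbinspect index=*
    | fields state,id,rawSize,sizeOnDiskMB,index
    | stats sum(rawSize) AS rawTotal, sum(sizeOnDiskMB) AS diskTotalinMB by index
    | eval rawTotalinMB=(rawTotal / 1024 / 1024) | fields - rawTotal
    | where rawTotalinMB > 100
    | eval compression=tostring(round(100 - diskTotalinMB / rawTotalinMB * 100, 2)) + "%"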
hope it helps

0 Karma

lfedak_splunk
Splunk Employee

Hey @kreng, I saw this similar post and thought it might help answer your question: https://answers.splunk.com/answers/63384/what-kind-of-compression-is-used-between-forwarders-and-ind...

0 Karma

jkat54
SplunkTrust

To add to the discussion here: for SSL compression between forwarders and indexers, we generally go with about 13 to 1.
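Not from this thread, but as a hedged sketch of where that compression gets turned on: for a plain splunktcp connection the documented compressed setting has to match on both the forwarder's outputs.conf and the indexer's inputs.conf, while for splunktcp-ssl the compression is negotiated as part of the SSL/TLS layer instead. A minimal example, assuming a receiving port of 9997 and an illustrative output group name:

    # outputs.conf on the forwarder (the group name "primary_indexers" is just an example)
    [tcpout:primary_indexers]
    server = idx1.example.com:9997
    compressed = true

    # inputs.conf on the indexer -- must match the forwarder's setting
    [splunktcp://9997]
    compressed = true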

0 Karma