We have a replication factor of 2 and a search factor of 2 across two sites in a clustered environment.
For an index with 11 GB of license consumption per day, it consumed 40 GB of disk space. I just want to know what the relationship between the two could be. What attributes affect disk space usage on the indexer? Thanks in advance.
It all depends on what you are doing with the 11GB you are indexing. If it is "classic" log data (e.g., syslog), you are looking at about 15% of ingest for raw data and 35% for index files (= 50%). With rf:2 and sf:2 you have two copies of each, so you will use about 11GB (2 × 5.5GB).
If your data is very verbose (e.g., JSON) and you are using indexed extractions, you can see upwards of 150% of ingest for storage, × 2 = 300% (33GB).
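The arithmetic above can be sketched as a quick estimator. The ratios and the function name are illustrative assumptions taken from the rules of thumb in this answer, not Splunk settings or measured values:

```python
# Rough disk-usage estimator for a clustered Splunk index.
# Compression ratios are the rule-of-thumb figures from the answer
# above (illustrative assumptions, not measurements).

def estimate_disk_gb(daily_ingest_gb, storage_ratio, copies):
    """storage_ratio: on-disk size as a fraction of raw ingest;
    copies: number of bucket copies kept across the cluster."""
    return daily_ingest_gb * storage_ratio * copies

# "Classic" syslog-style data: ~15% rawdata + ~35% index files = 50%
classic = estimate_disk_gb(11, 0.15 + 0.35, copies=2)   # ~11 GB

# Verbose JSON with indexed extractions: up to ~150% of ingest
verbose = estimate_disk_gb(11, 1.50, copies=2)          # ~33 GB

print(f"classic: {classic:.1f} GB, verbose: {verbose:.1f} GB")
```

Plugging in your own measured per-index compression ratio (on-disk bucket size divided by licensed ingest) gives a more realistic figure than these defaults.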
Any other processing you do (summary indexing, reporting, etc.) will take additional space.
Thank you @hunderliggur. Your explanation fits my environment well, as we use many indexed extractions and tstats. Since we have many reports based on tstats, what would be a better way to optimize my disk space? Thanks in advance.
The only quick solution would be to reduce your searchable copies to 1. However, if you have a node failure you will see a processing delay while the cluster makes another copy searchable, as explained here: https://docs.splunk.com/Documentation/Splunk/7.3.1/Indexer/Thesearchfactor
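For a two-site cluster, that change is made on the cluster master. A sketch of the relevant server.conf stanza, assuming multisite clustering; the exact origin/total values depend on your topology, so treat these as an example rather than a recommendation:

```ini
# server.conf on the cluster master (multisite example, values illustrative)
[clustering]
mode = master
multisite = true
site_replication_factor = origin:1, total:2   # still keep two copies of raw data
site_search_factor = origin:1, total:1        # only one searchable copy
```

Dropping the search factor removes the duplicate tsidx files, which is where indexed extractions cost the most space, while replication still protects the raw data.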
Otherwise, you need to quantify where your space is being used. Is it index buckets and metadata, dispatch files, job history, etc.?
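One place to start quantifying bucket usage is the dbinspect command. A sketch of a search that sums on-disk bucket size per index (verify the field names against the dbinspect documentation for your version):

```
| dbinspect index=*
| stats sum(sizeOnDiskMB) as totalMB by index
| sort - totalMB
```

Comparing the result against your licensed ingest per index shows which indexes have the worst storage-to-ingest ratio.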
The 11GB reported by your license consumption includes only the amount of data that was actually indexed; replication does not count against your license.
The 40GB would reflect indexed data, replicated data, metadata files, logs, etc.
Replication factor would be the biggest attribute affecting disk consumption, but you may also want to look at compression if you have not already tuned it.
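A sketch of where that compression setting lives, assuming a Splunk version that supports the journalCompression attribute in indexes.conf (the rawdata journal is gzip-compressed by default; verify availability and values for your version before use):

```ini
# indexes.conf (example index name is hypothetical)
[my_verbose_index]
# Newer Splunk versions support alternative journal compression algorithms;
# zstd typically trades a little CPU for smaller rawdata on disk.
journalCompression = zstd
```

Note this only affects the rawdata journal; tsidx files, which grow with indexed extractions, are not controlled by this setting.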