We have a large number of forwarders and would like to reduce the volume of metrics data they send to the _internal index.
The main goal is to keep the index at a reasonable size while still having enough data to search.
Is there a way to aggregate the data or reduce the sampling rate?
There is a setting in limits.conf
[metrics]
interval = 30
maxseries = 10
Increasing the polling interval between samples from 30 seconds to, let's say, 90 would decrease sampling and save some storage, right?
Thanks for any hint.
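The change I have in mind would look something like this on each forwarder (just a sketch, keeping the default maxseries):

    [metrics]
    interval = 90
    maxseries = 10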
If you're certain the data you want to reduce is in _internal, then use limits.conf. In my experience, though, customers get a ton of data from perfmon inputs, and those are configured in inputs.conf. It comes down to whether you mean splunkd's own "metrics" (metrics.log) or your Perfmon metrics.
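One rough way to check which data is actually taking up the space is to look at license usage by sourcetype; something like this (a sketch, run over a bounded time range on the license master):

    index=_internal source=*license_usage.log type=Usage earliest=-24h
    | stats sum(b) as bytes by st, idx
    | sort - bytes

If Perfmon sourcetypes dominate, limits.conf won't help you; you'd need to change the perfmon inputs instead.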
The setting you want to change is indeed called "interval", but it's in inputs.conf. You'll need to change the setting for each perfmon stanza. Yes, changing from 30 to 90 seconds will decrease sampling and save storage.
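For example, a perfmon stanza in inputs.conf might look like this after the change (the object, counters, and stanza name here are just placeholders for whatever you actually collect):

    [perfmon://CPU]
    object = Processor
    counters = % Processor Time
    instances = *
    interval = 90

You'd repeat the interval change in each perfmon stanza you have, then restart (or reload) the forwarder.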
Hi! The documentation at https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Limitsconf
says that the interval for metrics.log in the _internal index is specified in limits.conf.
Or am I reading it wrong?
Thanks
[metrics]
interval = <integer>
* Number of seconds between logging splunkd metrics to metrics.log.
* Minimum of 10.
* Default: 30
maxseries = <integer>
* The number of series to include in the per_x_thruput reports in metrics.log.
* Default: 10