Why is there constant memory growth with Search Head and Universal Forwarder?

hrawat_splunk
Splunk Employee

I installed a Universal Forwarder and have not added any inputs yet, but memory still grows gradually.
Why is there constant memory growth with the Universal Forwarder?
More importantly, in a Kubernetes (K8s) cluster setting, every extra MB of memory usage matters.

This applies to all Splunk instances except Indexers/Heavy Forwarders.

1 Solution

hrawat_splunk
Splunk Employee

The reason for the memory growth is the auto-tuning of the max_inactive and lowater_inactive settings in limits.conf.
With auto-tuning:
max_inactive = 96 (if total system memory is < 8 GB)
max_inactive = 1024 (if total system memory is >= 8 GB and < 26 GB)
max_inactive = 32768 (if total system memory is >= 26 GB)
lowater_inactive = (max_inactive / 3)
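
For sizing a pod or VM, here is a minimal Python sketch of that rule; the thresholds are the ones listed in this post, and splunkd's actual internal logic may differ:

def auto_tuned_input_channel_limits(total_ram_gb):
    """Reproduce the auto-tuning rule described above (illustrative only)."""
    if total_ram_gb < 8:
        max_inactive = 96
    elif total_ram_gb < 26:
        max_inactive = 1024
    else:
        max_inactive = 32768
    # lowater_inactive is derived as max_inactive / 3
    return max_inactive, max_inactive // 3

# Example: a 32 GB node -> (32768, 10922); a 4 GB pod -> (96, 32)
print(auto_tuned_input_channel_limits(32))
print(auto_tuned_input_channel_limits(4))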

max_inactive = <integer>
* The maximum number of inactive input channel configurations to keep in cache.
* Each source/sourcetype/host combination requires an independent input
  channel, which contains all relevant settings for ingestion.
* When set to 'auto', the Splunk platform will tune this setting based on the
  physical RAM present in the server at startup.
* Increasing this number might help with low ingestion throughput when there
  are no blocked queues (i.e., no 'blocked=true' events for 'group=queue' in
  metrics.log), and splunkd is creating a very high number of new input
  channels (see the value of 'new_channels' in
  'group=map, name=pipelineinputchannel', also in metrics.log), usually in the
  order of thousands. However, this action is only effective when those input
  channels could have been reused: for example, the source, sourcetype, and
  host fields are not generated randomly and tend to be reused within the
  lifetime of cached channel entries.
* Default: auto
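
A quick way to check for the symptoms described above directly on the host, without a search (a sketch, assuming $SPLUNK_HOME points at your install directory):

# Are any queues blocked?
grep 'blocked=true' "$SPLUNK_HOME/var/log/splunk/metrics.log" | grep 'group=queue' | tail -5

# How many new input channels is splunkd creating?
grep 'name=pipelineinputchannel' "$SPLUNK_HOME/var/log/splunk/metrics.log" | tail -5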

lowater_inactive = <integer>
* Size of the inactive input channel cache after which entries will be
  considered for recycling, that is, having their memory reused to store
  settings for a different input channel.
* When set to 'auto', the Splunk platform will tune this setting value based
  on the value of 'max_inactive'.
* Default: auto

As a result, the Universal Forwarder or Search Head creates and keeps a minimum cache of inactive channels, as dictated by the lowater_inactive setting.
However, these high settings are useful only on Indexers and Heavy Forwarders; for an edge Universal Forwarder or a Search Head, such high values provide no benefit.

Workaround:
Set `max_inactive` as low as possible.

Example (in limits.conf):
[input_channels]
max_inactive=10
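
A sketch of one way to apply and verify this on a Universal Forwarder; the system/local path and the restart step are assumptions here, and a deployment-server app works just as well:

# $SPLUNK_HOME/etc/system/local/limits.conf
[input_channels]
max_inactive = 10

# Verify the effective setting, then restart for it to take effect:
$SPLUNK_HOME/bin/splunk btool limits list input_channels --debug
$SPLUNK_HOME/bin/splunk restart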


PickleRick
SplunkTrust

Unless you deliberately disable inputs, there are at least some internal Splunk inputs enabled by default right after installation. So Splunk reads its own logs and wants to send them to the indexers, as configured in outputs.conf. You therefore can't say that you just installed the UF and didn't enable any inputs.
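
For example, you can list the inputs that are enabled out of the box with btool (a sketch, assuming a standard install); the monitor stanzas for $SPLUNK_HOME/var/log/splunk are what make the UF ingest its own logs:

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep 'monitor://'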

jstratton
Engager

@PickleRick I believe the intent of the original post is the memory usage of a newly installed Splunk UF that hasn't been configured with any inputs.conf entries beyond Splunk's defaults.

@hrawat_splunk What, if any, drawbacks are there to setting `max_inactive = 10`? Splunk Support referred me to your post for a support case I filed that matches your original post.


hrawat_splunk
Splunk Employee

@jstratton No negative impact. You can also set max_inactive = 96, which was the default value for all Splunk versions up to 7.3.6.

A higher max_inactive is needed only on the receiving side (Indexers/Heavy Forwarders).
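
For instance, a limits.conf targeted only at forwarders and search heads might pin the pre-7.3.6 default, while indexers and heavy forwarders stay on auto (illustrative sketch only):

# limits.conf pushed to Universal Forwarders / Search Heads only
[input_channels]
max_inactive = 96

# Indexers / Heavy Forwarders: keep the default (max_inactive = auto)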

hrawat_splunk
Splunk Employee

@jstratton vm.overcommit_memory = 1 is not OK. It must be vm.overcommit_memory = 0.
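
Checking and changing this is standard Linux sysctl usage (a sketch; the drop-in file name is just an example):

# Check the current value (1 = always overcommit, 0 = heuristic overcommit)
cat /proc/sys/vm/overcommit_memory

# Set it to 0 at runtime
sudo sysctl -w vm.overcommit_memory=0

# Persist across reboots (example file name)
echo 'vm.overcommit_memory = 0' | sudo tee /etc/sysctl.d/99-overcommit.conf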


PickleRick
SplunkTrust

vm.overcommit_memory has nothing to do with whether the splunkd process leaks memory or not. It changes the kernel's memory-management behavior and might cause it to trigger the OOM killer earlier or later, but it has nothing to do with the process itself.


hrawat_splunk
Splunk Employee

It's not a splunkd memory leak. It's how memory gets allocated depending on the overcommit setting.
