Monitoring Splunk

Why does the Distributed Management Console (DMC) consider cached memory to be used memory?

masonmorales
Influencer

Why does DMC consider cached memory to be used memory?

In DMC: Resource Usage: Deployment Page

Resource Usage by Instance
Instance    CPU Usage (%)    Physical Memory Capacity (MB)    Physical Memory Usage (MB)    Physical Memory Usage (%)
index01     1.61             64153                            60597                         94.46

From Linux CLI:

$ free -m
              total        used        free      shared  buff/cache   available
Mem:          64153        1678        3691        2668       58783       58876
Swap:             0           0           0

I thought that actual free memory = free + buffers + cached?
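
To show where my confusion comes from, here is a quick back-of-the-envelope check (plain Python, using only the numbers from the free -m output above, nothing queried live):

# Values (MB) taken from the "free -m" output above.
total      = 64153
used       = 1678    # processes only
free       = 3691
buff_cache = 58783

# What I would expect DMC to report: process memory only.
process_used_pct = 100.0 * used / total      # ~2.6%

# What DMC actually seems to report: everything that is not free,
# i.e. including buffers/cache.
not_free = total - free                      # 60462 MB, close to the 60597 MB shown by DMC
not_free_pct = 100.0 * not_free / total      # ~94.2%, close to the 94.46% shown by DMC

print(f"process-only usage:      {process_used_pct:.2f}%")
print(f"usage incl. buff/cache:  {not_free_pct:.2f}%")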

1 Solution

hexx
Splunk Employee

The Distributed Management Console assesses system-wide memory capacity, usage & availability based on the following events recorded by our platform instrumentation feature to $SPLUNK_HOME/var/log/introspection/resource_usage.log:

{"datetime":"08-05-2015 10:35:25.099 -0700","log_level":"INFO","component":"Hostwide","data":{"mem":"64390.848","mem_used":"42311.441","swap":"65535.992","swap_used":"1049.684","pg_paged_out":"62742991728","pg_swapped_out":"0","forks":"703426238","runnable_process_count":"2","normalized_load_avg_1min":"0.01","cpu_user_pct":"0.68","cpu_system_pct":"0.59","cpu_idle_pct":"98.73"}}

As you point out, memory usage is read from the mem_used field, and as of today this value does indeed reflect the memory usage of processes plus OS buffers and cache.

We have recently revisited this decision (internal item reference SPL-104917). Starting with an upcoming 6.2.x release (looking like 6.2.6, as of today) and our next major release, mem_used will report only the memory usage of processes and will therefore be a better indicator of the actual memory pressure observed on your server.
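
For illustration only (not a supported tool), here is a minimal sketch of how you could reproduce the percentage the DMC displays from these Hostwide events. It assumes only the mem and mem_used fields shown in the example event above, and you would need to point it at your own $SPLUNK_HOME:

import json

# Minimal sketch: compute the memory usage percentage the DMC displays
# from Hostwide events in resource_usage.log. Uses only the mem and
# mem_used fields (both in MB) shown in the example event above.
path = "/opt/splunk/var/log/introspection/resource_usage.log"  # adjust to your $SPLUNK_HOME

with open(path) as f:
    for line in f:
        event = json.loads(line)
        if event.get("component") != "Hostwide":
            continue
        data = event["data"]
        mem = float(data["mem"])
        mem_used = float(data["mem_used"])
        pct = 100.0 * mem_used / mem
        print(f'{event["datetime"]}  {mem_used:.0f}/{mem:.0f} MB  ({pct:.2f}%)')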


securediversity
Explorer

I encountered the same issue even on 6.6 (tested on 6.6.5 and 6.6.8).

The last detailed description of mem_used seems to be in the 6.4.10 documentation:

http://docs.splunk.com/Documentation/Splunk/6.4.10/RESTREF/RESTintrospect#server.2Fstatus.2Fresource...

Starting from 6.5, only a general description is listed, but I think that fix has never been included in Splunk (yet).

... and as we can see there, the buffers are still included, which would explain the behavior here on 6.6.8.
We have 1.5 GB of 64 GB in use, not counting buffers. Because the buffers are used intensively (on Linux, at least), the "used" amount is above 50 GB, which triggers alerts from the DMC.

Monitoring RAM including buffers is really bad practice. Will that ever be fixed?
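
In the meantime, this is roughly the check I would consider more honest (a sketch only, based on the standard MemAvailable field in /proc/meminfo on kernel 3.14+, which treats reclaimable cache as free):

# Rough sketch of a more honest memory-pressure check than "total - free":
# MemAvailable estimates how much memory is available for new workloads
# without swapping, i.e. it treats reclaimable cache as effectively free.
meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":", 1)
        meminfo[key] = int(value.split()[0])  # values are in kB

total_kb = meminfo["MemTotal"]
available_kb = meminfo.get("MemAvailable", meminfo["MemFree"])
used_pct = 100.0 * (total_kb - available_kb) / total_kb
print(f"memory actually under pressure: {used_pct:.1f}%")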


lacot5
Engager

The same is the case for 7.0.1 Enterprise.
The documentation says:
mem_used = total_phys_ram - (free_mem + buffer_mem + cached_mem)
http://docs.splunk.com/Documentation/Splunk/7.2.0/RESTREF/RESTintrospect#server.2Fstatus.2Fresource-...
But in reality, mem_used is ~106 GB, ~80% usage. We have 125 GB total, 20 GB free, and 104 GB cached/buffers.
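
For comparison, here is a small sketch (plain Python; MemTotal, MemFree, Buffers and Cached are the standard /proc/meminfo fields, the rest is my own naming) that computes the documented formula so it can be checked against what the DMC reports:

# Sketch of the documented formula:
#   mem_used = total_phys_ram - (free_mem + buffer_mem + cached_mem)
# computed from /proc/meminfo (values in kB) for comparison with the
# number the DMC actually shows.
meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":", 1)
        meminfo[key] = int(value.split()[0])

total   = meminfo["MemTotal"]
free    = meminfo["MemFree"]
buffers = meminfo["Buffers"]
cached  = meminfo["Cached"]

mem_used_per_docs = total - (free + buffers + cached)
print(f"mem_used per the documented formula:    {mem_used_per_docs / 1024 / 1024:.1f} GB")
print(f"total - free (what DMC seems to show):  {(total - free) / 1024 / 1024:.1f} GB")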

tommoore
Path Finder

Same issue here, even on 7.2.6 now!
