Splunk Search

Indexer

SN1
Path Finder

I want to get the total memory allocated on one indexer and how much memory it is using, so that I can work out the remaining disk space left.


livehybrid
SplunkTrust

You can also use an mstats query against the _metrics index:

 

| mstats latest(_value) as val WHERE index=_metrics AND metric_name=spl.intr.disk_objects.Partitions.data.* by data.mount_point, metric_name
| rename data.mount_point as mount_point
| eval metric_name=replace(metric_name,"spl.intr.disk_objects.Partitions.data.","")
| eval {metric_name}=val
| stats latest(*) as * by mount_point
| eval free = if(isnotnull(available), available, free) 
| eval usage = round((capacity - free) / 1024, 2) 
| eval capacity = round(capacity / 1024, 2) 
| eval compare_usage = usage." / ".capacity 
| eval pct_usage = round(usage / capacity * 100, 2) 
| stats first(compare_usage) AS compare_usage first(pct_usage) as pct_usage by mount_point 
| rename mount_point as "Mount Point", compare_usage as "Disk Usage (GB)", pct_usage as "Disk Usage (%)"

 

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards

Will


livehybrid
SplunkTrust

Hi @SN1 

Some good answers here, but it's worth noting that for me

| rest /services/server/status/partitions-space

doesn't give me the right data, and the results can depend on how your partitions are configured (e.g. multiple partitions for hot/warm/cold, etc.).

If you're using Linux then it's also worth checking something as simple as this on the Linux command line:

df -h

This will list all the filesystems on the server and show you the size, used, and available disk space for each.
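
For example, output along these lines (purely illustrative; your filesystems, sizes, and mount points will differ):

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   12G   38G  24% /
/dev/sdb1        16T   11T  5.0T  69% /opt/splunk/var/lib/splunk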

I'd definitely recommend setting up proper monitoring using the Splunk TA for *Nix to cover your servers and all of their partitions and filesystems; a sketch of a search over that data is below.
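
Once the add-on is collecting data, a search along these lines can report filesystem usage per host (a rough sketch only - the index name "os" and the df field names such as Size, Used, Avail, UsePct and MountedOn are assumptions that depend on how you configure the add-on):

index=os sourcetype=df
| stats latest(Size) as size latest(Used) as used latest(Avail) as avail latest(UsePct) as use_pct by host, MountedOn
| table host, MountedOn, size, used, avail, use_pct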

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards

Will


PickleRick
SplunkTrust

The difference between disk usage and memory has already been pointed out.

There is also one more thing worth noting - the disk utilization on indexers is usually managed by adjusting retention parameters (you might also get some additional usage from knowledge bundles and intermediate search results, but that is rarely very significant). Memory usage, on the other hand, can vary greatly depending on the load at the time of checking, since memory is used mostly for searching - the more complicated the searches you're running at any given moment, the higher the memory usage.
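
If you want to see that search-driven memory usage for yourself, the introspection data gives a per-search-process view. A rough sketch, assuming the standard _introspection resource_usage fields are being populated on your instance:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process_type=search
| stats max(data.mem_used) as peak_mem_used_mb by data.search_props.sid, host
| sort - peak_mem_used_mb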


kiran_panchavat
SplunkTrust

@SN1 

Splunk indexers store data on disk in indexes, and the "total memory allocated" could refer to the total disk space available on the partition where Splunk stores its data (typically under $SPLUNK_HOME/var/lib/splunk). The "memory it is using" would then be the disk space consumed by the indexes, and the "remaining disk space left" would be the free space on that partition. 
 
| rest /services/server/status/partitions-space splunk_server=*
| eval totalGB = round(capacity/1024/1024, 2)
| eval freeGB = round(free/1024/1024, 2)
| eval usedGB = round((capacity - free)/1024/1024, 2)
| table splunk_server, totalGB, usedGB, freeGB

[Screenshot: kiran_panchavat_1-1741676768504.png — output of the search above]

To get the total memory allocated on an indexer and its current usage (which is different from disk space), you can use the following Splunk commands:

For memory information:

| rest /services/server/status/resource-usage/hostwide splunk_server=*
 
[Screenshot: kiran_panchavat_2-1741677125898.png — output of the hostwide resource-usage search]

 

This will show you key metrics including:

  • Total physical memory on the system
  • Memory currently in use
  • Available memory
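
For example, a quick way to put those numbers in a table (a sketch - it assumes the endpoint reports mem and mem_used in MB, which is the usual behaviour):

| rest /services/server/status/resource-usage/hostwide splunk_server=*
| eval totalMemGB = round(mem/1024, 2)
| eval usedMemGB = round(mem_used/1024, 2)
| eval freeMemGB = round((mem - mem_used)/1024, 2)
| table splunk_server, totalMemGB, usedMemGB, freeMemGB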

 If you're specifically interested in Splunk's memory usage:

[Screenshot: kiran_panchavat_3-1741677194105.png]
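
One way to do that is the per-process resource-usage endpoint. A sketch, assuming the mem_used (MB) and process fields are present on your version:

| rest /services/server/status/resource-usage/splunk-processes splunk_server=*
| stats sum(mem_used) as total_mem_used_mb by splunk_server, process
| sort - total_mem_used_mb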

For disk space information (which seems to be what you're actually asking about):

[Screenshot: kiran_panchavat_4-1741677217694.png]

For specific index volume usage:

[Screenshot: kiran_panchavat_5-1741677292311.png]
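
A common way to get this is the data/indexes endpoint. A sketch, assuming the usual currentDBSizeMB and maxTotalDataSizeMB fields:

| rest /services/data/indexes splunk_server=*
| eval currentDBSizeGB = round(currentDBSizeMB/1024, 2)
| eval maxTotalDataSizeGB = round(maxTotalDataSizeMB/1024, 2)
| table splunk_server, title, currentDBSizeGB, maxTotalDataSizeGB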

Note that memory usage and disk space are different resources. Memory refers to RAM available for processing, while disk space refers to storage capacity for data. Your question mentions memory but ends with disk space, so I've provided commands for both.

 
 
Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

SN1
Path Finder

When I run this search it gives 16 GB as totalGB, while our total size is 16 TB.


livehybrid
SplunkTrust

Hi @SN1 

This is because the values from the endpoint are in MB but are being divided by 1024 twice in this search, so they end up in TB.
Try switching 1024/1024 for just 1024 in each occurrence and see if that resolves it for you 🙂
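
For reference, the adjusted search would look something like this (assuming the endpoint reports capacity and free in MB):

| rest /services/server/status/partitions-space splunk_server=*
| eval totalGB = round(capacity/1024, 2)
| eval freeGB = round(free/1024, 2)
| eval usedGB = round((capacity - free)/1024, 2)
| table splunk_server, totalGB, usedGB, freeGB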

Will


gcusello
SplunkTrust

Hi @SN1 ,

in addition to the perfect answer from @kiran_panchavat,

you could install the Splunk_TA_nix add-on (https://splunkbase.splunk.com/app/833) and extract additional information from the Linux system you're using.

Ciao.

Giuseppe

PickleRick
SplunkTrust

Well... TA_nix can be a handful without careful tweaking of what it reports. It's just a bunch of zip-tie and duct-tape connected scripts giving you some relatively unfriendly output, and if you just install it and enable all inputs, it can get noisy.
