We have an indexer cluster deployment, and we recently added additional memory to all the indexers.
In the Splunk Monitoring Console, the new disk space details are not updated.
The REST endpoint '/services/server/status/partitions-space' is not returning the updated disk space details either.
We are using the SPL below to fetch disk space details in the Monitoring Console dashboard:
| rest splunk_server=* /services/server/status/partitions-space
| eval free = if(isnotnull(available), available, free)
| eval usage = round((capacity - free) / 1024, 2)
| eval capacity = round(capacity / 1024, 2)
| eval compare_usage = usage." / ".capacity
| eval pct_usage = round(usage / capacity * 100, 2)
| stats first(fs_type) as fs_type first(compare_usage) as compare_usage first(pct_usage) as pct_usage by mount_point
| rename mount_point as "Mount Point", fs_type as "File System Type", compare_usage as "Disk Usage (GB)", pct_usage as "Disk Usage (%)"
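For reference, the raw per-indexer values behind this dashboard can be inspected with a simpler query. It uses the same endpoint and the same fields the dashboard query already relies on (capacity and free come back from the endpoint in MB), and makes it easy to spot which peers, if any, are still reporting the old capacity:
| rest splunk_server=* /services/server/status/partitions-space
| table splunk_server, mount_point, fs_type, capacity, free, available
| sort splunk_server, mount_point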
When I run the "df" command on an indexer, the correct disk space details are displayed as below. But when we run the above SPL, it displays the old value (2.5 TB) from before the expansion, not the 4 TB shown on the indexer itself.
df -h | grep splunk
4.0T 1.6T 2.4T 41% /opt/splunk
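For completeness, the endpoint can also be queried directly on an indexer and compared with df; if splunkd itself returns the old capacity, the dashboard is not at fault. A quick check, assuming the default management port 8089 and an admin login (curl will prompt for the password):
# Run on an indexer; -k skips certificate verification.
curl -k -u admin "https://localhost:8089/services/server/status/partitions-space?output_mode=json"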
This may be a dumb question, but did you restart the indexers after adding storage?
Not sure why it would be a dumb question; it is certainly taking us some effort to understand why the correct details are not updated at that REST URI path.
I think the indexer cluster was not restarted after the disk space expansion; it happened before I joined the organization.
Would restarting the indexer cluster solve this, so that we would see the correct data in the Monitoring Console?
When something is not working correctly, turning it off and on again is often the fix. Try restarting one indexer to see if it then starts reporting the right disk space.
A change in memory would not affect the disk space measurements.
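For reference, a single peer can be restarted with the splunk CLI, and all peers can be cycled from the cluster manager one batch at a time so the cluster stays up (paths assume a default /opt/splunk install):
# Restart one indexer:
/opt/splunk/bin/splunk restart
# Or, on the cluster manager, rolling-restart all peers:
/opt/splunk/bin/splunk rolling-restart cluster-peers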