I think the easiest way to get this is to activate the Monitoring Console alert "DMC Alert - Abnormal State of Indexer Processor". It then notifies you, e.g. by email, whenever there is an issue with an indexer.
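If you want to see roughly what that alert checks, here is a minimal sketch of a similar standalone search, assuming the /services/server/introspection/indexer endpoint and the dmc_group_indexer server group that the Monitoring Console sets up (field names like reason may vary by version):

| rest splunk_server_group=dmc_group_indexer /services/server/introspection/indexer
| where status != "normal"
| table splunk_server status reason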
Thank you for the quick responses; I will certainly check on this and let you know. But it is a shame there is no better/more obvious way to do this in Splunk, given that this is an important event.
If you are looking into a Splunk disk space issue, you can use the Monitoring Console for space details:
Splunk >> Settings >> Monitoring Console >> Indexing >> Volume Detail Instance
or try the SPL below:
| rest /services/server/status/partitions-space
| eval usage = capacity - free
| eval pct_usage = round(usage / capacity * 100, 2)
| stats first(fs_type) as fs_type first(usage) as usage first(capacity) as capacity first(pct_usage) as pct_usage by splunk_server, mount_point
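If you want this to alert rather than just report, you could add a threshold filter to the same search; the 90% cutoff below is an arbitrary assumption, so adjust it to your environment and save the search as a scheduled alert that triggers when the result count is greater than zero:

| rest /services/server/status/partitions-space
| eval pct_usage = round((capacity - free) / capacity * 100, 2)
| where pct_usage > 90
| table splunk_server mount_point pct_usage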
If you want to monitor disk space issues on non-Splunk instances, try onboarding perfmon logs; onboarding details are in the Splunk Answers post mentioned below. Then use a search like this:
index=perfmon sourcetype="Perfmon:Free Disk Space" counter="Free Megabytes" instance!="HarddiskVolume*" instance!=_Total
| dedup host, instance
| eval FreeSpace = Value / 1024
| eval GB = tostring(FreeSpace, "commas")
| table host instance GB
| sort + host instance
| rename instance as "Drive Letter" GB as "GigaBytes Free"
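To alert on low space instead of listing every drive, you could filter before formatting; the 10 GB threshold here is an assumption on my part, not anything from the original answer:

index=perfmon sourcetype="Perfmon:Free Disk Space" counter="Free Megabytes" instance!="HarddiskVolume*" instance!=_Total
| dedup host, instance
| eval FreeGB = Value / 1024
| where FreeGB < 10
| table host instance FreeGB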
The cluster master dashboard shows the status of the indexers (via a REST call). You can set up an alert that queries the indexer status at an interval and notifies you when the status changes to AutomaticDetention. See this for REST API endpoint details.
In a search query you could do something like this (run on the cluster master node):
| rest splunk_server=local /services/cluster/master/peers
| table title label status
| where status=="AutomaticDetention"
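To run this on a schedule and get notified, one option is a scheduled alert on the cluster master; here is a minimal savedsearches.conf sketch, where the stanza name, cron interval, and email address are all placeholder assumptions:

# savedsearches.conf on the cluster master (stanza name and values are examples)
[Indexer In Automatic Detention]
search = | rest splunk_server=local /services/cluster/master/peers | where status=="AutomaticDetention" | table title label status
enableSched = 1
cron_schedule = */15 * * * *
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
action.email = 1
action.email.to = splunk-admins@example.com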