How can we get the health status of the HFs, UFs, and IHFs that are connected to the DS? Using REST I am able to see the health (the red/yellow/green status) for the MC, CM, LM, DS, Deployer, IDX, etc., but not for the forwarders.
The REST search I am using is | rest /services/server/health. Running it on the MC I can see the health status of the MC, CM, LM, DS, Deployer, and IDX, but not of the forwarders. However, when I open the UI of any of the HFs and run the same query there, I can see its health results.
Hi @Praz_123
As described by @PickleRick and @isoutamo, it is sometimes possible to add these to the MC, but it is not always practical and is a bit hacky!
If you want a high-level view of a forwarder, you can query health.log with the following SPL:
index=_internal host=yourForwarderHost source="*/var/log/splunk/health.log" | stats latest(color) as color by feature, node_path, node_type, host
If you have a number of forwarders to monitor, you could adapt this to score the colours and show only the worst per host, as in the sketch below.
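For example, a rough sketch along these lines (untested; the host filter and the numeric scoring are just illustrative assumptions) would give one row per forwarder with its worst colour:

index=_internal host=yourForwarderHost* source="*/var/log/splunk/health.log"
| stats latest(color) as color by feature, node_path, node_type, host
| eval score=case(color="red", 3, color="yellow", 2, color="green", 1, true(), 0)
| stats max(score) as worst_score by host
| eval worst_color=case(worst_score=3, "red", worst_score=2, "yellow", worst_score=1, "green", true(), "unknown")
| sort - worst_score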
@livehybrid, @PickleRick, @isoutamo
I need the health status of the HFs when running the query. There are more than 5 HFs, and when I run the query on each HF individually I get results. However, I can't create a single alert that covers all HFs; done this way, it would result in more than 5 separate alerts, one for each HF.
If I run the same query on the LM, I can see the status of all components in one go; can't the same be possible for the HFs and IHFs?
Hi @Praz_123
To access the HFs via REST you need to make sure they are set up in the MC, and you also need to be able to reach their REST endpoints.
If you just want to see the health by host, you can try the following, which reports hosts with red health checks:
index=_internal host=* source="*/var/log/splunk/health.log"
| stats latest(color) as color by feature, node_path, node_type, host
| stats values(node_path) as node_path by color, host, node_type
| where color="red"
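If you schedule this as one alert that triggers when the number of results is greater than zero, a single saved search covers all of your HFs at once, instead of needing a separate alert per host.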
You can use the splunk_server_group argument of the rest command to dispatch it to a defined group of servers. See https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Distributedsearchgroups
But the user running the search must have the dispatch_rest_to_indexers capability (if I recall the name correctly).
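For example, something like this (assuming the HFs have been added to the MC so that they belong to a group; dmc_group_indexer is one of the groups the MC creates, a custom group name works the same way, and the exact fields returned can vary by version):

| rest /services/server/health splunk_server_group=dmc_group_indexer
| table splunk_server title health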
The MC doesn't normally monitor forwarders directly. It can do indirect monitoring by checking their logs in the _internal index.
Sometimes people add HFs to the MC with the indexer role, but AFAIR it causes false alerts since HFs don't actually do any indexing.
As @PickleRick said, many of us add those to the MC as indexers. I also add several additional custom groups to them. This helps me avoid those false alerts while still getting the real status and statistics from the actual indexers, by selecting the correct group on dashboards. There is an idea on ideas.splunk.com to add a dedicated role for HFs in the MC: https://ideas.splunk.com/ideas/EID-I-73. It seems to be a future prospect, so maybe we will finally get this into the MC.
Currently, UFs don't listen on the REST API from the network by default. I haven't tried to enable it and query them that way, as I haven't seen any benefit in doing so. You can see them well enough on the forwarder management page. Another reason is that they don't collect some introspection metrics by default, and some metrics cannot be collected without adding separate TAs to them.
To add a bit of context to what's already been said: while most of the "other" Splunk components should be able to communicate with each other (or at least should be reachable), forwarders are often in remote sites and environments completely separate from the "main" Splunk infrastructure, so in many cases querying them directly doesn't make much sense.
So yes, for _some_ HFs a separate role could be beneficial, but there can be many HFs (and most UFs) to which you simply have no access.
And that's also why app management with the DS works in pull mode: you serve your apps from the DS, but it's the deployment clients (usually forwarders) that pull their apps from the DS, and you have no way of forcing them to do so. They have an interval at which they "phone home", and that's it.
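For reference, that interval is set in deploymentclient.conf on each deployment client; a minimal sketch (the targetUri value is a placeholder for your own DS, and 60 seconds is the default):

[target-broker:deploymentServer]
targetUri = yourDeploymentServer:8089
phoneHomeIntervalInSecs = 60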