Recently I noticed that some universal forwarders hang and stop sending logs to the indexer. How can I monitor my Splunk universal forwarders to make sure they are working as expected? I have
index="_internal", but is there a search that would help me create a dashboard or alert?
You have two choices:
use the "DMC Alert - Missing forwarders" alert that you can find in the Distributed Management Console, or create your own alert.
For your own alert, you have to build a lookup listing all the forwarders in your perimeter (e.g. perimeter.csv) and run a search like this (note that host must be lowercased in both branches so the stats by host matches):
| inputlookup perimeter.csv | eval count=0, host=lower(host) | append [ search index=_internal | eval host=lower(host) | stats count by host ] | stats sum(count) AS total by host | where total=0
Using this query you can schedule an alert (e.g. every 5 minutes) or, by adding a rangemap command, visualize the situation in a dashboard panel.
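For the dashboard case, a minimal sketch of the rangemap variant might look like this (the range names "severe" and "low" are just illustrative labels, and perimeter.csv is the example lookup from above; rangemap adds a range field you can color a panel by):

```
| inputlookup perimeter.csv | eval count=0, host=lower(host)
| append [ search index=_internal | eval host=lower(host) | stats count by host ]
| stats sum(count) AS total by host
| rangemap field=total severe=0-0 default=low
```

Hosts with total=0 (no internal events, i.e. a silent forwarder) fall in the "severe" range; everything else maps to "low".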
Thanks Giuseppe for the response. I was looking for something I could get out of index=_internal without adding anything 🙂 but in this case I'd need to modify 1000 agents, which is a bit harder to do in the short term. Appreciate your help though.
Hi raindrop18,
it isn't a problem to manage the lookup, and you don't have to touch the agents at all: you can create a scheduled search (run e.g. every night) that writes its output to the lookup using the outputlookup command.
This is the same approach the DMC uses: its forwarder lookup is updated every 15 minutes.
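A minimal sketch of such a scheduled search, assuming every forwarder has phoned home at least once in the lookback window (the 30-day window is just an example you'd tune):

```
index=_internal earliest=-30d
| eval host=lower(host)
| stats count by host
| fields host
| outputlookup perimeter.csv
```

Scheduled nightly, this keeps perimeter.csv in sync with the forwarders actually seen, so the missing-forwarder alert above never needs a manually maintained list.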