Any help will be appreciated. I have been trying for a long time to figure out how to check for missing forwarders.
Hi Rocky31,
In the Splunk Distributed Management Console (DMC) there is a missing forwarders alert that you can enable.
Otherwise you could create a lookup (e.g. called perimeter.csv, with a field called host) containing the list of all forwarders to check, and schedule an alert with this search (e.g. every five minutes):
index=_internal
| eval host=upper(host)
| stats count by host
| append [ | inputlookup perimeter.csv | eval host=upper(host), count=0 | fields host count ]
| stats sum(count) AS Total by host
| where Total=0
In this way you get all the hosts in your lookup that didn't send logs to the indexers in the period, so you can set up an alert that sends an email or triggers other actions.
Using the same search (without the last line, | where Total=0) you can build a dashboard that shows the status of your forwarders, which you can also display graphically; see the sketch below.
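For example, a minimal sketch of that dashboard panel, assuming the same perimeter.csv lookup (the status labels are just illustrative):
index=_internal
| eval host=upper(host)
| stats count by host
| append [ | inputlookup perimeter.csv | eval host=upper(host), count=0 | fields host count ]
| stats sum(count) AS Total by host
| eval status=if(Total=0, "MISSING", "OK")
| table host status Total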
Bye.
Giuseppe
Please check these search queries:
The following search works in 3.4.5 and finds all hosts that haven't sent a message in the last 24 hours:
| metadata type=hosts | eval age = strftime("%s","now") - lastTime | search age > 86400 | sort age d | convert ctime(lastTime) | fields age,host,lastTime
and in 4.0:
| metadata type=hosts | eval age = now() - lastTime | search age > 86400 | sort age d | convert ctime(lastTime) | fields age,host,lastTime
Another 4.0 variant:
| metadata type=hosts | sort recentTime desc | convert ctime(recentTime) as Recent_Time
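If you want the same 24-hour cutoff applied to recentTime, a sketch along the same lines (86400 seconds = 24 hours):
| metadata type=hosts | eval age = now() - recentTime | where age > 86400 | sort age d | convert ctime(recentTime) | fields host, age, recentTime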
What exactly do you want to check? Forwarders not sending data?
I want to check whether it is up and running and sending data; basically, I want to diagnose it.
Do you have the DMC set up in your instance? It has dashboards for forwarder monitoring.
http://docs.splunk.com/Documentation/Splunk/6.6.1/DMC/ForwardersDeployment
Another option: unless you disable it, the forwarder sends its internal log events, so you can check whether you're receiving them. No events means the forwarder is not running OR not sending data.
index=_internal sourcetype=splunkd host=YourFwdName earliest=-15m
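If you want to check all forwarders at once rather than a single host, a minimal sketch (the time window and the threshold column are just illustrative):
index=_internal sourcetype=splunkd earliest=-24h
| stats latest(_time) AS lastSeen by host
| eval minutesSinceLast = round((now() - lastSeen) / 60)
| convert ctime(lastSeen)
| sort - minutesSinceLast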
Another option: if you're using a deployment server to configure apps on your forwarders, you can check the phonehome events. See this post for a search on phonehome events (a rough sketch also follows the link):
https://answers.splunk.com/answers/208607/how-to-determine-if-forwarder-is-phoning-home-to-d.html
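A minimal sketch of such a check, run on the deployment server (assuming phonehome requests land in _internal with sourcetype splunkd_access; field names may vary by version):
index=_internal sourcetype=splunkd_access phonehome
| stats latest(_time) AS lastPhonehome by clientip
| eval minutesSince = round((now() - lastPhonehome) / 60)
| convert ctime(lastPhonehome)
| sort - minutesSince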