Or you can look at the metadata to see when a host last sent data. The example below lists hosts that have not sent data in the last day (86400 seconds). This should be significantly quicker than searching through the metrics logs.
| metadata type=hosts | where recentTime < now() - 86400 | eval lastSeen = strftime(recentTime, "%F %T") | fields + host lastSeen
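If you only care about hosts in a particular index, the metadata command also accepts an index argument. A minimal sketch, assuming your data lives in an index named main (swap in your own index name):

| metadata type=hosts index=main | where recentTime < now() - 86400 | eval lastSeen = strftime(recentTime, "%F %T") | fields + host lastSeen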
All hosts (whether or not they are sending their own data) send a heartbeat to the indexer, which is recorded in the _internal index. You can query that to identify whether a host is down.
index=_internal source=*metrics.log group=tcpin_connections earliest=-7d@d
| eval sourceHost=coalesce(hostname, sourceHost)
| eval age = now() - _time
| stats first(age) as age, first(_time) as LastTime by sourceHost
| convert ctime(LastTime) as "Last Active On"
| eval Status = case(age < XXX, "Running", age >= XXX, "DOWN")
Where XXX is the duration in seconds after which, if no heartbeat has been received from a host, the host is considered down. Typically this can be 2-3 minutes (120 or 180).
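As a concrete sketch with a 3-minute (180-second) threshold, the last line can also be written with if() instead of case(), which avoids a null Status when age is exactly equal to the threshold:

| eval Status = if(age < 180, "Running", "DOWN")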
Did it show the host before you shut it down (with the same search)? The query depends on the existence of the hostname or sourceHost field in the events from that host; the coalesced value is used in the stats by clause, so events where both fields are null won't show up.
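Also note that the search can only report on hosts that have at least one event in the time window. If you need hosts that have sent nothing at all to still appear as DOWN, one common workaround is to append a list of expected forwarders. A sketch, assuming a hypothetical lookup file expected_hosts.csv with a sourceHost column listing every host you expect to report (the 180-second threshold is likewise an assumption):

index=_internal source=*metrics.log group=tcpin_connections earliest=-7d@d
| eval sourceHost=coalesce(hostname, sourceHost)
| stats max(_time) as LastTime by sourceHost
| append [| inputlookup expected_hosts.csv | fields sourceHost]
| stats max(LastTime) as LastTime by sourceHost
| eval age = now() - coalesce(LastTime, 0)
| eval Status = if(age < 180, "Running", "DOWN")
| convert ctime(LastTime) as "Last Active On"

Hosts present only in the lookup have a null LastTime, so coalesce(LastTime, 0) gives them a very large age and they show up with a DOWN status instead of disappearing.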
I know this is a bit dated, but I was interested in finding hosts that "suddenly stop reporting to Splunk," and I found this answer.
When I run this search everything looks fine, and it makes sense. But I decided to test it by issuing a stop command on one of my forwarding agents. That device no longer shows up in the list at all, instead of showing up with a "DOWN" status.
Can anyone take a stab at why that would happen? (I haven't altered the search except to fill in seconds where the XXX's are.)