Getting Data In

How do you know if a forwarder isn't forwarding?

mfalk
Engager

What's a best practice way to determine if a forwarder isn't forwarding?

We have a setup of about 100 hosts all forwarding to a single indexer. How can I be sure that one of the forwarders hasn't stopped forwarding for some reason? I can think of a couple of options like running a saved search and checking if the count of events = 0.

What is everyone else doing?
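For reference, the saved-search idea mentioned above might be sketched like this (the index and host name are placeholders, not a known-good configuration):

```
index=* host=<forwarder_host> earliest=-60m@m latest=now
| stats count
```

Schedule it hourly and trigger an alert when count equals 0. This only covers one host per search, which is why a metadata-based search that checks all hosts at once (as in the accepted answer below) scales better for ~100 forwarders.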

1 Solution

mfalk
Engager

Looks good. I'll check out the monitor app.

In the short term I've come up with:

| metadata type=hosts | search host=#HostICareAbout# OR host=#HostICareAbout# | eval mytime=strftime(recentTime, "%y-%m-%d %H:%M:%S") | eval currentTime=strftime(now(), "%y-%m-%d %H:%M:%S") | eval minutesAgo=round((now()-recentTime)/60, 0) | table host, lastTime, recentTime, mytime, currentTime, minutesAgo | where abs(minutesAgo) > 60

This query returns a list of hosts I care about that haven't sent any events within the last 60 minutes (the abs() is for detecting hosts in other time zones that aren't properly configured). We're thinking of adding a local Splunk metric file to be monitored, so that even if a system has nothing to forward it will still forward an entry.

We're trying to figure out a simple file to monitor that won't impact our indexing volume.
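If we go the heartbeat route, one low-volume option is a scripted input instead of a monitored file: a tiny script that prints one line, run on an interval. A minimal sketch (the app name, script path, and index here are assumptions, not our actual setup):

```
# inputs.conf in a hypothetical app, e.g.
# $SPLUNK_HOME/etc/apps/heartbeat_app/local/inputs.conf
[script://$SPLUNK_HOME/etc/apps/heartbeat_app/bin/heartbeat.sh]
interval = 300
sourcetype = heartbeat
index = main
disabled = false
```

The script itself just emits a timestamped line to stdout (Splunk indexes a scripted input's stdout), e.g. `echo "heartbeat host=$(hostname) epoch=$(date +%s)"`. One event every 5 minutes per host is negligible indexing volume, and it guarantees every forwarder produces at least some data for the metadata search to see.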



Damien_Dallimor
Ultra Champion

Use the Splunk Deployment Monitor App.

Refer to this other post I recently answered.
