Hi all,
I have been trying to identify a list of the current forwarders that are sending data to our single Splunk indexer. Is there a section within Splunk where I can find this or even a search query?
Thanks in advance.
Anu
Here is a search that I often use to check on how much data is being sent per hour, by forwarder.
index=_internal source=*metrics.log group=tcpin_connections
| eval sourceHost=if(isnull(hostname), sourceHost,hostname)
| rename connectionType as connectType
| eval connectType=case(fwdType=="uf","univ fwder", fwdType=="lwf", "lightwt fwder",fwdType=="full", "heavy fwder", connectType=="cooked" or connectType=="cookedSSL","Splunk fwder", connectType=="raw" or connectType=="rawSSL","legacy fwder")
| eval version=if(isnull(version),"pre 4.2",version)
| rename version as Ver
| fields connectType sourceIp sourceHost destPort kb tcp_eps tcp_Kprocessed tcp_KBps splunk_server Ver
| eval Indexer= splunk_server
| eval Hour=relative_time(_time,"@h")
| stats avg(tcp_KBps) sum(tcp_eps) sum(tcp_Kprocessed) sum(kb) by Hour connectType sourceIp sourceHost destPort Indexer Ver
| fieldformat Hour=strftime(Hour,"%x %H")
Just copy this search and paste into your search box - and pick a relatively short time period (like last 24 hours or less). It should run on any Splunk 4.2 or newer. It might work on older versions, but I am not sure...
You could change the stats command if you wanted a slightly different output. For example, replace the last 3 lines with the following to get an overall summary by forwarder, rather than hour-by-hour statistics:
| stats avg(tcp_KBps) sum(tcp_eps) sum(tcp_Kprocessed) sum(kb) by connectType sourceIp sourceHost destPort Indexer Ver
I originally found this search as part of the Splunk Deployment Monitor. I've been tweaking it ever since.
Here is one with the OS versions too (Windows only):
index=_internal sourcetype=splunkd group=tcpin_connections | stats first(version) by hostname
|rename hostname as host
|join host [
search index=windows_desktop sourcetype=WinHostMon vendor_product="*"
]
|rename host as Hostname, vendor_product as OS, Version as "OS Version", first(version) as "Splunk Version"
|table Hostname, OS, "OS Version", "Splunk Version"
|sort "OS Version"
Maybe it would be better to use something other than vendor_product; this is just a quick and dirty solution 🙂
Hello, I use this search:
index=_internal sourcetype=splunkd group=tcpin_connections | stats first(version) by hostname
Thank you. Gave me exactly what I needed.
try this:
index=_internal sourcetype=splunkd destPort!="-"
| stats sparkline count by hostname, sourceHost, host, destPort, version
| rename destPort as "Destination Port", host as "Indexer", sourceHost as "Forwarder IP", version as "Splunk Forwarder Version", hostname as "Forwarder Host Name", sparkline as "Traffic Frequency"
| sort - count
Hello,
You can use an app like Deployment Monitor or S.o.S (Splunk on Splunk).
Regards
Today, I would recommend that you use the Distributed Management Console (DMC). It is built-in and works very well.
But sometimes it is nice to have a search like one of these - which will let you look at the forwarders over any time span and allows you to set any criteria that you want. That's particularly useful if you have thousands of forwarders and are only interested in a subset of them.
I understand this is a very old post, but I had the same requirement and wanted ONLY the forwarders. I figured out that this search works, so I thought I would post the answer; it might be useful for someone else. It may be a roundabout way of getting it, but it does the job.
| metadata type=hosts
| fields host
| rename host AS splunk_server
| search NOT [| rest splunk_server=local /services/licenser/messages | fields splunk_server | dedup splunk_server | fields + splunk_server]
| rename splunk_server AS host
The trouble with this, is that the hosts listed will be the name of the host specified in inputs.conf; that might or might not match the actual forwarder names.
By using the _internal index, you see the actual IP address and server name of the forwarder.
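To see both side by side, here is a minimal sketch along the lines of the metrics.log searches above (it reuses the same hostname, sourceIp, and fwdType fields; fwdType may be empty for non-Splunk senders):

index=_internal source=*metrics.log group=tcpin_connections
| stats latest(sourceIp) as sourceIp latest(fwdType) as fwdType by hostname
| table hostname sourceIp fwdType

This lists each forwarder's self-reported hostname next to the IP address the indexer actually saw the connection come from.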
JoeIII - thanks for this, it was just what I needed to search for forwarders that needed to be upgraded. Just one typo - sourceIP should be sourceIp:
index=_internal source=*metrics.log group=tcpin_connections | eval sourceHost=if(isnull(hostname), sourceHost,hostname) | dedup sourceHost | table sourceHost sourceIp os version | sort version
I just wanted to thank you - I modified your search to help me find out of date forwarders:
index=_internal source=*metrics.log group=tcpin_connections | eval sourceHost=if(isnull(hostname), sourceHost,hostname) | dedup sourceHost | table sourceHost sourceIP os version | sort version
Thanks for sharing this, but when we use this search string we get duplicates: each forwarder appears twice in the results, once under its hostname and once under its IP address. Also, the rows keyed by hostname do not populate the IP address column.
I used... this search (above)... and added:
| stats count as vercount by version
As one of our execs wanted to know how many of each version we were running.
I would suggest a query against the metadata, using the search
| metadata type="hosts"
This should list the various hosts delivering events to you.
If you just want the splunk forwarders you can try the following shell command:
splunk cmd btool inputs list splunktcp
If you are forwarding _internal indexes from the forwarders, then the data should all be in the _internal index on your indexer.
forwardedindex.filter.disable = true in outputs.conf would achieve this.
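For reference, a minimal outputs.conf sketch on the forwarder side, assuming a plain tcpout setup (the group name and indexer address here are placeholders):

[tcpout]
defaultGroup = my_indexers
forwardedindex.filter.disable = true

[tcpout:my_indexers]
server = indexer.example.com:9997

With the forwardedindex filter disabled, the forwarder's own _internal events are sent to the indexer along with everything else.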
The deployment monitor app would then show you all forwarders out of the box.
Thanks for that input. I am not sure if we are forwarding _internal indexes from the forwarders (I'm fairly new to Splunk and am still learning my way around the software), but I will investigate and try it out.
Cheers
The universal forwarder does not have indexes. But it does forward its internal logs by default - so the effect is the same. And you don't need to do anything to get it.
If you are using a heavy forwarder, you will need to configure it to forward rather than index. The following documentation is written for a search head, but the settings for a heavy forwarder are exactly the same.
Best practice: Forward search head data into the indexing layer
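For a heavy forwarder, the relevant outputs.conf sketch would look like this (stanza and attribute names as in the standard outputs.conf spec; the group name and indexer address are placeholders):

[indexAndForward]
index = false

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer.example.com:9997

Setting index = false keeps the heavy forwarder from writing events into its own local indexes, so it only forwards them.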