As @jayannah reports, there is no good way to get the forwarder details without access to the
_internal index. However, you can do this:
| metadata type=hosts index=* | eval latestEventSeen=strftime(lastTime,"%x %X") | fields host latestEventSeen
This search will be very fast and doesn't even use the _internal index. However, the forwarder could be connected and just not have any recent data to send, so in this report it might look like it wasn't responding. The next search uses _internal to look for a connection from the forwarder even if no user-viewable events are being sent.
index=_internal source=*metrics.log group=tcpin_connections | eval sourceHost=if(isnull(hostname), sourceHost,hostname) | rename connectionType as connectType | eval connectType=case(fwdType=="uf","univ fwder", fwdType=="lwf", "lightwt fwder",fwdType=="full", "heavy fwder", connectType=="cooked" or connectType=="cookedSSL","Splunk fwder", connectType=="raw" or connectType=="rawSSL","legacy fwder") | eval version=if(isnull(version),"pre 4.2",version) | rename version as Ver arch as MachType | fields connectType sourceIp sourceHost destPort kb tcp_eps tcp_Kprocessed tcp_KBps splunk_server Ver MachType os | eval Indexer= splunk_server | eval Hour=relative_time(_time,"@h") | stats sum(kb) as total_KB by Hour connectType Ver sourceIp sourceHost MachType os Indexer destPort | fieldformat Hour=strftime(Hour,"%x %X") | fieldformat total_KB=tostring(total_KB,"commas") | rename os as OS
I like this search and I use variants of it a lot - but it shows only the forwarders that connected during the time range of the search. What if a host used to be reporting but stopped yesterday? The best way to deal with this is to create a lookup table that contains a list of "expected hosts." For example,
hostname,ip,owner
db_server1,10.0.1.207,FinanceLTD
web037,192.168.7.27,EngineeringDiv
The extra fields aren't required, but they can give you useful context. Set this up as a lookup (via the "Add field lookups" link, or Settings » Lookups).
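If you'd rather define the lookup in configuration files than through the UI, a minimal sketch (assuming the CSV is saved as host_lookup.csv in an app's lookups directory) is a transforms.conf stanza whose name matches the lookup name used in the search that follows:

[host_lookup]
filename = host_lookup.csv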
Now, with the lookup, you can identify missing forwarders:
| inputlookup host_lookup | join type=outer hostname [ search index=_internal source=*metrics.log group=tcpin_connections | stats latest(_time) as mostRecentConnection by hostname | eval mostRecentConnection=strftime(mostRecentConnection,"%x %X") ] | fillnull value="- missing -" mostRecentConnection | table hostname owner mostRecentConnection
This will report on all forwarders in the lookup table, indicating if they have been heard from during the reporting period. It should run decently fast.
Thanks. Some good info. Still, it would be nice for Splunk to just provide a heartbeat log that could be made available to customers (non-internal) at no license cost. I see more and more customers trying to spin up Splunk as a service, and having this data available is important.
If you are using Splunk version 6.2, the "Deployment Monitor" app (included by default with an installation of or upgrade to 6.2) gives forwarder information. If you are using a pre-6.2 release in your perf environment, then install 6.2 in a test environment, go to the "Deployment Monitor" app, and get the search queries that populate the forwarder information in the dashboard.
There is also an app called "Data Onboarding" ( https://apps.splunk.com/app/1769/#/overview ); install it and see if you can find anything useful. From the dashboard you can get the search query and then customize it as needed.
E.g., the query below from the "Data Onboarding" app dashboard gives information about all forwarders:
index="_internal" source="metrics.log" group=tcpin_connections | eval sourceHost=if(isnull(hostname), sourceHost,hostname) | eval connectionType=case(fwdType=="uf","universal forwarder", fwdType=="lwf", "lightweight forwarder",fwdType=="full", "heavy forwarder", connectionType=="cooked" or connectionType=="cookedSSL","Splunk forwarder", connectionType=="raw" or connectionType=="rawSSL","syslog")| eval build=if(isnull(build),"n/a",build) | eval version=if(isnull(version),"pre 4.2",version) | eval guid=if(isnull(guid),sourceHost,guid) | eval os=if(isnull(os),"n/a",os)| eval arch=if(isnull(arch),"n/a",arch) | fields connectionType sourceIp sourceHost sourcePort destPort kb tcp_eps tcp_Kprocessed tcp_KBps splunk_server build version os arch guid
I don't think there is a way to get the forwarder details without the _internal index.
A workaround could be:
1. As an admin user, schedule a search on _internal that writes the forwarder status details to a new index (like a summary index), and make that index accessible to all customers.
2. Customers can then use the new index to get the forwarder details.
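As a minimal sketch of step 1, assuming an admin has already created an index named forwarder_status (the index name is an assumption), the scheduled search could record the last connection time per forwarder and write its results with the collect command:

index=_internal source=*metrics.log group=tcpin_connections | stats latest(_time) as lastConnection by hostname | collect index=forwarder_status

Customers with read access to forwarder_status can then query it directly, for example: index=forwarder_status | stats max(lastConnection) as lastConnection by hostname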
Let me know if this addresses your queries.
Thanks. Looking at something like this:
index=_internal host=* component=TcpOutputProc "INFO TcpOutputProc - Connected to idx" | stats count by host
It looks like this shows that the connection is made to an indexer. Question is - how often does the "INFO TcpOutputProc - Connected to idx" message appear? If it's a low-volume feed, will this message still appear on a regular basis?
Try this query and add it to a dashboard, or have them run the query directly.
| metadata type=hosts index=_internal | search NOT (host="YourSplunkServers" OR ... add more conditions here to exclude your Splunk instances such as search heads, indexers, deployment servers, etc.) | table host, recentTime | eval age=now()-recentTime | eval status=if(age<300,"UP","DOWN") | table host, status
Just replace "index=_internal" with "index=*" if there is no access to the _internal index (assuming the forwarders are sending data frequently).
You can set up an alert for the query below, as you'll always have internal logs coming in from the forwarder:
index=_internal host=myhost | stats count
Schedule an alert for the above query with an alert condition of results = 0, which means that the forwarder is in trouble.