Getting Data In

How can I provide the status of a forwarder to a customer?



I have some customers who do not have access to their servers and would like the ability to validate that the forwarder is running. Is there a way to do this?



As @jayannah reports, there is no good way to get the forwarder details without access to the _internal index. However, you can do this:

  • create a search against the _internal index that summarizes the data in the way that you want to display it
  • save the search as a scheduled search (perhaps run it once per hour)
  • set the search up as an "embedded search" - this creates a link that you can give to your users - they can see the latest search results via the link even if they have no access to the underlying data

Here is one possibility for the search:

| metadata type=hosts index=* | eval latestEventSeen=strftime(lastTime,"%x %X") | fields host latestEventSeen

This search will be very fast and doesn't even use the _internal index. However, the forwarder could be connected and just not have any recent data to send - in this report it might look like it wasn't responding. The next search uses _internal to look for a connection from the forwarder even if no user-viewable events are being sent.

index=_internal source=*metrics.log group=tcpin_connections
| eval sourceHost=if(isnull(hostname), sourceHost,hostname) 
| rename connectionType as connectType
| eval connectType=case(fwdType=="uf","univ fwder", fwdType=="lwf", "lightwt fwder",fwdType=="full", "heavy fwder", connectType=="cooked" or connectType=="cookedSSL","Splunk fwder", connectType=="raw" or connectType=="rawSSL","legacy fwder")
| eval version=if(isnull(version),"pre 4.2",version)
| rename version as Ver  arch as MachType
| fields connectType sourceIp sourceHost destPort kb tcp_eps tcp_Kprocessed tcp_KBps splunk_server Ver MachType os
| eval Indexer= splunk_server
| eval Hour=relative_time(_time,"@h")
| stats sum(kb) as total_KB by Hour connectType Ver sourceIp sourceHost MachType os Indexer destPort
| fieldformat Hour=strftime(Hour,"%x %X") | fieldformat total_KB=tostring(total_KB,"commas") | rename os as OS

I like this search and I use variants of it a lot - but it shows only the forwarders that connected during the time range of the search. What if a host used to be reporting but stopped yesterday? The best way to deal with this is to create a lookup table that contains a list of "expected hosts." For example,


The extra fields don't really matter, but they can give you good info... Set this up as a lookup (Add field lookups link)
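A minimal sketch of what the lookup CSV might contain - the hostname and owner fields are the ones used in the search below; the location column and the specific values are purely illustrative:

hostname,owner,location
webserver01,alice,datacenter-east
dbserver02,bob,datacenter-west
appserver03,carol,datacenter-east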

Now, with the lookup, you can identify missing forwarders:

| inputlookup host_lookup 
| join type=left hostname [ search index=_internal source=*metrics.log group=tcpin_connections
    | stats latest(_time) as mostRecentConnection by hostname 
    | eval mostRecentConnection=strftime(mostRecentConnection,"%x %X") ]
| fillnull value="- missing -" mostRecentConnection
| table hostname owner mostRecentConnection

This will report on all forwarders in the lookup table, indicating if they have been heard from during the reporting period. It should run decently fast.


Thanks. Some good info. Still, it would be nice for Splunk to just provide a heartbeat log that could be made available to customers (outside the internal indexes) at no license cost. I see more and more customers trying to spin up Splunk as a service, and having this data available is important.



If you are using Splunk version 6.2, the "Deployment Monitor" app (included by default with an installation or upgrade to 6.2) gives forwarder information. If you are running a pre-6.2 release in your production environment, install 6.2 in a test environment, go to the "Deployment Monitor" app, and copy the search queries that populate the forwarder information in the dashboard.

There is also an app called "Data Onboarding"; install it and see whether it surfaces any useful information for you. From its dashboards you can grab the search query, then customize it as needed.

For example, the query below from a "Data Onboarding" app dashboard gives information about all forwarders:

index=_internal source=*metrics.log group=tcpin_connections
| eval sourceHost=if(isnull(hostname), sourceHost, hostname)
| eval connectionType=case(fwdType=="uf","universal forwarder", fwdType=="lwf","lightweight forwarder", fwdType=="full","heavy forwarder", connectionType=="cooked" or connectionType=="cookedSSL","Splunk forwarder", connectionType=="raw" or connectionType=="rawSSL","syslog")
| eval build=if(isnull(build),"n/a",build)
| eval version=if(isnull(version),"pre 4.2",version)
| eval guid=if(isnull(guid),sourceHost,guid)
| eval os=if(isnull(os),"n/a",os)
| eval arch=if(isnull(arch),"n/a",arch)
| fields connectionType sourceIp sourceHost sourcePort destPort kb tcp_eps tcp_Kprocessed tcp_KBps splunk_server build version os arch guid


Thanks. These are both useful for admins, but they won't work for customers who don't have access to the internal indexes.



I don't think there is a way to get forwarder details without the _internal index.
A workaround could be:
1. As an admin user, schedule a search on _internal that writes forwarder status details to a new index (like a summary index), and make that new index accessible to all customers.
2. Customers can then use the new index to check forwarder status.

Let me know if this addresses your question.
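A minimal sketch of step 1, assuming an index named forwarder_status has been created and granted to the customer role (both the index name and the schedule are assumptions):

index=_internal source=*metrics.log group=tcpin_connections
| eval hostname=coalesce(hostname, sourceHost)
| stats latest(_time) as lastSeen by hostname
| collect index=forwarder_status

Save this as a scheduled search (e.g. every 15 minutes); the collect command writes the results into the target index as summary events, which customers can then search directly.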



Thanks. Looking at something like this:

index=_internal host=* component=TcpOutputProc "INFO TcpOutputProc - Connected to idx" | stats count by host

It looks like this shows that the connection is made to an indexer. Question is - how often does the "INFO TcpOutputProc - Connected to idx" message appear? If it's a low-volume feed, will this message still appear on a regular basis?
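One way to check the message frequency empirically is to chart its occurrences over time; a sketch, where the host value is illustrative:

index=_internal component=TcpOutputProc "Connected to idx" host=myforwarder
| timechart span=1h count

If the count is nonzero only at forwarder restarts, the message is not a reliable ongoing heartbeat.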



You might check, but I think that you will only see this once each time the forwarder starts.



Try adding this query to a dashboard, or have them run it directly:

| metadata type=hosts index=_internal
| search NOT (host="YourSplunkServers" OR ...add more conditions to exclude your own Splunk instances, e.g. search heads, indexers, deployment servers)
| table host, recentTime
| eval age=now()-recentTime
| eval status=if(age<300,"UP","DOWN")
| table host, status

Just replace "index=_internal" with "index=*" if they have no access to the _internal index (assuming the forwarders are sending data frequently).


You can set up an alert on the query below, since you'll always have internal logs coming in from the forwarder:

index=_internal host=myhost | stats count 

Schedule an alert on the above query with an alert condition of results = 0, which means the forwarder is in trouble.
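The same alert can be captured in savedsearches.conf; a hedged sketch, where the stanza name, host, and schedule are illustrative:

[forwarder_heartbeat_myhost]
search = index=_internal host=myhost | stats count
enableSched = 1
cron_schedule = */15 * * * *
alert_type = number of events
alert_comparator = equal to
alert_threshold = 0
actions = email

With this in place, an email fires whenever the scheduled run returns zero events, i.e. the forwarder has gone quiet.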


I want them to have the ability to see that it's actually running, without relying on us to set something up. I was hoping for a REST call or something...
