Getting Data In

What are the best searches to monitor data flow activity from the Universal Forwarder to the Heavy Forwarder to the indexer?

Path Finder

Hi,
I would like to monitor Splunk data flow activity. What are the best Splunk searches to monitor data moving from the UF (Universal Forwarder) to the HF (Heavy Forwarder), and from the HF to the indexer?


Re: What are the best searches to monitor data flow activity from the Universal Forwarder to the Heavy Forwarder to the indexer?

SplunkTrust

What exactly would you like to monitor?
There are some pre-built searches in the Monitoring Console (or the DMC, if you are pre-6.5).


Re: What are the best searches to monitor data flow activity from the Universal Forwarder to the Heavy Forwarder to the indexer?

Path Finder

Hi,
I would like to monitor how much data the UF forwards to the HF and how much data Splunk indexes from the HF; also blocking information, duplicate data, UFs that have stopped forwarding, and the data indexed by each indexer.


Re: What are the best searches to monitor data flow activity from the Universal Forwarder to the Heavy Forwarder to the indexer?

SplunkTrust

Let's start from the bottom up:
  • Data indexed by each indexer: navigate to Settings -> Monitoring Console (or DMC) -> Indexing -> License Usage, Previous 30 Days, and use the dropdown to split by indexer (or any other split you would like).
Here is a simple search that will help with other items as well, but let's use it for the split-by-indexer perspective:

earliest=-1d@d latest=@d  index=_internal source=*license_usage.log* type=Usage
| stats sum(b) AS Bytes by splunk_server
| eval GB = Bytes/1024/1024/1024
| table splunk_server GB
| sort -GB
| addcoltotals
  • UF stopped sending data: there are many ways to check that; this answer elaborates: https://answers.splunk.com/answers/798/how-do-i-tell-if-a-forwarder-is-down.html Also, in the MC (or DMC) you can click Forwarders -> Deployment and check the status (Up or Missing), and you can tell which forwarder is a UF and which is an HF. Or you can use the | metadata command and alert on machines that have not sent data in over a set amount of time you determine:

    | metadata index=main type=hosts | eval age = now()-lastTime | where age > (2*86400) | sort age d | convert ctime(lastTime) | fields age,host,lastTime

The age is in seconds, so here 2 x 86400 equals 48 hours.

  • Not sure what exactly you mean by "blocking information", but in the MC (or DMC) navigate to Indexing -> Indexing Performance: Deployment. There you can see the indexing rate split by sourcetype, source, and host, as well as the indexing pipelines and queues.
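If by "blocking" you mean blocked queues, here is a sketch of a search against splunkd's internal metrics.log; the group=queue and blocked=true fields are standard in _internal, but verify the field names against your Splunk version before alerting on this:

    index=_internal source=*metrics.log* group=queue blocked=true
    | stats count by host, name
    | sort -count

A nonzero count for a queue name (e.g. parsingqueue, indexqueue) on a host suggests backpressure somewhere downstream of that host.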

  • Duplicate data: there is a great answer here: https://answers.splunk.com/answers/432/how-do-i-find-all-duplicate-events.html and there are more answers on this portal.
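As a quick sketch of the common approach from those answers (the index name main is just a placeholder; substitute your own), you can group events by their timestamp, host, source, and punctuation pattern and keep only groups that occur more than once:

    index=main
    | stats count by _time, host, source, sourcetype, punct
    | where count > 1
    | sort -count

Note that this flags likely duplicates, not certain ones: distinct events can share the same punct pattern and timestamp, so spot-check the results before deleting anything.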

  • How much data the Heavy Forwarder indexed: again, the MC (or DMC), where you can filter by the HF's name.

  • The amount of data the UF passed to the HF is (supposed to be) equal to the amount of data the HF sent on to Splunk. As mentioned before, split by host or any other field you would like to see in the results. You can also always open MC panels in Search and tweak them to your satisfaction.

Take a look at the search I provided above, run it in Verbose mode, and you will discover interesting fields you can split by:
h = host
s = source
st = sourcetype
idx = index
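For example, the same license-usage search split by the h (host) field gives you per-host volume, which is what you would compare on the UF side versus the HF side; this is just the earlier search with the by-clause swapped:

    earliest=-1d@d latest=@d index=_internal source=*license_usage.log* type=Usage
    | stats sum(b) AS Bytes by h
    | eval GB = round(Bytes/1024/1024/1024, 2)
    | table h GB
    | sort -GB
    | addcoltotals

Be aware that license_usage.log is written on the license master / indexing tier, so very low-volume hosts may be rolled up into a "squashed" entry depending on your configuration.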

Hope it helps.
