All Apps and Add-ons

Heavy Forwarder Thruput

ephemeric
Contributor

Greetz,

When using the SoS app with _internal indexes forwarded from heavy forwarders, I get no results under S.o.S - Splunk on Splunk > Indexing Performance for "Estimated indexing rate" or "Fill ratio of data processing queues".

Upon inspecting the search, I see that we get no "group=per_sourcetype_thruput" events in metrics.log.

This metric does appear on our indexers, but that's not the thruput we want to see.

We are "indexing" in memory on our heavy forwarders in order to save bandwidth by discarding events at the collector.

We would like to see whether any events are being dropped at the collector, whether queues are blocked, and so on. The reason: we have several inputs from a 100 Mbit LAN into the collector, with output over a 2 Mbit WAN link upstream to our three indexers.

Is it possible to get this from SoS in this configuration?

Thank you.

hexx
Splunk Employee

The lack of per_*_thruput metrics on heavy-weight forwarders is a core Splunk bug that will be fixed in a future release (SPL-68318).


ephemeric
Contributor

Thank you, that's what I was looking for. Confirmed.


hexx
Splunk Employee

No, this is because heavy-weight forwarders do not record per_*_thruput metrics. You should be able to see events recording the queue sizes, though.
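For reference, the queue-size events can be inspected with a search along these lines. This is a sketch based on the standard field names of group=queue events in metrics.log (current_size_kb, max_size_kb, name); the host filter "my_hwf" is a placeholder for your heavy forwarder's hostname:

```
index=_internal source=*metrics.log* group=queue host=my_hwf
| eval fill_perc = round(current_size_kb / max_size_kb * 100, 2)
| timechart avg(fill_perc) by name
```

A sustained high fill percentage on a given queue suggests a bottleneck at or downstream of that queue.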


ephemeric
Contributor

Yes and yes.

I get some search results like "Estimated percentage of total CPU used per Splunk processor".

"per_sourcetype_thruput" is not found anywhere in metrics.log on the heavy forwarder.

I'm thinking the reason is that indexing only happens in memory on the forwarder, with events written to disk on the receiving indexer?


hexx
Splunk Employee

Are you sure that you are forwarding the _internal events from your HWF to your indexers? Also, are you sure that you added your forwarder to the splunk_servers_cache.csv lookup file under the right hostname?


MuS
SplunkTrust

Hi ephemeric,

Check the metrics.log of your heavy forwarder for something like tcpoutput or tcp-output-generic-processor; this is where your data gets sent to the indexer. This happens in the indexQueue, so you would see trouble or blocks on the indexQueue if your WAN link cannot handle the traffic.
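To spot blocked queues directly, a search like this should work (the blocked=true field only appears on group=queue metrics events while a queue is saturated, so any results indicate blocking):

```
index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name
```

If the indexQueue on the heavy forwarder shows up here repeatedly, the WAN link is the likely suspect.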

hope this helps

cheers,

MuS

ephemeric
Contributor

Thank you, let me check...


ephemeric
Contributor

Also, any tips or advice on how to get metrics in this type of setup, with several high-speed LAN inputs forwarded over slow WAN uplinks through a heavy-forwarder Splunk instance, would be appreciated.
