Your title and question are confusing. What do you want to find out: the name of the HF that processed a certain event, or the name of the original source host from which the event originated?
What kind of data feed are we talking about here? How is it being sent through the HF? What kind of Splunk input is used? And if you're interested in the original source host: what does the data look like, and does it contain the original host name?
I highly doubt that works well, since the per_x_thruput metrics log entries are incomplete. As the docs note:
Note: The per_x_thruput categories are not complete. Remember that by default metrics.log shows the 10 busiest of each type, for each sampling window. If you have 2000 active forwarders, you cannot expect to see the majority of them in this data. You can adjust the sampling quantity, but this will increase the chattiness of metrics.log and the resulting indexing load and _internal index size. The sampling quantity is adjustable in limits.conf, [metrics] maxseries = num.
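With that sampling caveat in mind, a quick way to see which forwarder hosts show up in the sampled metrics data is something like the following search (a sketch; `per_host_thruput` puts the host name in the `series` field of `metrics.log` events in the `_internal` index):

```
index=_internal source=*metrics.log* group=per_host_thruput
| stats sum(kb) AS total_kb BY series
| sort - total_kb
```

Again, with many forwarders this only shows the busiest ones per sampling window, so absence from this list does not mean a forwarder is not sending.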
As I mentioned in my original comment to your question: that highly depends on how the data is coming in.
If the HF uses file monitor inputs, one thing you can do to allow filtering for data coming through a certain HF is to put the log files that Splunk reads in a folder named after the HF's hostname. That way, the HF's name shows up in the source field.
Another solution is to add a custom metadata field, to explicitly label each event with the HF it passed through.
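For example, for a monitor input on the HF you could attach an indexed field via `_meta` in inputs.conf, and declare it in fields.conf on the search head so it is searched as an indexed field (a sketch; the field name `hf_name`, the path, and the host value `hf.dmz.com` are placeholders for your environment):

```
# inputs.conf on the HF
[monitor:///opt/logs]
_meta = hf_name::hf.dmz.com

# fields.conf on the search head
[hf_name]
INDEXED = true
```

You can then filter with `hf_name=hf.dmz.com` in searches, regardless of what the source paths look like.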
It doesn't all have to be in one folder. Just have the HF name somewhere in the path. For example, say currently you have:
Just create something like /opt/hf.dmz.com/logs/ as a symbolic link to /opt/logs/ and update splunk inputs accordingly. Which results in source values like:
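A minimal sketch of the symlink approach (using a throwaway directory in place of /opt so the commands are safe to try; the hostname hf.dmz.com is taken from the example above):

```shell
# Use a scratch directory in place of /opt for this sketch.
base="$(mktemp -d)"
mkdir -p "$base/opt/logs" "$base/opt/hf.dmz.com"
echo "sample event" > "$base/opt/logs/app.log"

# Symbolic link named after the HF's hostname, pointing at the real log dir:
ln -s "$base/opt/logs" "$base/opt/hf.dmz.com/logs"

# Splunk would then monitor the symlinked path, so the source field
# contains the HF name, e.g. .../hf.dmz.com/logs/app.log
cat "$base/opt/hf.dmz.com/logs/app.log"
```

Remember to update the corresponding `[monitor://...]` stanza in inputs.conf to the symlinked path; the original files are untouched, since the link is just an alias for the directory.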