I would like to know how to transfer data received from UniversalForwarder from HeavyForwarder to SplunkCloud. I would like to have a configuration like UF -> HF -> SplunkCloud.
Data transfer between the UF and the HF is not the problem; what I don't know is how to forward that data on to SplunkCloud.
To forward data from the UFs to the HFs, put the name or address of the HFs in an outputs.conf file on each UF. See https://docs.splunk.com/Documentation/Forwarder/9.1.0/Forwarder/Configuretheuniversalforwarder for more information. (Replace "Indexer" with "Heavy Forwarder" when reading the instructions.)
For sending from the HF to Splunk Cloud, there is an app you must download from your Splunk Cloud search head. Go to the "Universal Forwarder" app, click the green download button, and install the downloaded app on the HFs. Despite the name, the app can be used on either UFs or HFs.
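For reference, that downloaded app essentially ships an outputs.conf pointing at your cloud stack. A rough, illustrative sketch of what it contains is below — every hostname, app name, and certificate path here is a placeholder, so use the actual app from your stack rather than hand-writing this:

```
# Illustrative only - the real values come from the app downloaded
# from your Splunk Cloud search head
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs1.example.splunkcloud.com:9997
clientCert = $SPLUNK_HOME/etc/apps/100_example_splunkcloud/default/client.pem
sslVerifyServerCert = true
```

The important point is that the app bundles both the forwarding target and the TLS credentials for your specific stack, which is why you download it rather than writing outputs.conf yourself.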
Note, for better resiliency, consider having multiple HFs with each UF load balancing among them.
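As a sketch, the load-balanced UF-to-HF hop could look like this in outputs.conf on each UF (hostnames are hypothetical; 9997 is just the conventional receiving port):

```
[tcpout]
defaultGroup = my_heavy_forwarders

[tcpout:my_heavy_forwarders]
# Listing multiple HFs makes the UF load balance among them automatically
server = hf1.example.com:9997, hf2.example.com:9997
```

Each HF must also be configured to listen on that port, e.g. with a `[splunktcp://9997]` stanza in its inputs.conf.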
No changes need to be made to the data. Just configure the HF as described earlier: download the forwarding app from your Splunk Cloud search head and install it on the HFs.
It is also worth noting that HFs parse data and forward it as parsed to the indexers, whereas UFs forward so-called "cooked" (unparsed) data. This means that:
1) You will see much higher network bandwidth usage when pushing from an HF.
2) Since HFs parse events, and the events are _not_ parsed again on the indexers (the only operation still possible on the indexers in that setup is Ingest Actions), you need to do all parsing on the HFs — which means distributing all your relevant add-ons to the HFs.
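Concretely, parse-time settings like the ones below must be present in the add-ons installed on the HF, because by the time the events reach the cloud indexers they are already parsed (the sourcetype name here is made up for illustration):

```
# props.conf on the HF - parse-time settings (hypothetical sourcetype)
[my_custom:log]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S
```

Search-time settings (field extractions, lookups, etc.), by contrast, belong on the Splunk Cloud search head as usual.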
If you need an intermediate forwarder but don't require parsing before sending the events to the cloud, you can instead install an intermediate UF that receives data from several splunktcp sources and forwards it to the cloud.
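A sketch of that intermediate-UF setup (port is illustrative): the intermediate UF listens for splunktcp traffic from the downstream UFs, and its output side comes from the Splunk Cloud forwarding app installed on it.

```
# inputs.conf on the intermediate UF: receive traffic from downstream UFs
[splunktcp://9997]
disabled = false
```

Because a UF does no parsing, this keeps bandwidth to the cloud lower than the HF option, at the cost of not being able to filter or transform events on the way.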