By "forwarding to Hunk" I assume you mean placing the data on HDFS (or another Hadoop-compatible file system) to be searched via a virtual index?
The most common workflow is to add the log files to a regular Splunk index (using any input method, such as a Splunk forwarder) and configure that index to archive to HDFS. This way the Splunk-managed index gives you fast performance on the most recent data, while Hadoop (via Hunk) lets you search the much larger pool of older data. You can find more information about getting data into Splunk indexes here:
http://docs.splunk.com/Documentation/Splunk/6.4.3/Data/Getstartedwithgettingdatain
You can find documentation about archiving here:
https://docs.splunk.com/Documentation/Hunk/6.4.3/Hunk/ArchivingSplunkindexes
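As a rough sketch of what the archiving setup looks like, here is an illustrative indexes.conf fragment. The stanza names, hosts, paths, and the age threshold are all examples, not values from your environment, so treat this as a starting point and check it against the archiving docs linked above:

```ini
# indexes.conf (illustrative) -- define a Hadoop provider and an archive
# that copies buckets older than a threshold from a Splunk index to HDFS.

[provider:my-hadoop-provider]          # example provider name
vix.family = hadoop
vix.env.JAVA_HOME = /usr/lib/jvm/java  # adjust for your hosts
vix.env.HADOOP_HOME = /usr/lib/hadoop
vix.fs.default.name = hdfs://namenode:8020  # example NameNode address

[myarchive]                            # example archive index name
vix.provider = my-hadoop-provider
vix.output.buckets.from.indexes = main # Splunk index to archive from
vix.output.buckets.older.than = 2      # archive buckets older than 2 days
vix.output.buckets.path = /splunk-archive/main  # destination in HDFS
```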
If you want to copy the data directly to HDFS without first adding it to a regular Splunk index, you cannot currently do this via a Splunk forwarder. There are a number of third-party tools that can do this, e.g. Apache Flume (https://flume.apache.org/).
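For the direct-to-HDFS route, a minimal Flume agent configuration might look like the sketch below. It watches a spooling directory for completed log files and writes them to HDFS; the agent name, directory, and NameNode address are placeholder values you would replace with your own:

```ini
# flume.conf (illustrative) -- tail completed log files from a spool
# directory and deliver them to HDFS as plain text.

agent1.sources = logsrc
agent1.channels = memch
agent1.sinks = hdfssink

# Source: read files dropped into a spooling directory
agent1.sources.logsrc.type = spooldir
agent1.sources.logsrc.spoolDir = /var/log/myapp/spool  # example path
agent1.sources.logsrc.channels = memch

# Channel: in-memory buffer between source and sink
agent1.channels.memch.type = memory
agent1.channels.memch.capacity = 10000

# Sink: write events to HDFS, partitioned by date
agent1.sinks.hdfssink.type = hdfs
agent1.sinks.hdfssink.channel = memch
agent1.sinks.hdfssink.hdfs.path = hdfs://namenode:8020/data/logs/%Y-%m-%d
agent1.sinks.hdfssink.hdfs.fileType = DataStream
agent1.sinks.hdfssink.hdfs.writeFormat = Text
```

Note that data landed this way would be searched through a Hunk virtual index rather than a native Splunk index.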