Getting Data In

How archiving splunk logs to hdfs work

New Member

I know that Hunk issues the archivebuckets command, which will start the archiving process on each indexer.
What is the archiving process?

Do all indexer machines need access to hadoop (hdfs)?
Does Hunk copy the files locally and then to HDFS?

Please explain exactly what happens in this process.


Splunk Employee

Do all indexer machines need access to hadoop (hdfs)?
Yes — they need network access to the Hadoop cluster, plus Java and the Hadoop client libraries (i.e., you should be able to successfully run hadoop fs -copyFromLocal .... from every indexer).
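A quick sanity check you could run on each indexer (a sketch only — it just confirms the binaries are on the PATH; the HDFS write test in the trailing comment is a suggestion with made-up paths):

```shell
#!/bin/sh
# Sketch: verify an indexer has the binaries the archiver needs.
# Only checks PATH presence, not versions or cluster reachability.
check_prereqs() {
  for cmd in java hadoop; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "found: $cmd"
    else
      echo "missing: $cmd"
    fi
  done
}

check_prereqs
# If both are found, also confirm HDFS write access, for example:
#   hadoop fs -copyFromLocal /tmp/somefile /tmp/hunk_write_test
```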

Does Hunk copy the files locally and then to HDFS?
No, the buckets are copied from the indexers directly to HDFS.


Splunk Employee

Archiving indexes from Splunk to HDFS requires a bucket to be in the warm or cold stage, so you can either wait a day or restart Hunk.
The Splunk_Archiver app (new with Hunk 6.2.1) is distributed to all the indexers via bundle replication.
Every 60 minutes, the app on each indexer executes the search | archivebuckets. This command triggers bucket copying from the indexer to HDFS (similar to hadoop fs -put BUCKET /HDFS).
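A rough sketch of what that per-bucket copy amounts to (this is not the actual archiver code; the directory layout follows Splunk's usual db/db_* and colddb/db_* bucket naming, and all paths are illustrative assumptions):

```shell
#!/bin/sh
# Sketch only: print the per-bucket copy commands the archiving step
# roughly corresponds to. Warm buckets live under db/db_*, cold
# buckets under colddb/db_*; hot buckets (hot_*) are not eligible.
list_archive_cmds() {
  index_dir=$1
  hdfs_dest=$2
  for bucket in "$index_dir"/db/db_* "$index_dir"/colddb/db_*; do
    [ -d "$bucket" ] || continue   # glob matched nothing, or not a dir
    echo "hadoop fs -put $bucket $hdfs_dest/$(basename "$bucket")"
  done
}

# Example with made-up paths:
list_archive_cmds /opt/splunk/var/lib/splunk/defaultdb /archive/main
```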


Influencer

The announcement blog indicates all indexers need Java and Hadoop clients so I would guess that they're talking directly to HDFS.


New Member

Which ports does Hunk use to connect to the Splunk indexers?

What Splunk version needs to be installed on the indexer machines to support archiving indexes?
What else needs to be installed on the Splunk indexer machines?


Splunk Employee

To connect to indexers, the same Splunk Enterprise ports are needed (8089 by default). For Hadoop, the ports depend on the Hadoop distro: indexers need access to the NameNode (and Secondary NameNode, usually port 8020) and to the DataNode transfer port (usually 50010). The indexers' Splunk version doesn't matter, but you need Java and the Hadoop client libraries on the indexers.
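A minimal connectivity sketch you could run from each indexer (the host names are placeholders for your environment; the /dev/tcp redirection needs bash, and nc -z would work just as well):

```shell
#!/bin/bash
# Sketch: probe the ports listed above from an indexer.
# Host names below are placeholders, not real services.
check_port() {
  host=$1; port=$2
  # The subshell closes fd 3 automatically when it exits.
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "open   $host:$port"
  else
    echo "closed $host:$port"
  fi
}

check_port namenode.example.com 8020    # NameNode RPC
check_port datanode1.example.com 50010  # DataNode data transfer
check_port indexer1.example.com 8089    # splunkd management port
```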
