
Hunk bucket archive question?

tsunamii
Path Finder

When Hunk pushes its buckets to HDFS, it also pushes a couple of small supporting files, metadata, etc. Given Hadoop's known issues with handling large numbers of small files, I was wondering whether that is something that has been looked at?

For Hunk archiving on clustered indexers, I understand that if the buckets have two or more searchable copies of the data, searches against that archived data will return duplicate results. Is that correct?


hsesterhenn
Path Finder

Hi

1) Yep, there will be some overhead. Since there are not millions of buckets, I don't think this will be an issue...
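
If you want to quantify that overhead yourself, you can count how many files under the archive path are smaller than one HDFS block. The sketch below is only an illustration: it assumes a made-up archive path of /user/splunk/archive, a 128 MB block size, and that the hdfs CLI is on the PATH.

```python
import subprocess

ARCHIVE_PATH = "/user/splunk/archive"   # hypothetical path; use your actual archive destination
BLOCK_SIZE = 128 * 1024 * 1024          # assumed HDFS block size (128 MB)

# Recursively list the archive; file lines look like:
# <perms> <repl> <owner> <group> <size> <date> <time> <path>
listing = subprocess.run(
    ["hdfs", "dfs", "-ls", "-R", ARCHIVE_PATH],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

small, total = 0, 0
for line in listing:
    fields = line.split()
    if len(fields) < 8 or fields[0].startswith("d"):
        continue                        # skip directories and "Found N items" headers
    total += 1
    if int(fields[4]) < BLOCK_SIZE:
        small += 1                      # small supporting/metadata files land here

print(f"{small} of {total} archived files are smaller than one HDFS block")
```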

2) Every indexer will try to copy "its own" bucket to HDFS. If there is already a valid copy of this bucket, the indexer will skip it.
There should be no duplicates.
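
For what it's worth, that "skip if a valid copy already exists" behaviour can be pictured with a small sketch like the one below. This is not Hunk's actual code, just an illustration under the assumption that a bucket is identified by its bucket ID and that a copy already present in HDFS counts as valid.

```python
import subprocess

def archive_bucket(local_bucket_dir: str, bucket_id: str, hdfs_archive_root: str) -> None:
    """Copy one bucket to HDFS unless a valid copy is already there (illustrative only)."""
    hdfs_dest = f"{hdfs_archive_root}/{bucket_id}"

    # If another cluster peer already archived this bucket, skip it -- this is
    # what prevents duplicates when the bucket has multiple searchable copies.
    already_there = subprocess.run(["hdfs", "dfs", "-test", "-d", hdfs_dest]).returncode == 0
    if already_there:
        print(f"{bucket_id}: already archived, skipping")
        return

    # Otherwise push this peer's copy of the bucket.
    subprocess.run(["hdfs", "dfs", "-put", local_bucket_dir, hdfs_dest], check=True)
    print(f"{bucket_id}: archived to {hdfs_dest}")
```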

HTH,

Holger
