Does anyone have experience indexing an Avro file?
I have Avro data stored in HDFS, but have been unable to find a good way to have Splunk read the binary Avro format without using custom code or other transforms.
Before I go down the custom-code path, I figured I'd ask about others' experiences.
I have tried:
* HadoopConnector - Underlying hadoop fs commands (at least on my CDH4 system) return binary data.
* Flume - Doesn't seem to be able to read in an Avro file source.
* Hue + Splunk w/ http GET - this gets ugly quickly and is super inefficient.
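For what it's worth, the binary output you're seeing is expected: Avro object container files start with a fixed 4-byte magic (the ASCII characters "Obj" followed by byte 0x01), so anything that streams the raw file, like hadoop fs -cat, will look like garbage to Splunk. A quick shell sanity check (the sample file here is a stand-in for a real .avro file pulled from HDFS):

```shell
# Fake the first bytes of an Avro object container file; a real one from
# HDFS begins with the same magic "Obj" + 0x01 before the binary payload.
printf 'Obj\001' > sample.avro

# Peek at the magic; for any valid Avro container this prints: Obj
head -c 3 sample.avro
```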
Thanks
wget 'https://archive.apache.org/dist/avro/avro-1.7.5/py/avro-1.7.5.tar.gz'
tar xvf avro-1.7.5.tar.gz
cd avro-1.7.5
sudo python setup.py install
(or, instead of building from the tarball: pip install avro)
avro cat /avro_file_path/*.avro --format json > "output_file_path/output.json"
Data inputs >> Files & Directories >> Monitor "output_file_path/output.json"
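As an alternative to clicking through the UI, the same monitor can be defined in inputs.conf. A minimal sketch, assuming the output path from the steps above and Splunk's built-in _json sourcetype (index omitted, so it goes to the default):

```
[monitor://output_file_path/output.json]
sourcetype = _json
disabled = false
```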
Sorry for the late reply, but this is what I have found (you may already know this):
There are tools that will convert the Avro binary format to text:
http://avro.apache.org/docs/1.4.1/api/java/org/apache/avro/tool/package-summary.html
-- BinaryFragmentToJsonTool
-- ToTextTool
-- DataFileReadTool
The link below is a good blog article discussing this
http://www.michael-noll.com/blog/2013/03/17/reading-and-writing-avro-files-from-the-command-line/
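Those classes are also exposed as subcommands of the avro-tools jar, so you don't have to call them from Java yourself: DataFileReadTool is "tojson", ToTextTool is "totext", and BinaryFragmentToJsonTool is "fragtojson". A sketch of an invocation (jar version and file names are illustrative; grab the jar from the same Apache download area):

```
java -jar avro-tools-1.7.5.jar tojson part-00000.avro > part-00000.json
```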
Avro is not the only tool that writes a binary format that needs a converter (or translator) before Splunk can read it; some network devices that Splunk monitors behave similarly. The point is that you are not alone.
Hopefully these built-in tools will work for you so you don't have to write and support custom code.