Reporting

Export data to Parquet (Hadoop - Cloudera Stack) | Scheduled job

astone42
Engager

We have a Hadoop cluster based on the Cloudera stack (CDH 5.8.3), and we use the Parquet file format to store the data.
We want to export processed data from Splunk directly into Parquet tables in the Hadoop cluster.

For example, assume a table named user_sessions exists in the Hadoop cluster, stored as Parquet:
1. User session log files are pushed to Splunk.
2. A scheduled Splunk query processes the log files and outputs the results in table format.
3. The data from step 2 is appended to the user_sessions table in the Hadoop cluster.
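As a rough sketch of the hand-off between steps 2 and 3, assuming the scheduled search writes its tabular results to CSV (the filename, column names, and types here are hypothetical, not from the original post):

```python
import csv
import io

def read_session_rows(csv_file):
    """Parse the CSV produced by the scheduled Splunk search into
    a list of row tuples ready to append to user_sessions."""
    reader = csv.DictReader(csv_file)
    # Hypothetical columns; adjust to the real search output.
    return [(r["user_id"], r["session_start"], int(r["duration"]))
            for r in reader]

# In-memory CSV standing in for the scheduled search's output file:
sample = io.StringIO(
    "user_id,session_start,duration\n"
    "u1,2017-01-01T00:00:00,120\n"
    "u2,2017-01-01T00:05:00,45\n"
)
rows = read_session_rows(sample)
```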

A possible solution for step 3 is to create a Splunk custom command that connects to Impala through pyodbc and writes the data using INSERT INTO. The bottleneck with that solution is performance.
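A minimal sketch of that custom-command idea, assuming an ODBC DSN named "Impala" and the hypothetical user_sessions columns from above. Batching all rows through a single parameterized executemany is one common way to soften the per-row INSERT cost, though Impala will still tend to produce small Parquet files per statement:

```python
def build_insert(table, columns):
    """Build a parameterized INSERT INTO statement for the given columns."""
    placeholders = ", ".join("?" for _ in columns)
    return "INSERT INTO %s (%s) VALUES (%s)" % (
        table, ", ".join(columns), placeholders)

def append_rows(rows, dsn="Impala"):
    """Append processed Splunk results to user_sessions via Impala.

    `rows` is a list of (user_id, session_start, duration) tuples.
    """
    import pyodbc  # third-party; requires the Impala ODBC driver

    sql = build_insert("user_sessions",
                       ["user_id", "session_start", "duration"])
    conn = pyodbc.connect(dsn=dsn, autocommit=True)
    try:
        cur = conn.cursor()
        # One parameterized statement for the whole batch, instead of
        # issuing a separate INSERT per row.
        cur.executemany(sql, rows)
    finally:
        conn.close()
```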

Any ideas/suggestions?

Thanks a lot in advance.


rdagan_splunk
Splunk Employee