
Export data to Parquet (Hadoop - Cloudera Stack) | Scheduled job

astone42
Engager

We have a Hadoop cluster based on the Cloudera stack (CDH 5.8.3), and we use the Parquet file format to store the data.
We want to export processed data from Splunk directly into the Parquet tables in the Hadoop cluster.

For example, assume a table named user_sessions exists in the Hadoop cluster and is stored as Parquet.
1. User session log files are pushed to Splunk.
2. A scheduled Splunk query processes the log files and outputs the results in a table format.
3. The output of step 2 is appended to the user_sessions table in the Hadoop cluster (a rough sketch of this step follows below).
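
To make the data flow concrete, here is a minimal sketch of producing one such batch outside Splunk: it re-runs the scheduled search over the Splunk REST API and stages the rows as a Parquet file ready to be appended to user_sessions. The hostname, credentials, SPL, and field names are placeholders, and it assumes splunk-sdk (splunklib), pandas, and pyarrow are available.

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
import splunklib.client as client
import splunklib.results as results

# Connect to the Splunk management port (host and credentials are placeholders).
service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

# Run the same SPL as the scheduled search; the query, fields, and time range
# below are made-up examples of a user-session report.
job = service.jobs.export(
    "search index=web sourcetype=user_sessions "
    "| table session_id user_id session_start duration",
    earliest_time="-1h@h", latest_time="@h")

# Keep only the result rows, skipping informational messages in the stream.
rows = [r for r in results.ResultsReader(job) if isinstance(r, dict)]

# Stage the batch as a Parquet file; this is the unit of data that step 3
# would append to the user_sessions table in the Hadoop cluster.
batch = pa.Table.from_pandas(pd.DataFrame(rows))
pq.write_table(batch, "user_sessions_batch.parquet")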

A possible solution for step 3 is to create a custom Splunk search command that connects to Impala through pyodbc and writes the data with INSERT INTO statements. The bottleneck with that solution is performance.
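
For reference, a rough sketch of what such a command could look like, built on splunklib's search-command framework and a Cloudera Impala ODBC DSN. The command name, DSN, field list, and batch size are placeholders, and it assumes the Impala ODBC driver accepts parameterized INSERTs.

#!/usr/bin/env python
# exportimpala.py - hypothetical usage: ... | exportimpala dsn=Impala table=user_sessions fields="session_id,user_id,duration"
import sys
import pyodbc
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option

@Configuration()
class ExportImpalaCommand(StreamingCommand):
    """Append search results to an Impala table over ODBC (sketch only)."""
    dsn = Option(require=True)      # ODBC DSN configured for the Impala driver
    table = Option(require=True)    # target table, e.g. user_sessions
    fields = Option(require=True)   # comma-separated list of fields to insert

    def stream(self, records):
        cols = [f.strip() for f in self.fields.split(",")]
        placeholders = ", ".join("?" for _ in cols)
        insert_sql = "INSERT INTO %s (%s) VALUES (%s)" % (
            self.table, ", ".join(cols), placeholders)

        conn = pyodbc.connect("DSN=%s" % self.dsn, autocommit=True)
        cursor = conn.cursor()
        batch = []
        try:
            for record in records:
                batch.append([record.get(c) for c in cols])
                # Send rows in batches to cut down on round trips.
                if len(batch) >= 1000:
                    cursor.executemany(insert_sql, batch)
                    batch = []
                yield record
            if batch:
                cursor.executemany(insert_sql, batch)
        finally:
            conn.close()

dispatch(ExportImpalaCommand, sys.argv, sys.stdin, sys.stdout, __name__)

Batching rows with executemany reduces round trips, but each INSERT statement still tends to leave small Parquet files behind on the Impala side, which is where the performance concern above comes from.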

Any ideas/suggestions?

Thanks a lot in advance.


rdagan_splunk
Splunk Employee