Hello Splunk community!
I’m planning a project that involves sending event data from Splunk to Elasticsearch using Kafka, and I haven’t started implementing it yet.
I’m looking for your guidance and assistance to implement this solution efficiently and securely.
I would greatly appreciate any advice, recommendations, or ready-to-use tools to help with the following (a rough sketch of what I have in mind so far follows the list):
Extracting data from a Splunk index.
Sending the data to Kafka as a temporary transport layer.
Formatting events in JSON with a consistent schema.
Ingesting data into Elasticsearch with proper index settings.
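Roughly, for the first three items, I imagine something like the sketch below. It is untested and assumes the requests and kafka-python libraries; the Splunk host, credentials, search, Kafka broker, and topic name are all placeholders:

import json
import requests
from kafka import KafkaProducer

SPLUNK_EXPORT = "https://splunk.example.com:8089/services/search/jobs/export"

# Producer that serialises each event dict to JSON before sending it to Kafka.
producer = KafkaProducer(
    bootstrap_servers="kafka.example.com:9092",
    value_serializer=lambda d: json.dumps(d).encode("utf-8"),
)

# The export endpoint streams one JSON object per line while the search runs.
resp = requests.post(
    SPLUNK_EXPORT,
    auth=("admin", "changeme"),
    data={"search": "search index=main earliest=-15m", "output_mode": "json"},
    stream=True,
    verify=False,  # use a proper CA bundle outside of a lab
)
for line in resp.iter_lines():
    if not line:
        continue
    row = json.loads(line)
    if "result" in row:  # each streamed line wraps a single event
        producer.send("splunk-events", row["result"])

producer.flush()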
Any experience, suggestions, or guidance from the community would be highly appreciated.
OK. What have you tried so far and where did you get stuck?
Hi @kn450
Just to confirm, are you looking to send data to ElasticSearch from Splunk as it's received, i.e. not historic indexed Splunk data?
I think the only supported way to send your incoming data from Splunk out to Kafka is to use a tcpout stanza in your outputs.conf with sendCookedData=false, and to set up an input on the Kafka end to receive this unparsed Splunk data.
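A minimal outputs.conf sketch of that idea might look something like this (the group name, host, and port are placeholders; routing specific data to that group would normally be done with _TCP_ROUTING in inputs.conf or props/transforms.conf):

[tcpout:kafka_relay]
server = kafka-listener.example.com:9997
sendCookedData = false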
In terms of parsing the data correctly in Kafka, this probably isn't something you'll get the best help with from within this community.
If you need to move existing indexed data then that is a different issue - it depends on how much data you need to move. What's the rough size of this data? Do you need it to remain in Splunk afterwards?
I hope you’re all doing well. I am currently working on a project involving exporting large volumes of data from Splunk to Elasticsearch / OpenSearch, and I would like to tap into your expertise and assistance on this matter.

Project background: we’re using Splunk as our repository and analytical tool for security and operational data. Due to the license size constraints in Splunk, there’s a need to export this data to Elasticsearch, where scalability and storage capabilities are better. The goal is to export the data with exactly the same field names as used in Splunk, without changing or losing any field names, so that we can continue our queries and analysis using the same structure we have in Splunk.
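For the Elasticsearch side, the rough idea I have is to index each event document as-is so that the Splunk field names pass through unchanged. Something like the sketch below (untested, assuming the kafka-python and elasticsearch Python clients; hosts, credentials, topic, and index name are placeholders, and index templates/mappings would still need to be defined separately):

import json
from kafka import KafkaConsumer
from elasticsearch import Elasticsearch, helpers

consumer = KafkaConsumer(
    "splunk-events",
    bootstrap_servers="kafka.example.com:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
es = Elasticsearch("https://es.example.com:9200", basic_auth=("elastic", "changeme"))

def actions():
    # Index each event exactly as it arrives, so the field names stay identical
    # to what they were in Splunk.
    for msg in consumer:
        yield {"_index": "splunk-export", "_source": msg.value}

# helpers.bulk consumes the generator in chunks, so this keeps indexing for as
# long as the consumer keeps receiving messages.
helpers.bulk(es, actions())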
As already asked, we need more information about your needs and especially your business case!
Yes, it's possible to use Kafka as an output target on a Splunk HF. But why do you want to use it, and is there another way to achieve Splunk -> ElasticSearch that is cheaper, has fewer moving parts, and follows the KISS principle?
If not, which Kafka installation are you using? Some open source distribution, Confluent, Aiven, or something else?
And are you sending new events which you are also ingesting into Splunk, or are you collecting them with a UF/HF and the only target is ES via Kafka? Or do you want to send events which have already been indexed in Splunk?
And what is your environment: OS, Splunk version, etc.?