Getting Data In

splunk-connect-for-kubernetes breaking on every newline (\n) ?

trsabbot
New Member

Hello! Posting here checks off a huge bucket-list item for me!

I am hoping that what I am sharing is a known issue with a known solution that I have simply been unable to locate.

We have ~90 different services on AWS EKS clusters, with mixed languages and standards (or lack thereof), and we need to migrate our current logging solution (log -> CloudWatch -> Lambda -> Splunk UF -> index cluster) from a CloudWatch-based pipeline to a splunk-connect-for-kubernetes-based one.

The only problem with the existing solution is that using CloudWatch is a little pricey; if we can simplify our monitoring while saving money, and reduce the delay in getting logged events into Splunk, even better.

Everything is working with splunk-connect-for-kubernetes except multi-line events (Java stack traces, MSSQL errors, etc.). Everything we have tried so far to keep these events together as a single multi-line event has failed, with each event getting broken into multiple single-event snippets.

We think it might be possible, in theory, to write service-specific fluentd filters for all 90 services, each one matching at least one event pattern, but we suspect this is not a feasible approach for the long term.
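For reference, a single service-specific filter of the kind described above would look roughly like the sketch below, using the fluent-plugin-concat plugin (assuming it is available in the splunk-connect-for-kubernetes fluentd image). The pod name `my-java-service`, the `tail.containers.*` tag pattern, and the `@SPLUNK` routing label are assumptions to adjust for your deployment:

```conf
<filter tail.containers.var.log.containers.my-java-service*.log>
  @type concat
  key log
  # Lines that look like Java stack-trace continuations are glued
  # onto the preceding event instead of becoming separate events.
  continuous_line_regexp /^(\s+at\s|Caused by:|\s+\.\.\. \d+ more)/
  separator "\n"
  # If no continuation arrives within 5s, flush the buffered event
  # and route it back to the normal Splunk output label.
  flush_interval 5
  timeout_label @SPLUNK
</filter>
```

Multiplied across ~90 services and every log shape each one emits, this is exactly the maintenance burden that makes the per-service approach unattractive.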

We acknowledge this might make a great use case for revisiting and prioritizing standard logging across all the services, but we feel that will be a hard sell given the need to deliver a working solution sooner rather than later, if possible.

Looking at the boards, I see that multiline support is community-supported, and the closest relevant issue I found is:
https://github.com/splunk/splunk-connect-for-kubernetes/issues/372
This issue describes the problem exactly, but while the last two snippets in the thread are promising, the proposed solution would not work for us, as it seems to depend on a regular character sequence starting every new event.
This other issue may also be related:
https://github.com/splunk/splunk-connect-for-kubernetes/issues/459

We are really hoping for a generic solution that will match the myriad logging patterns in our services, without having to define matching primary and secondary filters for every log-structure variation currently present in our logs, as per this ref:
https://github.com/splunk/splunk-connect-for-kubernetes/issues/255#issuecomment-639915496
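One possible middle ground, short of per-service filters, is a single concat filter keyed on a heuristic that holds for many loggers: a new event starts with a timestamp, and anything else is a continuation of the previous line. This is only a sketch (the tag pattern, the `@SPLUNK` label, and the timestamp shapes are assumptions; any service that does not prefix events with a timestamp would still need its own rule):

```conf
<filter tail.containers.**>
  @type concat
  key log
  # Treat any line that begins with a recognizable timestamp as the
  # start of a new event; everything else continues the previous one.
  # Covers ISO-8601 (2023-01-31T12:00:00 / 2023-01-31 12:00:00) and
  # syslog-style (Jan 31 12:00:00) prefixes; extend as needed.
  multiline_start_regexp /^(\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}|[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2})/
  separator "\n"
  # With no end marker, flush buffered events after 5s of silence so
  # slow streams are not held indefinitely.
  flush_interval 5
  timeout_label @SPLUNK
</filter>
```

The trade-off is that a final log line followed by a quiet period is delayed by the flush interval, and any event whose first line lacks a timestamp gets appended to the previous event.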

Thank you all for getting through my long post!
