Getting Data In

splunk-connect-for-kubernetes breaking on every newline (\n)?

trsabbot
New Member

Hello! Posting here checks a huge item off my bucket list!

I am hoping that what I am sharing is a known issue with a known solution that I have simply been unable to locate.

We have ~90 different services on AWS EKS clusters, with mixed languages and standards (or lack thereof), and we need to migrate our current logging pipeline (log -> CloudWatch -> Lambda -> Splunk UF -> index cluster) from its CloudWatch-based design to one based on splunk-connect-for-kubernetes.

The only problem with the existing solution is that CloudWatch is a little pricey; if we can simplify our monitoring while saving money, and reduce the delay in getting logged events into Splunk, even better.

Everything is working with splunk-connect-for-kubernetes except multi-line events (Java stack traces, MSSQL errors, etc.). Everything we have tried so far to keep these together as single multi-line events has failed: each event gets broken into multiple single-line snippets.
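To make the failure concrete, a single logical event like this (hypothetical) stack trace:

2024-01-15 10:32:07,123 ERROR [http-nio-8080-exec-4] c.e.OrderService - order lookup failed
java.sql.SQLException: Connection refused
    at com.example.OrderRepository.find(OrderRepository.java:42)
    at com.example.OrderService.lookup(OrderService.java:17)

arrives in Splunk as four separate one-line events, one per newline.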

We think it might be possible, in theory, to write service-specific fluentd filters for all 90 services, each following at least one eventing pattern (a sketch of what we mean follows), but we suspect this is not a feasible approach for the long term.
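For concreteness, this is the kind of per-service stanza we mean — a minimal sketch against the splunk-kubernetes-logging chart's values.yaml, using its logs.<name>.multiline.firstline option (the service name and the regex are placeholders standing in for one of our Java services):

logs:
  orders-service:              # hypothetical service name
    from:
      pod: orders-service      # match pods by name prefix
    multiline:
      # a line starting with a timestamp begins a new event;
      # stack-trace lines ("at ...", "Caused by: ...") do not match,
      # so they stay glued to the preceding line
      firstline: /^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}/

Multiply that by ~90 services, each needing its own firstline regex (or several), and the maintenance cost seems prohibitive.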

We acknowledge this might make a great case for revisiting and prioritizing standardized logging across all the services, but we feel that will be a hard sell, given the need to deliver a working solution sooner rather than later.

Looking at the boards, I see that multiline handling is community supported, and the closest relevant issue I found is:
https://github.com/splunk/splunk-connect-for-kubernetes/issues/372
That issue describes the problem exactly, and while the last two snippets in the thread are promising, the proposed solution would not work for us, as it seems to depend on a regular character sequence starting every new event.
This other issue may also be related:
https://github.com/splunk/splunk-connect-for-kubernetes/issues/459

We are really hoping for a generic solution that will match the myriad logging patterns across our services, without having to define matching primary and secondary filters for every log-structure variation currently present in our logs, as per this reference (the closest thing to a generic heuristic we can imagine is sketched below):
https://github.com/splunk/splunk-connect-for-kubernetes/issues/255#issuecomment-639915496
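The closest we have come to a generic heuristic is a single concat filter that treats any indented line as a continuation of the previous event, sketched below via the chart's customFilters hook. The filter name is ours, we are assuming customFilters will render a multi-line body verbatim, and the obvious catch is that any service whose events legitimately begin with whitespace would be mis-joined:

customFilters:
  GenericMultilineConcat:
    tag: tail.containers.**
    type: concat
    body: |
      key log
      # heuristic: a new event starts at column 0; continuation lines
      # (stack frames, wrapped SQL errors) are indented
      multiline_start_regexp /^\S/
      flush_interval 5
      # flush timed-out buffers onward (assuming @SPLUNK is the chart's
      # output label) so trailing events are not dropped
      timeout_label @SPLUNK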

Thank you all for getting through my long post!
