
splunk-connect-for-kubernetes breaking events on every newline (\n)?

trsabbot
New Member

Hello! Posting here checks off a huge bucket-list item for me!

I am hoping what I am describing is a known issue with a known solution that I have simply been unable to locate.

We have ~90 different services running on AWS EKS clusters, in mixed languages and with mixed standards (or a lack thereof), and we need to migrate our current logging pipeline (log -> CloudWatch -> Lambda -> Splunk UF -> index cluster) from a CloudWatch-based solution to a splunk-connect-for-kubernetes-based one.

The only problem with the existing solution is that CloudWatch is a little pricey; if we can simplify our monitoring while saving money, and also reduce the delay in getting logged events into Splunk, even better.

Everything is working with splunk-connect-for-kubernetes except multi-line events (Java stack traces, MSSQL errors, etc.). Everything we have tried so far to keep these together as a single multi-line event has failed, with each event getting broken into multiple single-line snippets.

In theory we could write service-specific fluentd filters for all 90 services, one for each eventing pattern a service follows (a sketch of what one such filter might look like is below), but we suspect this is not a feasible approach for the long term.
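For context, here is roughly what one such per-service filter might look like. This is only a sketch based on my reading of fluent-plugin-concat (which I believe the splunk-connect-for-kubernetes logging image ships with); the tag, pod name, and regex below are made up for illustration, and it assumes the service starts every new event with an ISO-8601 timestamp, which many of our services do not:

# Hypothetical per-service concat filter; the tag, pod name, and regex are illustrative only
<filter tail.containers.var.log.containers.orders-api*.log>
  @type concat
  key log
  # A new event is assumed to start with a timestamp; following lines are appended to it
  multiline_start_regexp /^\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}/
  # Flush a buffered event if no further lines arrive within 5 seconds
  flush_interval 5
</filter>

Multiply that by 90 services (and by every distinct event pattern within each service) and the maintenance burden becomes obvious.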

We acknowledge this might be a great case for revisiting and prioritizing standardized logging across all the services, but we feel that will be a hard sell given the need to deliver a working solution sooner rather than later.

Looking at the boards, I see that multiline handling is community supported, and the closest relevant issue I found is:
https://github.com/splunk/splunk-connect-for-kubernetes/issues/372
This issue describes our exact problem, and while the last two snippets in the thread are promising, the proposed solution would not work for us because, like the filter sketched above, it seems to depend on a regular character sequence starting every new event.
This other issue may also be related:
https://github.com/splunk/splunk-connect-for-kubernetes/issues/459

We are really hoping for a generic solution that can handle the myriad logging patterns in our services, without having to define matching primary and secondary filters for every log structure variation currently present in our logs, as in this reference:
https://github.com/splunk/splunk-connect-for-kubernetes/issues/255#issuecomment-639915496

Thank you all for getting through my long post!
