Getting Data In

splunk-connect-for-kubernetes breaking on every newline (\n)?


Hello, Posting here checks off a huge bucket list for me!

I am hoping this is a known issue with a known solution that I have simply been unable to locate.

We have ~90 different services on AWS EKS clusters, in mixed languages and with mixed standards (or lack thereof), and need to migrate our current logging pipeline (log -> CloudWatch -> Lambda -> Splunk UF -> index cluster) from a CloudWatch-based solution to a splunk-connect-for-kubernetes-based one.

The only problem with the existing solution is that CloudWatch is a little pricey; if we can simplify our monitoring while saving money, and also reduce the delay in getting logged events into Splunk, even better.

Everything is working with splunk-connect-for-kubernetes except multi-line events (Java stack traces, MSSQL errors, etc.). Everything we have tried so far to keep these together as a single multi-line event has failed, with each event getting broken into multiple single-line snippets.

We think it might be possible, in theory, to write service-specific fluentd filters for all 90 services, one for each event pattern they follow, but we suspect this is not a feasible approach for the long term.
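For context, this is the shape of filter we have been experimenting with, a sketch using fluent-plugin-concat (which ships with splunk-connect-for-kubernetes). The match tag, the `log` key, the timestamp regex, and the `@SPLUNK` label are assumptions here and would need adjusting per deployment and per service:

```
<filter tail.containers.**>
  @type concat
  key log
  # Assumption: each new event begins with an ISO-8601-style timestamp,
  # e.g. "2021-06-01 12:34:56" or "2021-06-01T12:34:56". Continuation
  # lines (stack-trace frames, SQL error detail) do not match, so they
  # are appended to the preceding event.
  multiline_start_regexp /^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}/
  # Flush a partially assembled event if no new start line arrives,
  # so the last event of a burst is not held back indefinitely.
  flush_interval 5s
  # Route flushed-on-timeout events back into the normal output path
  # (label name is an assumption; match it to your pipeline).
  timeout_label @SPLUNK
  separator "\n"
</filter>
```

This works for services whose events reliably start with a timestamp, but falls over exactly where our problem lies: services with different start-of-event patterns each need their own `multiline_start_regexp`.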

We acknowledge this might make a great use case for revisiting and prioritizing standard logging across all services, but we feel that will be a hard sell given the need to deliver a working solution sooner rather than later.

Looking at the boards, I see that multiline is community-supported, and the closest relevant issue I found is:
This issue details the problem exactly, and while the last two snippets in the thread are promising, the proposed solution would not work for us, as it seems to depend on a regular character sequence starting every new event.
This other issue may also be related:

We are really hoping for a generic solution that will match the myriad logging patterns across our services, without having to define matching primary and secondary filters for every log structure variation currently present in our logs, as per this ref:

Thank you all for getting through my long post!




