
Is there a way to replace fluentd in Rancher with Splunk, or make the built-in system fluentd handle multiline logs?

ifeldshteyn
Communicator

Hello,

We are using Rancher and are able to successfully send logs to Splunk. On the Rancher side, fluentd is the component that collects the logs.

The Docker containers are monitored by fluentd, which adds some metadata and then sends the events out to Splunk HEC.

Fluentd reads each Docker container's JSON log records, which contain the stream, timestamp, and log line. The issue with fluentd is that it does not handle multiline logs at all, so a single Java stack trace generates a hundred events, which are then all sent to Splunk.
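For context, each line a container writes becomes its own JSON record via Docker's json-file log driver, roughly like this (the stack trace is made up for illustration):

{"log":"2022-07-13T19:55:23.884Z ERROR Request failed\n","stream":"stdout","time":"2022-07-13T19:55:23.884941Z"}
{"log":"java.lang.NullPointerException: null\n","stream":"stdout","time":"2022-07-13T19:55:23.885012Z"}
{"log":"\tat com.example.Handler.process(Handler.java:42)\n","stream":"stdout","time":"2022-07-13T19:55:23.885020Z"}

Fluentd tails these records one at a time, so each of those three lines ends up as its own Splunk event instead of one event for the whole stack trace.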

Is there a way to either replace fluentd completely in Rancher with Splunk (the built-in Splunk forwarder only sees the logs AFTER fluentd has had its way with the payloads), or to reconfigure the built-in system fluentd so that it handles multiline logs?

Thanks,
Ilya


paulcgt
Explorer

Hi @ifeldshteyn, did you ever manage to find a solution to this? I'd really like for fluentd to handle multiline events in Rancher. The way it's working now, an event per line, is rather painful.


ifeldshteyn
Communicator

Hi,

Yes. This was an absolute pain. The trick is to tell fluentd to ONLY start a new event when a line hits a particular regex marker, and to spool everything after that marker into a single event. The plugin you need is fluent-plugin-concat, configured with the multiline_start_regexp parameter.
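One note: fluent-plugin-concat is not bundled with core fluentd, so depending on how your Rancher fluentd image is built you may need to add the gem yourself (the exact mechanism depends on your image), e.g.:

fluent-gem install fluent-plugin-concat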

Here is what worked for me.

Just add this to your filter.conf and then include it in fluent.conf with @include /fluentd/etc/filter.conf

The code is below. 

<filter "**">
  @type concat
  # concatenate the "log" field of the Docker JSON record
  key log
  # no extra separator needed - the newlines are already baked into the log values
  separator ""
  # a new event starts on a date (optionally prefixed with "[" or "time=") or a "URL: http" line
  multiline_start_regexp /^(\[|time=)?\d{4}.\d{2}.\d{2}|^URL: http/
  # keep streams apart using the record's tag field
  stream_identity_key tag
</filter>
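
For reference, this is roughly how it hangs together in fluent.conf. The source and match below are only placeholders standing in for whatever your Rancher setup already generates (paths, tag, and HEC details will differ); the only new line is the @include:

# fluent.conf (sketch - keep your existing source/match, only the @include is new)
<source>
  @type tail
  path /var/lib/docker/containers/*/*-json.log
  pos_file /var/log/fluentd-containers.pos
  tag docker.*
  <parse>
    @type json
  </parse>
</source>

@include /fluentd/etc/filter.conf

<match "**">
  @type splunk_hec
  hec_host splunk.example.com
  hec_port 8088
  hec_token YOUR-HEC-TOKEN
</match>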

So a few explanations....

In this case the filter will concatenate consecutive lines into a single event, hence @type concat. The data it will be looking at is the log value (that's the key log setting).

There is no need for custom separators between the lines (not even EOLs, since they are already baked into the log messages). The multiline_start_regexp is the key thing. Don't mess this up: it tells the plugin which lines start a new event, and everything up to the next match gets folded into that event. So if each of your multiline messages starts with HELLOWORLD, then your multiline_start_regexp would be /^HELLOWORLD/. In our case it is just a timestamp.
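To make that concrete, with the regex above, input like this (made-up log lines):

2022-07-13 19:55:23 ERROR Something broke
java.lang.NullPointerException: null
	at com.example.Foo.bar(Foo.java:12)
2022-07-13 19:55:24 INFO Next request

comes out as two events: one containing the first three lines (the stack trace stays attached to its timestamped line) and one for the INFO line, because only the timestamped lines match multiline_start_regexp.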

The stream_identity_key is super important. If you have multiple loggers writing into the SAME logfile you have to separate them, otherwise you will get word soup: the plugin would concatenate lines from different applications. Fortunately, you can split them on tag, which is the unique identifier per stream.
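As a concrete example (tag values made up): if records from two containers arrive interleaved as

docker.app   2022-07-13 19:55:23 ERROR Something broke
docker.nginx 192.0.2.10 - - [13/Jul/2022:19:55:23 +0000] "GET / HTTP/1.1" 200
docker.app   java.lang.NullPointerException: null

the plugin buffers docker.app and docker.nginx separately, so the NullPointerException line is appended to the docker.app event and never glued to the nginx access line.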

Hope this helps. Please read and understand this plugin here --> https://github.com/fluent-plugins-nursery/fluent-plugin-concat

 

Thanks,

Ilya 

paulcgt
Explorer

Thanks for the explanation, links, and examples, Ilya!

If I take a look at the events we're recording across various containers, it's clear that they don't use a common log format. Some dates are formatted 2022-07-13T19:55:23.884Z, while others are 2022/07/13 19:55, and yet others are 13 July 2022, 19:55. Some containers (perhaps HAProxy or NGINX) start with the client's IP address, followed by a date in square brackets. Other log files I've found simply have no date at all!

We have a number of dev teams deploying to the cluster, and it's impossible at this stage to make them all use the same log format so that the regex would always match.

Have you standardized your logging format across containers - or am I missing something here? Please tell me that I'm missing something. 😉


ifeldshteyn
Communicator

It's nearly impossible, especially for 3rd party services, to have a standard log format. The trick is to join the start patterns for the various formats with OR (the | symbol) into one regex - see the sketch below. I prefer standard syslog logging myself.
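As a rough, untested sketch for the formats you listed (these patterns are just guesses at your data, so tweak them against real samples):

multiline_start_regexp /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}|^\d{4}\/\d{2}\/\d{2} \d{2}:\d{2}|^\d{1,2} \w+ \d{4}, \d{2}:\d{2}|^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}.*\[/

Containers that log with no timestamp at all are the hard case; for those you would probably need a separate filter, or just accept one event per line.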


paulcgt
Explorer

Yeah, that does make it hard because I don't have a lot of control and there's quite a variety of logging formats. That said, perhaps the 80/20 rule just has to apply here. Thank you for your help!
