K8s to Splunk: logging specific messages to a specific index (via outcoldman collectord)

basti005
New Member

Hey everyone, I'm in a situation where I have to provide a solution for a client of mine.

Our application is deployed on their k8s cluster and logs everything to stdout, from where they pick it up and send it to a Splunk index; let's call the index "standardIndex".


Due to a change in legislation, and a change in how they operate under it, we need to route certain log lines, selected by message content (easiest for us), to a separate index we can call "specialIndex".

I managed to rewrite the messages we log to satisfy their needs in that regard, but I'm still failing to get those messages into a separate index.
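
For reference, the raw stdout lines are JSON and look roughly like this (simplified, other fields omitted):

  {"message":"Starting to do the thing.", ... ,"@timestamp":"2024-05-02T10:15:30.000Z"}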

The collectord annotations I put in our patch look like this, and they seem to work just fine:

spec:
  replicas: 1
  template:
    metadata:
      annotations:
        collectord.io/logs-replace.1-search: '"message":"(?P<message>Error while doing the special thing\.).*?"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.1-val: '${timestamp} message="${message}" applicationid=superImportant status=failed'
        collectord.io/logs-replace.2-search: '"message":"(?P<message>Starting to do the thing\.)".*?"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.2-val: '${timestamp} message="${message}" applicationid=superImportant status=pending'
        collectord.io/logs-replace.3-search: '"message":"(?P<message>Nothing to do but completed the run\.)".*?"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.3-val: '${timestamp} message="${message}" applicationid=superImportant status=successful'
        collectord.io/logs-replace.4-search: '("message":"(?P<message>Deleted \d+ of the thing [^\s]+ where type is [^\s]+ with id)[^"]*").*"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.4-val: '${timestamp} message="${message} <removed>" applicationid=superImportant status=successful'
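
After the rewrite, the matched events come out as key=value lines built from the -val templates above, for example (timestamp made up):

  2024-05-02T10:15:30.000Z message="Starting to do the thing." applicationid=superImportant status=pending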

My only remaining goal is to send these specific messages to a specific index, and this is where I can't quite follow the outcoldman documentation. Honestly, I'm starting to doubt whether it's even possible, but I may just not be understanding it fully.

Does anyone have a hint?


datadevops
Path Finder

Hi there,

While Collectord annotations are great for parsing and modifying logs, routing to a different index requires additional configuration. Here's one way you could approach it:

1. Utilize Output Plugins:

  • Within your pod configuration, define separate output plugins for standardIndex and specialIndex. This can be done using Fluentd or other log shippers depending on your setup.

2. Leverage Filters:

  • Ahead of the outputs, configure filters that match the extracted message content with regular expressions and retag/label the matching events. The tag then determines which output, and therefore which index, each log is routed to.

3. Example Configuration:

Here's a simplified example demonstrating the concept:

spec:
  containers:
  - name: my-app
    image: my-app-image
    ...
    volumeMounts:
    - name: fluentd-conf
      mountPath: /etc/fluentd/conf.d
    ...
  volumes:
  - name: fluentd-conf
    configMap:
      name: fluentd-config

fluentd-config/my-app.conf (a sketch; it assumes the fluent-plugin-rewrite-tag-filter and fluent-plugin-splunk-hec plugins are installed and that HEC is enabled on your Splunk deployment):

  # Tail the container log and parse each line as JSON (path is illustrative)
  <source>
    @type tail
    path /var/log/containers/my-app*.log
    pos_file /var/log/fluentd-my-app.pos
    tag app.my-app
    <parse>
      @type json
    </parse>
  </source>

  # Retag events whose message matches the special patterns;
  # everything else falls through to the standard route
  <match app.my-app>
    @type rewrite_tag_filter
    <rule>
      key message
      pattern /(Error while doing the special thing)|(Starting to do the thing)|(Nothing to do but completed the run)|(Deleted \d+ of the thing)/
      tag special.my-app
    </rule>
    <rule>
      key message
      pattern /.*/
      tag standard.my-app
    </rule>
  </match>

  # Default route: everything goes to standardIndex
  <match standard.my-app>
    @type splunk_hec
    hec_host splunk_server
    hec_port 8088
    hec_token YOUR_HEC_TOKEN
    index standardIndex
  </match>

  # Special route: the matched messages go to specialIndex
  <match special.my-app>
    @type splunk_hec
    hec_host splunk_server
    hec_port 8088
    hec_token YOUR_HEC_TOKEN
    index specialIndex
  </match>
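
For completeness, that conf file is what would live in the fluentd-config ConfigMap the pod spec above mounts at /etc/fluentd/conf.d. A minimal sketch (names taken from the example above):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: fluentd-config
  data:
    my-app.conf: |
      # paste the fluentd configuration shown above here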

Remember:

  • Adjust the configuration to match your specific deployment and Splunk setup.
  • Consider using tools like Fluent Bit for more advanced filtering and routing capabilities.
  • Test your configuration thoroughly in a non-production environment before deploying to production.

~ If the reply helps, a Karma upvote would be appreciated
