Getting Data In

Splunk Connect for Kubernetes - what are the fluentd:monitor-agent logs?

AHBrook
Path Finder

Hey everyone!

I've successfully set up Splunk Connect for Kubernetes (SCK) on our OpenShift environment. It outputs to a local heavy forwarder, which then splits the data stream and sends it to both our on-prem Splunk instance and a proof-of-concept Splunk Cloud instance (which we're hoping to move to in the future).
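
For context on the forwarding side, the heavy forwarder simply clones its output to both destinations. A rough sketch of what its outputs.conf looks like (hostnames are placeholders, and the real Splunk Cloud target group actually comes from the forwarder credentials app with its SSL settings, which I've left out):

[tcpout]
defaultGroup = onprem_indexers, splunkcloud_indexers

# On-prem indexing tier
[tcpout:onprem_indexers]
server = onprem-idx.example.com:9997

# Proof-of-concept Splunk Cloud stack
[tcpout:splunkcloud_indexers]
server = inputs.example.splunkcloud.com:9997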

I have the system set up so that it sends most of its logs to an index called "test_ocp_logs". This covers events with sourcetypes of the form [ocp:container:ContainerName].

However, I am getting some strange logs in our root "test" index, which I have set up as the default index in the configuration. They have the following info:

  • source = namespace:splunkconnect/pod:splunkconnect-splunk-kubernetes-logging-XXXXX
  • sourcetype = fluentd:monitor-agent

These look like some kind of report on what the SCK system grabbed and processed, but I can't seem to find any kind of definition anywhere.
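
To isolate the events in question, a simple search against that index pulls them up (index and sourcetype exactly as listed above):

index="test" sourcetype="fluentd:monitor-agent"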

Here's what one of the events looks like:

{
   emit_records: 278304
   emit_size: 0
   output_plugin: false
   plugin_category: filter
   plugin_id: object:c760
   retry_count: null
   type: jq_transformer
}

So I have a few main questions:

  1. What are these logs, and are they something we should care about?
  2. If we should care about them, what do the fields mean?
  3. If we should care about them, how do I control where they go so that all my SCK/OpenShift events stay in the same index (at least for now)?

For reference, here are the contents of my values.yaml for the Helm chart that builds SCK:

global:
  logLevel: info
  splunk:
    hec:
      host: REDACTED
      port: 8088
      token: REDACTED
      protocol:
      indexName: test
      insecureSSL: true
      clientCert:
      clientKey:
      caFile:
      indexRouting:
  kubernetes:
    clusterName: "paas02-t"
  prometheus_enabled:
  monitoring_agent_enabled:
  monitoring_agent_index_name:
  serviceMonitor:
    enabled: false

    metricsPort: 24231
    interval: ""
    scrapeTimeout: "10s"

    additionalLabels: { }

splunk-kubernetes-logging:
  enabled: true
  logLevel:
  fluentd:
    # Restricting to APP logs only for the proof of concept
    path: /var/log/containers/*APP*.log
    exclude_path:
      - /var/log/containers/kube-svc-redirect*.log
      - /var/log/containers/tiller*.log
      - /var/log/containers/*_kube-system_*.log
      # ignoring internal Openshift Logging generated errors
      - /var/log/containers/*_openshift-logging_*.log

  containers:
    path: /var/log
    pathDest: /var/lib/docker/containers
    logFormatType: cri
    logFormat: "%Y-%m-%dT%H:%M:%S.%N%:z"
    refreshInterval:

  k8sMetadata:
    podLabels:
      - app
      - k8s-app
      - release
    watch: true
    cache_ttl: 3600

  sourcetypePrefix: "ocp"
  
  rbac:
    create: true
    openshiftPrivilegedSccBinding: true

  serviceAccount:
    create: true
    name: splunkconnect

  podSecurityPolicy:
    create: false
    apparmor_security: true
  splunk:
    hec:
      host:
      port:
      token:
      protocol:
      indexName: test_ocp_logs
      insecureSSL:
      clientCert:
      clientKey:
      caFile:

  journalLogPath: /run/log/journal
  charEncodingUtf8: false

  logs:
    docker:
      from:
        journald:
          unit: docker.service
      timestampExtraction:
        regexp: time="(?<time>\d{4}-\d{2}-\d{2}T[0-2]\d:[0-5]\d:[0-5]\d.\d{9}Z)"
        format: "%Y-%m-%dT%H:%M:%S.%NZ"
      sourcetype: kube:docker
    kubelet: &glog
      from:
        journald:
          unit: kubelet.service
      timestampExtraction:
        regexp: \w(?<time>[0-1]\d[0-3]\d [^\s]*)
        format: "%m%d %H:%M:%S.%N"
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kubelet
    etcd:
      from:
        pod: etcd-server
        container: etcd-container
      timestampExtraction:
        regexp: (?<time>\d{4}-\d{2}-\d{2} [0-2]\d:[0-5]\d:[0-5]\d\.\d{6})
        format: "%Y-%m-%d %H:%M:%S.%N"
    etcd-minikube:
      from:
        pod: etcd-minikube
        container: etcd
      timestampExtraction:
        regexp: (?<time>\d{4}-\d{2}-\d{2} [0-2]\d:[0-5]\d:[0-5]\d\.\d{6})
        format: "%Y-%m-%d %H:%M:%S.%N"
    etcd-events:
      from:
        pod: etcd-server-events
        container: etcd-container
      timestampExtraction:
        regexp: (?<time>\d{4}-[0-1]\d-[0-3]\d [0-2]\d:[0-5]\d:[0-5]\d\.\d{6})
        format: "%Y-%m-%d %H:%M:%S.%N"
    kube-apiserver:
      <<: *glog
      from:
        pod: kube-apiserver
      sourcetype: kube:kube-apiserver
    kube-scheduler:
      <<: *glog
      from:
        pod: kube-scheduler
      sourcetype: kube:kube-scheduler
    kube-controller-manager:
      <<: *glog
      from:
        pod: kube-controller-manager
      sourcetype: kube:kube-controller-manager
    kube-proxy:
      <<: *glog
      from:
        pod: kube-proxy
      sourcetype: kube:kube-proxy
    kubedns:
      <<: *glog
      from:
        pod: kube-dns
      sourcetype: kube:kubedns
    dnsmasq:
      <<: *glog
      from:
        pod: kube-dns
      sourcetype: kube:dnsmasq
    dns-sidecar:
      <<: *glog
      from:
        pod: kube-dns
        container: sidecar
      sourcetype: kube:kubedns-sidecar
    dns-controller:
      <<: *glog
      from:
        pod: dns-controller
      sourcetype: kube:dns-controller
    kube-dns-autoscaler:
      <<: *glog
      from:
        pod: kube-dns-autoscaler
        container: autoscaler
      sourcetype: kube:kube-dns-autoscaler
    kube-audit:
      from:
        file:
          path: /var/log/kube-apiserver/audit.log
      timestampExtraction:
        format: "%Y-%m-%dT%H:%M:%SZ"
      sourcetype: kube:apiserver-audit
    openshift-audit:
      from:
        file:
          path: /var/log/openshift-apiserver/audit.log
      timestampExtraction:
        format: "%Y-%m-%dT%H:%M:%SZ"
      sourcetype: kube:openshift-apiserver-audit
    oauth-audit:
      from:
        file:
          path: /var/log/oauth-apiserver/audit.log
      timestampExtraction:
        format: "%Y-%m-%dT%H:%M:%SZ"
      sourcetype: kube:oauth-apiserver-audit

  resources:
    requests:
      cpu: 100m
      memory: 200Mi

  buffer:
    "@type": memory
    total_limit_size: 600m
    chunk_limit_size: 20m
    chunk_limit_records: 100000
    flush_interval: 5s
    flush_thread_count: 1
    overflow_action: block
    retry_max_times: 5
    retry_type: periodic

  sendAllMetadata: false

  nodeSelector:
    node-role.kubernetes.io/app: ''

  affinity: {}

  extraVolumes: []
  extraVolumeMounts: []

  priorityClassName:

  kubernetes:
    securityContext: true

splunk-kubernetes-objects:
  enabled: false

splunk-kubernetes-metrics:
  enabled: false

 


hofbr
Engager

This is exactly my question as well.

AHBrook
Path Finder

I should note that I was able to turn off these logs by setting the "monitoring_agent_enabled" value to false in the Helm chart's values.yaml. Interestingly, leaving the field blank defaults it to true, even though I couldn't find that documented anywhere in the reference docs.
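
In values.yaml terms, that looks like the snippet below. The commented-out alternative is untested on my end; I'm going purely off the field names, but it should let you keep the agent on and just route its events into the same index as everything else:

global:
  # Stops the fluentd:monitor-agent events entirely
  monitoring_agent_enabled: false
  # Or keep the agent enabled and send its events to the SCK index instead:
  # monitoring_agent_enabled: true
  # monitoring_agent_index_name: test_ocp_logs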

However, this does not explain what the monitoring agent is, what it collects, or how that data is used. Given that it is grouped with the Prometheus settings, I suspect it assists with the metric display, but again, I could not find any definitions.

hofbr
Engager

Yes, I'm still lost as to where they come from and whether they're also stored somewhere as log files. They're not stored with the container logs.

It looks like it's related to this monitor agent. In the configMap.yaml I can see this filter:

(sorry for formatting... insert code/edit button doesn't seem to work... and no Markdown option?)

{{- if .Values.global.monitoring_agent_enabled }}
# = filters for monitor agent =
<filter monitor_agent>
@type jq_transformer
jq ".record.source = \"namespace:#{ENV['MY_NAMESPACE']}/pod:#{ENV['MY_POD_NAME']}\" | .record.sourcetype = \"fluentd:monitor-agent\" | .record.cluster_name = \"{{ or .Values.kubernetes.clusterName .Values.global.kubernetes.clusterName | default "cluster_name" }}\" | .record.splunk_index = \"{{ or .Values.global.monitoring_agent_index_name .Values.global.splunk.hec.indexName .Values.splunk.hec.indexName | default "main" }}\" {{- if .Values.customMetadata }}{{- range .Values.customMetadata }}| .record.{{ .name }} = \"{{ .value }}\" {{- end }}{{- end }} | .record"
</filter>
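
So these records appear to come from fluentd's built-in monitor_agent plugin, which periodically reports per-plugin throughput counters: emit_records is the number of records the plugin has emitted, emit_size is their byte size (often 0 unless size metrics are enabled), retry_count only applies to output plugins (hence null for a filter), and plugin_id / type / plugin_category identify which plugin the counters belong to. For reference, this is roughly how that source is declared in plain fluentd config; SCK generates its own version, so the exact tag and interval here are guesses on my part:

<source>
  # fluentd's built-in monitoring plugin; with a tag set it emits its
  # plugin metrics as regular events, so they flow through the pipeline
  # (and the filter above) like any other log
  @type monitor_agent
  tag monitor_agent
  emit_interval 60
</source>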

 
