Dashboards & Visualizations

How to parse events in Splunk for more useful dashboard panels

karthiklen
Explorer

Currently it's difficult to parse out the details of cluster events in Splunk to build more useful dashboard panels. I'm looking for suggestions on how to extract, from the event.go events in Splunk, the columns we would see when running "oc get events" on a cluster: namespace, last seen, type, reason, object, message.

Once we can extract those fields and make them available for Splunk stats/tables/timechart, we can put some useful panels together to gauge plant health:

  • Real-time views of container/pod creation, starts, and failures
  • Real-time views of job starts, failures, and completions
  • Real-time views of failed mounts and the types of failures
  • Real-time views of image pulls: successes, backoffs, failures, and denials (a rough panel sketch follows this list)
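
For illustration only, here is a rough sketch of one such panel (image pulls), assuming the index and the reason field extraction discussed in the replies below; the reason values are guesses based on typical kubelet image-pull events and should be adjusted to whatever actually shows up in the data:

index=log-135473-prod event.go
| rex field=_raw "type=\"(?<type>[^\"]+)\"\s+reason=\"(?<reason>[^\"]+)\""
| search reason IN ("Pulling", "Pulled", "BackOff", "ErrImagePull", "Failed")
| timechart span=5m count BY reason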

Any docs, leads, or high-level ideas on how to achieve this would be appreciated.

Sample Events:

12/30/21 1:59:07.000 AM
<135>Dec 30 06:59:07 9000n2.nodes.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-9000n2.nodes.com, message=I1230 06:58:56.139184 1 event.go:291] "Event occurred" object="openshift-logging/elasticsearch-im-infra" kind="CronJob" apiVersion="batch/v1beta1" type="Warning" reason="FailedNeedsStart" message="Cannot determine if job needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew"

12/30/21 1:59:07.000 AM
<135>Dec 30 06:59:07 9000n2.nodes.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-9000n2.nodes.com, message=I1230 06:58:56.133312 1 event.go:291] "Event occurred" object="openshift-logging/elasticsearch-im-audit" kind="CronJob" apiVersion="batch/v1beta1" type="Warning" reason="FailedNeedsStart" message="Cannot determine if job needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew"

karthiklen
Explorer

Thanks much @gcusello 

Yes, your understanding is correct. I need to extract the required fields (namespace, last seen, type, reason, object, message) from the sample log and use those fields to create different new panels.

The search query you provided looks promising and helpful.

Does the query below look good?

index=log-135473-prod event.go NOT "l0.ms.com" | rex field=_raw "^\<\d+\>(?<last_seen>\w+\s+\d+\s+\d+:\d+:\d+).*namespace_name\=(?<namespace_name>[^,]+),\s+container_name\=(?<container_name>[^,]+),\s+pod_name\=(?<pod_name>[^,]+),\s+message\=(?<message1>[^\]]+).*object\=\"(?<object>[^\"]+)\"\s+kind\=\"(?<kind>[^\"]+)\"\s+apiVersion\=\"(?<apiVersion>[^\"]+)\"\s+type\=\"(?<type>[^\"]+)\"\s+reason\=\"(?<reason>[^\"]+)\"\s+message\=\"(?<message2>[^\"]+)\""

Also, how can I get new fields named cluster_namespace=openshift-logging and cluster_podname=elasticsearch-im-infra from the field below, using "/" as the separator?

object="openshift-logging/elasticsearch-im-infra


karthiklen
Explorer

@gcusello @isoutamo Thanks a lot. It helped me filter out the following info (views into failed mounts and the types of failures). However, I'm scratching my head trying to get the following info from the events, and I have no clue how to filter it out:

  • Real-time views of container/pod creation, starts, and failures
  • Real-time views of image pulls: successes, backoffs, failures, and denials

How can I attach the events-list CSV file here?


gcusello
SplunkTrust

Hi @karthiklen,

it isn't so easy to have a real-time view taking logs from a CSV; maybe you should rethink the way the logs are ingested!

Anyway, looking at your few events, I cannot identify the creation/starting/failure events.

If you could share some events for each kind of event I could help you more.

Anyway, you could use the stats command to group events by container/pod and use eval to count only the creation or failure events. For example, if the field that defines the kind of event is "type" (e.g. type=error for failures, type=start for starts, and so on), you could run something like this:

index=log-135473-prod event.go NOT "l0.ms.com" 
| rex field=_raw "^\<\d+\>(?<last_seen>\w+\s+\d+\s+\d+:\d+:\d+).*namespace_name\=(?<namespace_name>[^,]+),\s+container_name\=(?<container_name>[^,]+),\s+pod_name\=(?<pod_name>[^,]+),\s+message\=(?<message1>[^\]]+).*object\=\"(?<object>[^\"]+)\"\s+kind\=\"(?<kind>[^\"]+)\"\s+apiVersion\=\"(?<apiVersion>[^\"]+)\"\s+type\=\"(?<type>[^\"]+)\"\s+reason\=\"(?<reason>[^\"]+)\"\s+message\=\"(?<message2>[^\"]+)\""
| rex field=object "^(?<cluster_namespace>[^\/]+)\/(?<cluster_podname>.*)"
| stats count(eval(type="error")) AS failures count(eval(type="start")) AS startings BY container_name pod_name

This is a sample to guide you.
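
As a hedged adaptation: in the sample events shared below, type only carries "Normal"/"Warning" and the create outcome is in reason ("SuccessfulCreate"/"FailedCreate"), so the final stats line could be adjusted to something like this, reusing the cluster_namespace/cluster_podname fields extracted above:

| stats count(eval(reason="FailedCreate")) AS failed_creates count(eval(reason="SuccessfulCreate")) AS successful_creates BY cluster_namespace cluster_podname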

Ciao.

Giuseppe

karthiklen
Explorer

Here are some events with Failed/SuccessfulCreate. The challenge is that we need to filter them out and build stats of the 'Failed/SuccessfulCreate' events of kind=ReplicaSet/StatefulSet/Deployment/DaemonSet.

Attached are the raw events from one of the Kubernetes clusters. The basic idea is to get pod/container failure/create statistics in Splunk, like we get from 'kubectl get events'.

 

<135>Jan 6 10:39:26 control1.ai1-dev.dd.k8s.c0.ms.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-control1.ai1-dev.dd.k8s.c0.ms.com, message=I0106 10:38:56.512561 1 event.go:291] "Event occurred" object="clp-monitoring/loki-distributed-gateway-6bcfd9dc99" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: admission webhook \"endorse-validating-webhook.ai1-dev.dd.k8s.c0.ms.com\" denied the request: Denying image infra1.kod.ms.com:5000/nginxinc/nginx-unprivileged:1.19-alpine from unrecognized image registry infra1.kod.ms.com:5000."

<135>Jan 6 10:39:26 control1.ai1-dev.dd.k8s.c0.ms.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-control1.ai1-dev.dd.k8s.c0.ms.com, message=I0106 10:38:56.500812 1 event.go:291] "Event occurred" object="loki-distributed/loki-loki-distributed-gateway-599d76c47c" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"loki-loki-distributed-gateway-599d76c47c-\" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{1001160000}: 1001160000 is not an allowed group spec.containers[0].securityContext.runAsUser: Invalid value: 1001160000: must be in the ranges: [1001040000, 1001049999]]"


<135>Jan 6 10:39:26 control1.ai1-dev.dd.k8s.c0.ms.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-control1.ai1-dev.dd.k8s.c0.ms.com, message=I0106 10:38:56.499675 1 event.go:291] "Event occurred" object="loki-distributed/loki-loki-distributed-distributor-c886b96fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"loki-loki-distributed-distributor-c886b96fc-\" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{1001160000}: 1001160000 is not an allowed group spec.containers[0].securityContext.runAsUser: Invalid value: 1001160000: must be in the ranges: [1001040000, 1001049999]]"

<135>Jan 6 10:36:51 control1.app9.hz.k8s.c0.ms.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-control1.app9.hz.k8s.c0.ms.com, message=I0106 10:36:10.686055 1 event.go:291] "Event occurred" object="tigera-dex/tigera-dex-9d895b785" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: tigera-dex-9d895b785-9jdgv"

<135>Jan 6 10:19:08 control3.stepping-stone1-dev.dd.k8s.c0.ms.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-control3.stepping-stone1-dev.dd.k8s.c0.ms.com, message=I0106 10:18:48.721499 1 event.go:291] "Event occurred" object="git-mirror/git-mirror-morgan-stanley-cloud-git-mirror-0" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="AttachVolume.Attach failed for volume \"pvc-9361ced0-07fe-4212-9e7d-9efdc6369fd0\" : CSINode dd9002c17n1.nodes.c0.ms.com does not contain driver csi.trident.netapp.io"

<135>Jan 6 14:04:23 control3.ai2-dev.dd.k8s.c0.ms.com fluentd: docker:{"container_id"=>"cd60f994892219216651d53275d0eb4a1d1fee53cfd6f4ba50c48711297ee0d3"} kubernetes:{"container_name"=>"kube-controller-manager", "namespace_name"=>"openshift-kube-controller-manager", "pod_name"=>"kube-controller-manager-control3.ai2-dev.dd.k8s.c0.ms.com", "pod_id"=>"8429ce46-b305-4691-9258-98a7acb24e39", "host"=>"control3.ai2-dev.dd.k8s.c0.ms.com", "master_url"=>"https://kubernetes.default.svc", "namespace_id"=>"13d0f6f3-67a7-4f90-90b5-20f0311a4c9c", "namespace_labels"=>{"openshift_io/cluster-monitoring"=>"true", "openshift_io/run-level"=>"0"}, :flat_labels=>["app=kube-controller-manager", "kube-controller-manager=true", "revision=15"]} message:I0106 14:04:21.415065 1 event.go:291] "Event occurred" object="cps/prometheus-xiaomin-test-o11y-prometheus-server-6c65f45c79" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"prometheus-xiaomin-test-o11y-prometheus-server-6c65f45c79-\" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{65534}: 65534 is not an allowed group pod.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod: Forbidden: seccomp may not be set spec.containers[0].securityContext.runAsUser: Invalid value: 65535: must be in the ranges: [1000840000, 1000849999] pod.metadata.annotations.container.seccomp.security.alpha.kubernetes.io/o11y-prometheus-server: Forbidden: seccomp may not be set]" level:unknown hostname:control3.ai2-dev.dd.k8s.c0.ms.com pipeline_metadata:{"collector"=>{"ipaddr4"=>"10.85.166.220", "inputname"=>"fluent-plugin-systemd", "name"=>"fluentd", "received_at"=>"2022-01-06T14:04:22.323401+00:00", "version"=>"1.7.4 1.6.0"}} @timestamp:2022-01-06T14:04:21.415092+00:00 viaq_index_name:infra-write viaq_msg_id:ZThjZjliMzYtZWY4NS00N2FmLWE5MTgtOGRmMTY4NWQ1MmMw


karthiklen
Explorer

I'd appreciate any suggestions on this, please.

 


isoutamo
SplunkTrust

I'm not sure if this is suitable, but you could try this.

| makeresults
| eval _raw="<135>Jan 6 10:39:26 control1.ai1-dev.dd.k8s.c0.ms.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-control1.ai1-dev.dd.k8s.c0.ms.com, message=I0106 10:38:56.512561 1 event.go:291] \"Event occurred\" object=\"clp-monitoring/loki-distributed-gateway-6bcfd9dc99\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: admission webhook \\\"endorse-validating-webhook.ai1-dev.dd.k8s.c0.ms.com\\\" denied the request: Denying image infra1.kod.ms.com:5000/nginxinc/nginx-unprivileged:1.19-alpine from unrecognized image registry infra1.kod.ms.com:5000.\"
<135>Jan 6 10:39:26 control1.ai1-dev.dd.k8s.c0.ms.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-control1.ai1-dev.dd.k8s.c0.ms.com, message=I0106 10:38:56.500812 1 event.go:291] \"Event occurred\" object=\"loki-distributed/loki-loki-distributed-gateway-599d76c47c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"loki-loki-distributed-gateway-599d76c47c-\\\" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{1001160000}: 1001160000 is not an allowed group spec.containers[0].securityContext.runAsUser: Invalid value: 1001160000: must be in the ranges: [1001040000, 1001049999]]\"
<135>Jan 6 10:39:26 control1.ai1-dev.dd.k8s.c0.ms.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-control1.ai1-dev.dd.k8s.c0.ms.com, message=I0106 10:38:56.499675 1 event.go:291] \"Event occurred\" object=\"loki-distributed/loki-loki-distributed-distributor-c886b96fc\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"loki-loki-distributed-distributor-c886b96fc-\\\" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{1001160000}: 1001160000 is not an allowed group spec.containers[0].securityContext.runAsUser: Invalid value: 1001160000: must be in the ranges: [1001040000, 1001049999]]\"
<135>Jan 6 10:36:51 control1.app9.hz.k8s.c0.ms.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-control1.app9.hz.k8s.c0.ms.com, message=I0106 10:36:10.686055 1 event.go:291] \"Event occurred\" object=\"tigera-dex/tigera-dex-9d895b785\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: tigera-dex-9d895b785-9jdgv\"
<135>Jan 6 10:19:08 control3.stepping-stone1-dev.dd.k8s.c0.ms.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-control3.stepping-stone1-dev.dd.k8s.c0.ms.com, message=I0106 10:18:48.721499 1 event.go:291] \"Event occurred\" object=\"git-mirror/git-mirror-morgan-stanley-cloud-git-mirror-0\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"AttachVolume.Attach failed for volume \\\"pvc-9361ced0-07fe-4212-9e7d-9efdc6369fd0\\\" : CSINode dd9002c17n1.nodes.c0.ms.com does not contain driver csi.trident.netapp.io\"
<135>Jan 6 14:04:23 control3.ai2-dev.dd.k8s.c0.ms.com fluentd: docker:{\"container_id\"=>\"cd60f994892219216651d53275d0eb4a1d1fee53cfd6f4ba50c48711297ee0d3\"} kubernetes:{\"container_name\"=>\"kube-controller-manager\", \"namespace_name\"=>\"openshift-kube-controller-manager\", \"pod_name\"=>\"kube-controller-manager-control3.ai2-dev.dd.k8s.c0.ms.com\", \"pod_id\"=>\"8429ce46-b305-4691-9258-98a7acb24e39\", \"host\"=>\"control3.ai2-dev.dd.k8s.c0.ms.com\", \"master_url\"=>\"https://kubernetes.default.svc\", \"namespace_id\"=>\"13d0f6f3-67a7-4f90-90b5-20f0311a4c9c\", \"namespace_labels\"=>{\"openshift_io/cluster-monitoring\"=>\"true\", \"openshift_io/run-level\"=>\"0\"}, :flat_labels=>[\"app=kube-controller-manager\", \"kube-controller-manager=true\", \"revision=15\"]} message:I0106 14:04:21.415065 1 event.go:291] \"Event occurred\" object=\"cps/prometheus-xiaomin-test-o11y-prometheus-server-6c65f45c79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"prometheus-xiaomin-test-o11y-prometheus-server-6c65f45c79-\\\" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{65534}: 65534 is not an allowed group pod.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod: Forbidden: seccomp may not be set spec.containers[0].securityContext.runAsUser: Invalid value: 65535: must be in the ranges: [1000840000, 1000849999] pod.metadata.annotations.container.seccomp.security.alpha.kubernetes.io/o11y-prometheus-server: Forbidden: seccomp may not be set]\" level:unknown hostname:control3.ai2-dev.dd.k8s.c0.ms.com pipeline_metadata:{\"collector\"=>{\"ipaddr4\"=>\"10.85.166.220\", \"inputname\"=>\"fluent-plugin-systemd\", \"name\"=>\"fluentd\", \"received_at\"=>\"2022-01-06T14:04:22.323401+00:00\", \"version\"=>\"1.7.4 1.6.0\"}} @timestamp:2022-01-06T14:04:21.415092+00:00 viaq_index_name:infra-write viaq_msg_id:ZThjZjliMzYtZWY4NS00N2FmLWE5MTgtOGRmMTY4NWQ1MmMw"
| multikv noheader=t
| extract
| rex "^<\d+>(?<time>\w+\s+\d+\s+\d+:\d+:\d+)\s+(?<host>[^\s]+)"
| eval _time = strptime(time, "%b %e %H:%M:%S")
| fields - time Column*
| table _time host kind reason * _raw
``` Above makes test events ```
| stats count by kind reason
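
Once the extraction works against the indexed data, a rough sketch of the ReplicaSet/StatefulSet/Deployment/DaemonSet create statistics asked about could look like this (index and NOT filter taken from the earlier searches in this thread; only ReplicaSet actually appears in the samples, the other kind values come from the question and may need adjusting):

index=log-135473-prod event.go NOT "l0.ms.com"
| rex field=_raw "object=\"(?<object>[^\"]+)\"\s+kind=\"(?<kind>[^\"]+)\"\s+apiVersion=\"(?<apiVersion>[^\"]+)\"\s+type=\"(?<type>[^\"]+)\"\s+reason=\"(?<reason>[^\"]+)\""
| search kind IN ("ReplicaSet", "StatefulSet", "Deployment", "DaemonSet") reason IN ("FailedCreate", "SuccessfulCreate")
| stats count BY kind reason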

 

Maybe you should try some other way to ingest that data, as @gcusello proposed. On Splunkbase there seem to be at least two different apps/TAs for analysing and monitoring k8s logs.

r. Ismo


karthiklen
Explorer

Thanks @isoutamo  

I have already got all the fields extracted. Now I'm having trouble filtering out exactly the pod-related events and building pod/container failure/create statistics in Splunk, like we get from 'kubectl get events'.

@gcusello Do we have any apps/TAs to analyze and monitor k8s logs? All I need is a stats/table of pod-related events (failures, created, ImagePullBackOff, etc.) of kind=ReplicaSet/StatefulSet/Deployment.

Attached are the raw events from one of the Kubernetes clusters.


gcusello
SplunkTrust

Hi @karthiklen,

search apps.splunk.com for the Apps and TAs for Azure; you'll probably find what you need.

E.g., there's the Microsoft Azure App for Splunk (https://splunkbase.splunk.com/app/4882/) that should answer your need; pay attention to the requirements of this app, especially in terms of which add-ons to install and how to configure it.

Ciao.

Giuseppe


karthiklen
Explorer

Thanks @gcusello 

Just to clarify: my firm doesn't allow installing external apps to achieve this.

Shall I request a search query in secure-splunk itself to achieve this? As I explained earlier, I just need stats for pod health from the events already present in Splunk.


gcusello
SplunkTrust

Hi @karthiklen,

even if you cannot install new apps or TAs, you can find in those apps the configurations and dashboards useful for your requirements, instead of having to study the Azure logs yourself!

Ciao.

Giuseppe


isoutamo
SplunkTrust
Maybe you could download it to your own workstation, then look at it and use it as a "source of inspiration" ;-)

isoutamo
SplunkTrust

Hi

You can get those with this

| makeresults
| eval _raw="<135>Dec 30 06:59:07 9000n2.nodes.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-9000n2.nodes.com, message=I1230 06:58:56.139184 1 event.go:291] \"Event occurred\" object=\"openshift-logging/elasticsearch-im-infra\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Warning\" reason=\"FailedNeedsStart\" message=\"Cannot determine if job needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew\"
<135>Dec 30 06:59:07 9000n2.nodes.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-9000n2.nodes.com, message=I1230 06:58:56.133312 1 event.go:291] \"Event occurred\" object=\"openshift-logging/elasticsearch-im-audit\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Warning\" reason=\"FailedNeedsStart\" message=\"Cannot determine if job needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew\""
| multikv noheader=t
``` above generate sample data based on your example. You should change this to your base query```

| rex field=_raw "^\<\d+\>(?<last_seen>\w+\s+\d+\s+\d+:\d+:\d+).* namespace_name=(?<namespace_name>[^,]+),\s+container_name=(?<container_name>[^,]+),\s+pod_name=(?<pod_name>[^,]+),\s+message=(?<message1>[^\]]+).*object=\"(?<object>[^\"]+)\"\s+kind=\"(?<kind>[^\"]+)\"\s+apiVersion=\"(?<apiVersion>[^\"]+)\"\s+type=\"(?<type>[^\"]+)\"\s+reason=\"(?<reason>[^\"]+)\"\s+message=\"(?<message2>[^\"]+)\""
| rex field=object "(?<cluster_namespace>[^/]+)/(?<cluster_podname>(.*))"
| table namespace_name, last_seen, type, reason, object, cluster_namespace, cluster_podname,message1, message2
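
To get closer to the 'oc get events' layout from the original question (namespace, last seen, type, reason, object, message, plus a count for repeated events), one possible follow-on to the extraction above, assuming repeated occurrences share the same object and reason:

| stats latest(last_seen) AS last_seen count BY cluster_namespace type reason object message2
| sort - count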

r. Ismo 

gcusello
SplunkTrust

Hi @karthiklen,

if you're interested in keeping the object field, you could use something like this:

 

index=log-135473-prod event.go NOT "l0.ms.com" 
| rex field=_raw "^\<\d+\>(?<last_seen>\w+\s+\d+\s+\d+:\d+:\d+).*namespace_name\=(?<namespace_name>[^,]+),\s+container_name\=(?<container_name>[^,]+),\s+pod_name\=(?<pod_name>[^,]+),\s+message\=(?<message1>[^\]]+).*object\=\"(?<object>[^\"]+)\"\s+kind\=\"(?<kind>[^\"]+)\"\s+apiVersion\=\"(?<apiVersion>[^\"]+)\"\s+type\=\"(?<type>[^\"]+)\"\s+reason\=\"(?<reason>[^\"]+)\"\s+message\=\"(?<message2>[^\"]+)\""
| rex field=object "^(?<cluster_namespace>[^\/]+)\/(?<cluster_podname>.*)"

 

or, if you prefer to use one single regex, you could use:

 

^\<\d+\>(?<last_seen>\w+\s+\d+\s+\d+:\d+:\d+).*namespace_name\=(?<namespace_name>[^,]+),\s+container_name\=(?<container_name>[^,]+),\s+pod_name\=(?<pod_name>[^,]+),\s+message\=(?<message1>[^\]]+).*object\=\"(?<cluster_namespace>[^\/]+)\/(?<cluster_podname>[^\"]+)\"\s+kind\=\"(?<kind>[^\"]+)\"\s+apiVersion\=\"(?<apiVersion>[^\"]+)\"\s+type\=\"(?<type>[^\"]+)\"\s+reason\=\"(?<reason>[^\"]+)\"\s+message\=\"(?<message2>[^\"]+)\"

 

You can test it at https://regex101.com/r/DyPs7h/1

Ciao.

Giuseppe

P.S.: Karma Points are appreciated 😉

gcusello
SplunkTrust

Hi @karthiklen,

let me understand: do you have problems extracting the needed fields from your logs, or is it something else?

If this is your need, please try this regex to create a field extraction to use in all your panels:

^\<\d+\>(?<last_seen>\w+\s+\d+\s+\d+:\d+:\d+).*namespace_name\=(?<namespace_name>[^,]+),\s+container_name\=(?<container_name>[^,]+),\s+pod_name\=(?<pod_name>[^,]+),\s+message\=(?<message1>[^\]]+).*object\=\"(?<object>[^\"]+)\"\s+kind\=\"(?<kind>[^\"]+)\"\s+apiVersion\=\"(?<apiVersion>[^\"]+)\"\s+type\=\"(?<type>[^\"]+)\"\s+reason\=\"(?<reason>[^\"]+)\"\s+message\=\"(?<message2>[^\"]+)\"

You can test it at https://regex101.com/r/OmVEZl/1
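
If the extraction should be permanent rather than pasted into every search, and you are allowed to change configuration, a minimal props.conf sketch along these lines might work; the sourcetype name below is only a placeholder for whatever your Kubernetes events are actually indexed as:

# props.conf on the search head; [kube:container:events] is a placeholder sourcetype
[kube:container:events]
EXTRACT-k8s_event = ^\<\d+\>(?<last_seen>\w+\s+\d+\s+\d+:\d+:\d+).*namespace_name\=(?<namespace_name>[^,]+),\s+container_name\=(?<container_name>[^,]+),\s+pod_name\=(?<pod_name>[^,]+),\s+message\=(?<message1>[^\]]+).*object\=\"(?<object>[^\"]+)\"\s+kind\=\"(?<kind>[^\"]+)\"\s+apiVersion\=\"(?<apiVersion>[^\"]+)\"\s+type\=\"(?<type>[^\"]+)\"\s+reason\=\"(?<reason>[^\"]+)\"\s+message\=\"(?<message2>[^\"]+)\"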

Ciao.

Giuseppe
