All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I need to color the field "sante" in red if its value is "Etat dégradé" and green if its value is "Etat stable".
| stats count(hang_process_name) AS hang | eval sante=if(hang>0, "Etat dégradé", "Etat stable") | table sante | rangemap field=sante low=0-0 default=severe
What is wrong, please?
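A minimal sketch of one likely fix (note that rangemap categorizes numeric fields, so pointing it at the string field sante matches nothing): run rangemap against the numeric hang count and keep the resulting range field alongside sante.

| stats count(hang_process_name) AS hang
| eval sante=if(hang>0, "Etat dégradé", "Etat stable")
| rangemap field=hang low=0-0 default=severe
| table sante range

The red/green colors themselves would then be applied through the dashboard table's color formatting, keyed on range (or directly on the sante value).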
Hi, We are setting up a Splunk infrastructure where we would like to redirect events coming into particular indexes to an external SOC. For example, logs from multiple firewall technologies would be put into the index "clientX_firewall" by an SC4S, and this whole index would have to be forwarded to both my indexing tier and the external SOC, whatever the sourcetype / host / source. Is there a way to properly redirect this whole index, without having to specify the source / host / sourcetype for each type involved? Thanks for your help.
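For what it's worth, the pattern usually suggested for this is selective routing on the index metadata key, applied on the tier that parses the data (the SC4S heavy forwarder or the indexers). A rough sketch only; the stanza names, group names, and addresses below are placeholders, and applying the transform under [default] means it is evaluated for every event:

props.conf
[default]
TRANSFORMS-soc_routing = route_clientx_firewall_to_soc

transforms.conf
[route_clientx_firewall_to_soc]
SOURCE_KEY = _MetaData:Index
REGEX = ^clientX_firewall$
DEST_KEY = _TCP_ROUTING
FORMAT = primary_indexers,external_soc

outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997

[tcpout:external_soc]
server = soc.example.com:9997

With defaultGroup set, everything else keeps flowing only to the indexing tier, while events whose index matches the regex are cloned to both output groups, regardless of sourcetype, host, or source.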
Hi, I have an SBC (Session Border Controller) which performs LDAP searches and writes syslog about them. I'm trying to get statistics on how long the searches have taken during the day. Based on the forum discussions I ended up with the following search string:
"recv <-- acEV_LDAP_SEARCH_RESULT" OR "send --> LDAP SearchID" | transaction SID | table by duration
So this is good and working. The extra challenge comes when I'm not interested in all the LDAP searches, only those which contain a certain search (one containing a phone number). I tried to change the search like this:
"recv <-- acEV_LDAP_SEARCH_RESULT" OR "send --> LDAP SearchID:-100 key:msRTCSIP-Line=tel:+" | transaction SID keeporphans=false | table by duration
with the idea that if there is no pair for "recv <-- acEV_LDAP_SEARCH_RESULT", then that SID should be skipped. But so far, no luck. An alternative way I tried was to use a third log line as well:
"recv <-- acEV_LDAP_SEARCH_RESULT" OR "send --> LDAP SearchID" OR "Query LDAP for msRTCSIP-Line=tel:+" | transaction SID | table by duration
But that did not work for me either. The SID is the thread ID (or session ID) that ties them together. Does anybody have thoughts on how this could be done? Can I control transaction in a way that both (or all three) log lines are mandatory?
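One sketch worth trying (eventcount is a field transaction produces itself; the regex pattern is an assumption about the raw text): keep the broad pair-building search and filter the assembled transactions afterwards, since transaction merges each SID's raw events into one multi-line event.

"recv <-- acEV_LDAP_SEARCH_RESULT" OR "send --> LDAP SearchID"
| transaction SID
| where eventcount > 1
| regex _raw="msRTCSIP-Line=tel:\+"
| table SID duration

The eventcount filter keeps only transactions where both lines were found, and the regex keeps only those whose merged text mentions the phone-number search.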
Hello, We have a PowerShell script job (xyz.ps1) that runs on all hosts every 10 minutes and, when it starts, writes a message to the Windows Application event log: "Beginning of xyz.ps1 Execution". We found that xyz.ps1 sometimes gets stuck in a weird state and we don't see the message in the last 60 minutes for some hosts. I was able to create an alert where I get the list of hosts that do show that message. But what I am actually looking for is this: I want to set an alert in Splunk which will report the host names where we don't see the "Beginning of xyz.ps1 Execution" message in the last 60 minutes, so that I'll know on which hosts the script didn't execute well. Search:
index=ABC source="xyz.ps1" host=WWW-* "Beginning of xyz.ps1 Execution" | table _time host | dedup host | eval age=now()-_time | where age > 60
Is the above search correct? Thanks for your suggestions.
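Two details in that draft are worth checking: age is in seconds, so where age > 60 means older than one minute rather than one hour, and table/dedup may not keep each host's most recent event. A minimal sketch of the same idea, under the same index/source/host assumptions as above:

index=ABC source="xyz.ps1" host=WWW-* "Beginning of xyz.ps1 Execution"
| stats latest(_time) AS last_seen BY host
| eval age=now()-last_seen
| where age > 3600
| table host last_seen age

One caveat: a host that has not logged the message at all inside the search window will not appear in the results either; catching those usually means comparing against a lookup of expected hosts.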
Hi Team, We are considering adopting the splunk-connect-for-syslog (SC4S) app. Can it be installed on a Splunk UF, or only on a Splunk Heavy Forwarder?
Hello, I would like to know if it is safe to delete the below path on all of our Splunk hosts: /opt/splunk/var/run/searchpeers/<hostname>-1633305600/apps/splunk_archiver/java-bin/jars/vendors/spark/3.0.1/lib/ Similar files exist on a lot of our Splunk hosts and we get notifications about them daily because of log4j. So is it safe to delete the above path and similar ones? It is just replication, right? Thanks in advance!
I wanted to drop a log message in syslog-ng, and I tried the below way to drop them, but it seems it doesn't work. Could you help if there is another way? To skip the processing of a message without sending it to a destination, create a log statement with the appropriate filters, but do not include any destination in the statement, and use the final flag. Example: skipping messages. The following log statement drops all debug level messages without any further processing. filter demo_debugfilter { level(debug); }; log { source(s_all); filter(demo_debugfilter); flags(final); };
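For dropping specific messages rather than a whole severity level, the same docs pattern still applies, but with a content filter, and the dropping log statement has to appear before the log statements that have destinations. A rough sketch; the filter name and match string are placeholders:

filter f_drop_unwanted { message("text to drop"); };
log { source(s_all); filter(f_drop_unwanted); flags(final); };

If this log path sits below the path that forwards to your destination, the message has already been delivered by the time flags(final) takes effect, which is the usual reason this approach appears not to work.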
I have a list of IP addresses in a lookup table that are network scanners. I am trying to build a search that excludes the IP addresses in this lookup table, but for some reason my search keeps including IP address values that are clearly present in the lookup. I tried putting quotes around the IP addresses ("1.2.3.4") and tried without quotes (1.2.3.4), but nothing works. The raw data does not have quotes. After having tried enough combinations, I am hoping someone can help me. Eventually, I'll be adding the remaining IPs to the lookup table via outputlookup append=true, but until I can get this working... I'm stuck.
index=foo sourcetype=bar NOT [| inputlookup network_scanners | table IpAddress] | dedup IpAddress | table IpAddress
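Two things that commonly explain this (a sketch only): the subsearch is expanded into field=value pairs, so the field name it returns must match the event field exactly, case included, and stray whitespace in the lookup values breaks exact matching. One way to normalise the values on the way out of the lookup:

index=foo sourcetype=bar NOT [| inputlookup network_scanners | eval IpAddress=trim(IpAddress) | fields IpAddress]
| dedup IpAddress
| table IpAddress

If the events' IP field is not literally named IpAddress, add | rename IpAddress AS <event_field_name> at the end of the subsearch; running the subsearch on its own with | format appended also shows exactly what it expands to, which makes mismatches easy to spot.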
Hello, I am having a hard time installing the Windows agent and getting it running and connected to the portal, and I am having challenges locating working documentation. When I look at the prerequisites, I see some broken links to downloads, like the Java agent: https://docs.appdynamics.com/4.5.x/en/application-monitoring/install-app-server-agents/java-agent/install-the-java-agent I used this document but I seem to be stuck and have made no progress: https://docs.appdynamics.com/4.5.x/en/infrastructure-visibility/network-visibility/network-visibility-requirements-and-supported-environments Also, if someone has a video with the correct link, that would be a plus. Good day.
index=VulnerabilityManagement Sourcetype=* | fields dept=HR vuln=* PC=*
I want statistics showing a list of HR's vulnerabilities and the associated PCs. I'm new, hopefully this makes sense. I just want a basic statistics page that I can put on a dashboard showing the list of PC vulnerabilities in this dept, and remove any rows that are missing either the vulnerability or the PC. The statistics would show:
Vulnerabilities                            PC
CVE-Malware Boogy              CEOPC1234
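A minimal sketch of that statistics search, assuming the extracted field names really are dept, vuln and PC (filters like dept=HR belong in the base search rather than in a fields command, and the vuln=* PC=* terms already drop rows missing either value):

index=VulnerabilityManagement sourcetype=* dept=HR vuln=* PC=*
| dedup vuln PC
| table vuln PC
| rename vuln AS Vulnerabilities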
Hi, I have a search that produces the following table:

Organization|Amount|AcquirerBank
Or_A |2000 |1234
Or_A |4000 |2345
Or_B |1200 |3456
     |4020 |4567
Or_C |1456 |5678

And then I have a csv file that provides the bank code and the bank name as a mapping:

AcquirerBank|BankName
1234 |BankA
2345 |BankB
4567 |BankC
5678 |BankD

The target table should look something like this:

Organization|Amount|AcquirerBank|BankName
Or_A |2000 |1234 |BankA
Or_A |4000 |2345 |BankB
Or_B |1200 |3456 |
     |4020 |4567 |BankC
Or_C |1456 |5678 |BankD

I tried to use join like this:

index=index |table Organization, Amount, AcquirerBank |join AcquirerBank [inputlookup bank_mapping.csv |table AcquirerBank, BankName] |table Organization, Amount, AcquirerBank, BankName

But I encounter 2 problems: 1. My index has around a million events, and join has a limit on the number of events it can join, so my result table was missing results. 2. Also, join doesn't return rows when the mapping csv doesn't have the data; as in the example above, if I use join, Or_B, whose AcquirerBank doesn't exist in the mapping csv, will not show up. Does anyone have an alternative to join that can resolve the above problems? Thank you in advance.
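A sketch of the usual alternative, assuming bank_mapping.csv is available as a lookup table file in Splunk: the lookup command behaves like a left join, so it is not subject to join's subsearch limits, and rows whose AcquirerBank has no match simply get an empty BankName instead of being dropped.

index=index
| table Organization, Amount, AcquirerBank
| lookup bank_mapping.csv AcquirerBank OUTPUT BankName
| table Organization, Amount, AcquirerBank, BankName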
Hi all, I'm having the following error monitoring a Python application. It used to work OK before; now nothing is showing on my dashboard. All I could find was this error:
00:19:02,223 ERROR [AD Thread Pool-ProxyAsyncMsg1] AgentProxyService - No RequestData found in the cache for request:975533271712433961. The entry might have been deleted by the timer.
python37, CentOS 8.4. Regards, c
I have a Splunk query that returns results like this. I want to modify the query so that I get the latest row for UtilityJarVersion when everything else (the other column values) is the same. How can I modify my query to get the result I need?

_time | BitBucket_Project | MicroserviceName | Env | UtilityJarVersion
1/13/22 4:09 PM | bb-project1 | microservice1 | DEV | 1.0.105
1/11/22 6:39 AM | bb-project2 | microservice2 | DEV | 1.0.105
1/12/22 11:22 AM | bb-project2 | microservice2 | DEV | 1.0.106
1/12/22 7:00 PM | bb-project3 | microservice3 | DEV | 1.0.106
1/12/22 9:28 AM | bb-project3 | microservice4 | DEV | 1.0.106
1/12/22 6:33 PM | bb-project4 | microservice5 | DEV | 1.0.106
1/11/22 6:40 AM | bb-project5 | microservice6 | DEV | 1.0.105
1/12/22 6:43 PM | bb-project5 | microservice6 | DEV | 1.0.106

That is, my expected result would look like:

_time | BitBucket_Project | MicroserviceName | Env | UtilityJarVersion
1/13/22 4:09 PM | bb-project1 | microservice1 | DEV | 1.0.105
1/12/22 11:22 AM | bb-project2 | microservice2 | DEV | 1.0.106
1/12/22 7:00 PM | bb-project3 | microservice3 | DEV | 1.0.106
1/12/22 9:28 AM | bb-project3 | microservice4 | DEV | 1.0.106
1/12/22 6:33 PM | bb-project4 | microservice5 | DEV | 1.0.106
1/12/22 6:43 PM | bb-project5 | microservice6 | DEV | 1.0.106

Thank you
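A minimal sketch of one way to get that, assuming the existing search already produces those five columns (the first line stands in for the current query): group by everything except time and version, and take the values from the most recent event in each group.

... your existing search ...
| stats latest(_time) AS _time, latest(UtilityJarVersion) AS UtilityJarVersion BY BitBucket_Project, MicroserviceName, Env
| table _time, BitBucket_Project, MicroserviceName, Env, UtilityJarVersion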
Hi everyone, I would like to know if Splunk can get logs from Linux distributions running under Windows Subsystem for Linux (WSL 2). If yes, which logs, and how? Thank you!
I have a published app on SplunkBase which is designed to pull event data via API from an application that I publish to Splunk with. It's been working fine for several years. A recent request from users is for more real-time data, which would require me to pull data from the API. It's not really suitable for logging; where I have done this before, in an iPhone app, I held the data in RAM as opposed to it being "logged". Ideally I would want to pull the data once and have the response shared across multiple users, as opposed to many users each individually polling the data. Is this something that is possible with a Splunk app?
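One pattern that may fit (a sketch, and every name below is hypothetical): give the app a single scripted or modular input that polls the API on an interval and writes the response once, either to an index or to a KV store collection; every user then reads the shared copy instead of polling individually. An inputs.conf stanza for the scripted-input variant might look like this:

[script://$SPLUNK_HOME/etc/apps/my_app/bin/poll_api.py]
interval = 60
index = my_app_data
sourcetype = my_app:api
disabled = 0

Searches and dashboards in the app then query index=my_app_data, so one poll serves every user.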
Hey everyone! I've successfully set up a link from Splunk Connect for Kubernetes on our OpenShift environment. It outputs to a local Heavy forwarder, which then splits the data stream and sends to our on-prem Splunk instance and a proof of concept Splunk Cloud instance (which we're hopefully going to be moving towards in the future). I have the system setup so that it sends most of its logs to an index called "test_ocp_logs". This covers cases in the format of [ocp:container:ContainerName]. However, I am getting a strange log into our root "test" index, which I have set up as the baseline default in the configuration.  These have the following info: source = namespace:splunkconnect/pod:splunkconnect-splunk-kubernetes-logging-XXXXX sourcetype = fluentd:monitor-agent These look like some kind of report on what the SCK system grabbed and processed, but I can't seem to find any kind of definition anywhere. Here's what one of the events looks like :   { [-] emit_records: 278304 emit_size: 0 output_plugin: false plugin_category: filter plugin_id: object:c760 retry_count: null type: jq_transformer }     So I have a few main questions: What is this log, and is it something we should care about? If we should care about this, what do the fields mean? If we should care about this, how do I direct where it goes so that I keep all my SCK/OpenShift events kept in the same index (at least for now)? For reference, this is the contents of my values.yaml for the helm chart to build SCK:   global: logLevel: info splunk: hec: host: REDACTED port: 8088 token: REDACTED protocol: indexName: test insecureSSL: true clientCert: clientKey: caFile: indexRouting: kubernetes: clusterName: "paas02-t" prometheus_enabled: monitoring_agent_enabled: monitoring_agent_index_name: serviceMonitor: enabled: false metricsPort: 24231 interval: "" scrapeTimeout: "10s" additionalLabels: { } splunk-kubernetes-logging: enabled: true logLevel: fluentd: # Resticting to APP logs only for the proof of concept path: /var/log/containers/*APP*.log exclude_path: - /var/log/containers/kube-svc-redirect*.log - /var/log/containers/tiller*.log - /var/log/containers/*_kube-system_*.log # ignoring internal Openshift Logging generated errors - /var/log/containers/*_openshift-logging_*.log containers: path: /var/log pathDest: /var/lib/docker/containers logFormatType: cri logFormat: "%Y-%m-%dT%H:%M:%S.%N%:z" refreshInterval: k8sMetadata: podLabels: - app - k8s-app - release watch: true cache_ttl: 3600 sourcetypePrefix: "ocp" rbac: create: true openshiftPrivilegedSccBinding: true serviceAccount: create: true name: splunkconnect podSecurityPolicy: create: false apparmor_security: true splunk: hec: host: port: token: protocol: indexName: test_ocp_logs insecureSSL: clientCert: clientKey: caFile: journalLogPath: /run/log/journal charEncodingUtf8: false logs: docker: from: journald: unit: docker.service timestampExtraction: regexp: time="(?<time>\d{4}-\d{2}-\d{2}T[0-2]\d:[0-5]\d:[0-5]\d.\d{9}Z)" format: "%Y-%m-%dT%H:%M:%S.%NZ" sourcetype: kube:docker kubelet: &glog from: journald: unit: kubelet.service timestampExtraction: regexp: \w(?<time>[0-1]\d[0-3]\d [^\s]*) format: "%m%d %H:%M:%S.%N" multiline: firstline: /^\w[0-1]\d[0-3]\d/ sourcetype: kube:kubelet etcd: from: pod: etcd-server container: etcd-container timestampExtraction: regexp: (?<time>\d{4}-\d{2}-\d{2} [0-2]\d:[0-5]\d:[0-5]\d\.\d{6}) format: "%Y-%m-%d %H:%M:%S.%N" etcd-minikube: from: pod: etcd-minikube container: etcd timestampExtraction: regexp: (?<time>\d{4}-\d{2}-\d{2} 
[0-2]\d:[0-5]\d:[0-5]\d\.\d{6}) format: "%Y-%m-%d %H:%M:%S.%N" etcd-events: from: pod: etcd-server-events container: etcd-container timestampExtraction: regexp: (?<time>\d{4}-[0-1]\d-[0-3]\d [0-2]\d:[0-5]\d:[0-5]\d\.\d{6}) format: "%Y-%m-%d %H:%M:%S.%N" kube-apiserver: <<: *glog from: pod: kube-apiserver sourcetype: kube:kube-apiserver kube-scheduler: <<: *glog from: pod: kube-scheduler sourcetype: kube:kube-scheduler kube-controller-manager: <<: *glog from: pod: kube-controller-manager sourcetype: kube:kube-controller-manager kube-proxy: <<: *glog from: pod: kube-proxy sourcetype: kube:kube-proxy kubedns: <<: *glog from: pod: kube-dns sourcetype: kube:kubedns dnsmasq: <<: *glog from: pod: kube-dns sourcetype: kube:dnsmasq dns-sidecar: <<: *glog from: pod: kube-dns container: sidecar sourcetype: kube:kubedns-sidecar dns-controller: <<: *glog from: pod: dns-controller sourcetype: kube:dns-controller kube-dns-autoscaler: <<: *glog from: pod: kube-dns-autoscaler container: autoscaler sourcetype: kube:kube-dns-autoscaler kube-audit: from: file: path: /var/log/kube-apiserver/audit.log timestampExtraction: format: "%Y-%m-%dT%H:%M:%SZ" sourcetype: kube:apiserver-audit openshift-audit: from: file: path: /var/log/openshift-apiserver/audit.log timestampExtraction: format: "%Y-%m-%dT%H:%M:%SZ" sourcetype: kube:openshift-apiserver-audit oauth-audit: from: file: path: /var/log/oauth-apiserver/audit.log timestampExtraction: format: "%Y-%m-%dT%H:%M:%SZ" sourcetype: kube:oauth-apiserver-audit resources: requests: cpu: 100m memory: 200Mi buffer: "@type": memory total_limit_size: 600m chunk_limit_size: 20m chunk_limit_records: 100000 flush_interval: 5s flush_thread_count: 1 overflow_action: block retry_max_times: 5 retry_type: periodic sendAllMetadata: false nodeSelector: node-role.kubernetes.io/app: '' affinity: {} extraVolumes: [] extraVolumeMounts: [] priorityClassName: kubernetes: securityContext: true splunk-kubernetes-objects: enabled: false splunk-kubernetes-metrics: enabled: false  
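On the third question, the pasted values.yaml already contains the switches that appear to govern this feed: the fluentd:monitor-agent events come from the chart's monitoring agent, and the global section has monitoring_agent_enabled and monitoring_agent_index_name values that are currently blank. A sketch of the obvious option, untested here and with the nesting taken from the paste (worth confirming against the chart's default values.yaml):

global:
  monitoring_agent_enabled: true
  monitoring_agent_index_name: test_ocp_logs

Setting monitoring_agent_enabled: false instead should switch the feed off entirely if the events turn out not to be worth keeping.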
Good afternoon, So I've recently been hired on as a Splunk admin/analyst. The scope of my job really relies on my being able to know how to look things up in the search box. I really need to get proficient in knowing how to search for things after loading my data/files. So my question is this: where can I go to get some more hands-on practice to improve my SPL (Splunk search) skills? Thank you,
I am using a scheduled report to save data to a summary index with the following query:
index=_internal | stats count by status | collect index=test_index addtime=true testmode=true marker="sch_rpt_name=Test_Report"
It outputs a _raw value like this:
01/12/2022 20:00:00 +0000, info_min_time=1642017600.000, info_max_time=1642106259.000, info_search_time=1642106259.959, count=63985, status=200, scheduled_report_test=Test_report
Is there a way to get rid of the info_search_time field?
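Those info_* values are the addinfo search-metadata fields. A sketch worth trying first (untested, and it only helps if the fields are present in the pipeline before collect rather than being appended afterwards by the summary-indexing framework) is to drop them explicitly before collect:

index=_internal
| stats count by status
| fields - info_min_time info_max_time info_search_time
| collect index=test_index addtime=true testmode=true marker="sch_rpt_name=Test_Report"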
Hello, I know Splunk needs data in UTF-8 format to ingest it, but I have some XML files in UTF-16 format. Is there any way to ingest UTF-16 formatted files? Any help will be appreciated. Thank you so much.
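For reference, props.conf has a per-sourcetype CHARSET setting and Splunk converts incoming data to UTF-8 during parsing, so UTF-16 files can usually be ingested without converting them first. A sketch, assuming a little-endian file and a sourcetype name of your own choosing (use UTF-16BE for big-endian, or CHARSET = AUTO to let Splunk detect the encoding), placed on the instance that parses the data:

props.conf
[my_utf16_xml]
CHARSET = UTF-16LE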
I have a distributed Splunk environment, meaning a SHC and IDX cluster connected via distributed search as outlined in the Splunk docs. I have a Splunk Cloud free trial, and I wanted to try out federated search to link the on-prem indexers to the cloud SH. However, I cannot get it to work. Has anyone accomplished this before? The docs outline that you point the federated search provider at a SH rather than an IDX; are there ports that need to be opened on the Cloud side?