All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, I'm getting the following error while monitoring a Python application. It used to work fine before, but now nothing is showing on my dashboard. All I could find was this error:

00:19:02,223 ERROR [AD Thread Pool-ProxyAsyncMsg1] AgentProxyService - No RequestData found in the cache for request:975533271712433961. The entry might have been deleted by the timer.

Python 3.7, CentOS 8.4. Regards, c
I have a Splunk query that returns results like this. I want to modify the query so that I get the latest row for UtilityJarVersion when all the other column values are the same. How can I modify my query to get the result I need?

Current results:
_time             BitBucket_Project  MicroserviceName  Env  UtilityJarVersion
1/13/22 4:09 PM   bb-project1        microservice1     DEV  1.0.105
1/11/22 6:39 AM   bb-project2        microservice2     DEV  1.0.105
1/12/22 11:22 AM  bb-project2        microservice2     DEV  1.0.106
1/12/22 7:00 PM   bb-project3        microservice3     DEV  1.0.106
1/12/22 9:28 AM   bb-project3        microservice4     DEV  1.0.106
1/12/22 6:33 PM   bb-project4        microservice5     DEV  1.0.106
1/11/22 6:40 AM   bb-project5        microservice6     DEV  1.0.105
1/12/22 6:43 PM   bb-project5        microservice6     DEV  1.0.106

That is, my expected result would look like:
_time             BitBucket_Project  MicroserviceName  Env  UtilityJarVersion
1/13/22 4:09 PM   bb-project1        microservice1     DEV  1.0.105
1/12/22 11:22 AM  bb-project2        microservice2     DEV  1.0.106
1/12/22 7:00 PM   bb-project3        microservice3     DEV  1.0.106
1/12/22 9:28 AM   bb-project3        microservice4     DEV  1.0.106
1/12/22 6:33 PM   bb-project4        microservice5     DEV  1.0.106
1/12/22 6:43 PM   bb-project5        microservice6     DEV  1.0.106

Thank you
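A sketch of one possible approach, assuming the field names shown above: sort the results newest-first and dedup on the grouping fields, so only the most recent row per BitBucket_Project/MicroserviceName/Env combination survives.

<your base search>
| sort 0 - _time
| dedup BitBucket_Project MicroserviceName Env
| table _time BitBucket_Project MicroserviceName Env UtilityJarVersion

dedup keeps the first event it sees for each field combination, which after the descending sort is the latest one.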
Hi everyone, I would like to know whether Splunk can collect logs from Linux distributions running under Windows Subsystem for Linux (WSL 2). If yes, which logs and how? Thank you!
I have a published app on SplunkBase which is designed to pull event data via API from an application that I publish to Splunk with. It's been working fine for several years. A recent request from users is for more real-time data, which would require me to pull data from the API. It's not really suitable for logging; where I have done this before, on an iPhone app, I held the data in RAM rather than logging it. Ideally I would want to pull the data once and have the response shared across multiple users, as opposed to many users each individually polling the data. Is this something that is possible with a Splunk app?
Hey everyone! I've successfully set up a link from Splunk Connect for Kubernetes on our OpenShift environment. It outputs to a local heavy forwarder, which then splits the data stream and sends it to our on-prem Splunk instance and a proof-of-concept Splunk Cloud instance (which we're hopefully going to be moving towards in the future). I have the system set up so that it sends most of its logs to an index called "test_ocp_logs". This covers cases in the format of [ocp:container:ContainerName]. However, I am getting a strange log into our root "test" index, which I have set up as the baseline default in the configuration. These events have the following info:

source = namespace:splunkconnect/pod:splunkconnect-splunk-kubernetes-logging-XXXXX
sourcetype = fluentd:monitor-agent

They look like some kind of report on what the SCK system grabbed and processed, but I can't seem to find any kind of definition anywhere. Here's what one of the events looks like:

{
  emit_records: 278304
  emit_size: 0
  output_plugin: false
  plugin_category: filter
  plugin_id: object:c760
  retry_count: null
  type: jq_transformer
}

So I have a few main questions:
1. What is this log, and is it something we should care about?
2. If we should care about this, what do the fields mean?
3. If we should care about this, how do I direct where it goes so that I keep all my SCK/OpenShift events in the same index (at least for now)?

For reference, this is the contents of my values.yaml for the Helm chart to build SCK:

global:
  logLevel: info
  splunk:
    hec:
      host: REDACTED
      port: 8088
      token: REDACTED
      protocol:
      indexName: test
      insecureSSL: true
      clientCert:
      clientKey:
      caFile:
      indexRouting:
  kubernetes:
    clusterName: "paas02-t"
  prometheus_enabled:
  monitoring_agent_enabled:
  monitoring_agent_index_name:
  serviceMonitor:
    enabled: false
    metricsPort: 24231
    interval: ""
    scrapeTimeout: "10s"
    additionalLabels: { }

splunk-kubernetes-logging:
  enabled: true
  logLevel:
  fluentd:
    # Restricting to APP logs only for the proof of concept
    path: /var/log/containers/*APP*.log
    exclude_path:
      - /var/log/containers/kube-svc-redirect*.log
      - /var/log/containers/tiller*.log
      - /var/log/containers/*_kube-system_*.log
      # ignoring internal Openshift Logging generated errors
      - /var/log/containers/*_openshift-logging_*.log
  containers:
    path: /var/log
    pathDest: /var/lib/docker/containers
    logFormatType: cri
    logFormat: "%Y-%m-%dT%H:%M:%S.%N%:z"
    refreshInterval:
  k8sMetadata:
    podLabels:
      - app
      - k8s-app
      - release
    watch: true
    cache_ttl: 3600
  sourcetypePrefix: "ocp"
  rbac:
    create: true
    openshiftPrivilegedSccBinding: true
  serviceAccount:
    create: true
    name: splunkconnect
  podSecurityPolicy:
    create: false
    apparmor_security: true
  splunk:
    hec:
      host:
      port:
      token:
      protocol:
      indexName: test_ocp_logs
      insecureSSL:
      clientCert:
      clientKey:
      caFile:
  journalLogPath: /run/log/journal
  charEncodingUtf8: false
  logs:
    docker:
      from:
        journald:
          unit: docker.service
      timestampExtraction:
        regexp: time="(?<time>\d{4}-\d{2}-\d{2}T[0-2]\d:[0-5]\d:[0-5]\d.\d{9}Z)"
        format: "%Y-%m-%dT%H:%M:%S.%NZ"
      sourcetype: kube:docker
    kubelet: &glog
      from:
        journald:
          unit: kubelet.service
      timestampExtraction:
        regexp: \w(?<time>[0-1]\d[0-3]\d [^\s]*)
        format: "%m%d %H:%M:%S.%N"
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kubelet
    etcd:
      from:
        pod: etcd-server
        container: etcd-container
      timestampExtraction:
        regexp: (?<time>\d{4}-\d{2}-\d{2} [0-2]\d:[0-5]\d:[0-5]\d\.\d{6})
        format: "%Y-%m-%d %H:%M:%S.%N"
    etcd-minikube:
      from:
        pod: etcd-minikube
        container: etcd
      timestampExtraction:
        regexp: (?<time>\d{4}-\d{2}-\d{2} [0-2]\d:[0-5]\d:[0-5]\d\.\d{6})
        format: "%Y-%m-%d %H:%M:%S.%N"
    etcd-events:
      from:
        pod: etcd-server-events
        container: etcd-container
      timestampExtraction:
        regexp: (?<time>\d{4}-[0-1]\d-[0-3]\d [0-2]\d:[0-5]\d:[0-5]\d\.\d{6})
        format: "%Y-%m-%d %H:%M:%S.%N"
    kube-apiserver:
      <<: *glog
      from:
        pod: kube-apiserver
      sourcetype: kube:kube-apiserver
    kube-scheduler:
      <<: *glog
      from:
        pod: kube-scheduler
      sourcetype: kube:kube-scheduler
    kube-controller-manager:
      <<: *glog
      from:
        pod: kube-controller-manager
      sourcetype: kube:kube-controller-manager
    kube-proxy:
      <<: *glog
      from:
        pod: kube-proxy
      sourcetype: kube:kube-proxy
    kubedns:
      <<: *glog
      from:
        pod: kube-dns
      sourcetype: kube:kubedns
    dnsmasq:
      <<: *glog
      from:
        pod: kube-dns
      sourcetype: kube:dnsmasq
    dns-sidecar:
      <<: *glog
      from:
        pod: kube-dns
        container: sidecar
      sourcetype: kube:kubedns-sidecar
    dns-controller:
      <<: *glog
      from:
        pod: dns-controller
      sourcetype: kube:dns-controller
    kube-dns-autoscaler:
      <<: *glog
      from:
        pod: kube-dns-autoscaler
        container: autoscaler
      sourcetype: kube:kube-dns-autoscaler
    kube-audit:
      from:
        file:
          path: /var/log/kube-apiserver/audit.log
      timestampExtraction:
        format: "%Y-%m-%dT%H:%M:%SZ"
      sourcetype: kube:apiserver-audit
    openshift-audit:
      from:
        file:
          path: /var/log/openshift-apiserver/audit.log
      timestampExtraction:
        format: "%Y-%m-%dT%H:%M:%SZ"
      sourcetype: kube:openshift-apiserver-audit
    oauth-audit:
      from:
        file:
          path: /var/log/oauth-apiserver/audit.log
      timestampExtraction:
        format: "%Y-%m-%dT%H:%M:%SZ"
      sourcetype: kube:oauth-apiserver-audit
  resources:
    requests:
      cpu: 100m
      memory: 200Mi
  buffer:
    "@type": memory
    total_limit_size: 600m
    chunk_limit_size: 20m
    chunk_limit_records: 100000
    flush_interval: 5s
    flush_thread_count: 1
    overflow_action: block
    retry_max_times: 5
    retry_type: periodic
  sendAllMetadata: false
  nodeSelector:
    node-role.kubernetes.io/app: ''
  affinity: {}
  extraVolumes: []
  extraVolumeMounts: []
  priorityClassName:
  kubernetes:
    securityContext: true

splunk-kubernetes-objects:
  enabled: false

splunk-kubernetes-metrics:
  enabled: false
Good afternoon, I've recently been hired as a Splunk admin/analyst. The scope of my job relies heavily on knowing how to look things up in the search box, and I really need to become proficient at searching my data and files after loading them. So my question is this: where can I go to get more hands-on practice to improve my SPL (Splunk search) skills? Thank you.
I am using a scheduled report to save data to a summary index with the following query:

index=_internal
| stats count by status
| collect index=test_index addtime=true testmode=true marker="sch_rpt_name=Test_Report"

It outputs a _raw value like this:

01/12/2022 20:00:00 +0000, info_min_time=1642017600.000, info_max_time=1642106259.000, info_search_time=1642106259.959, count=63985, status=200, scheduled_report_test=Test_report

Is there a way to get rid of the info_search_time field?
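One thing that may be worth trying (a sketch, not verified against this setup): explicitly drop the field just before collect, so it is not serialized into _raw.

index=_internal
| stats count by status
| fields - info_search_time
| collect index=test_index addtime=true testmode=true marker="sch_rpt_name=Test_Report"

If the info_* fields are being injected after the search pipeline by the scheduler rather than by the search itself, this may not help; in that case testing an eval such as | eval info_search_time=null() before collect is another option.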
Hello, I know Splunk needs data in UTF-8 format for ingestion, but I have some XML files in UTF-16 format. Is there any way to ingest UTF-16 formatted files? Any help will be appreciated. Thank you so much.
I have a distributed Splunk environment, meaning a search head cluster (SHC) and indexer (IDX) cluster connected via distributed search as outlined in the Splunk docs. I have a Splunk Cloud free trial and wanted to try out federated search to link the on-prem indexers to the cloud search head. However, I cannot get it to work. Has anyone accomplished this before? The docs say the federated search provider should point at a search head rather than an indexer; also, are there ports that need to be opened on the Cloud side?
Hi, I have a list of allowed IP addresses and want to use Splunk to find any Windows login from a source IP other than the ones on my list. Can you help me write a query, please? Thank you. The events I get in Splunk are Security, Application and System.
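A sketch of one possible approach, assuming the allowed addresses live in a lookup file called allowed_ips.csv with a column named src_ip, that successful logons are Windows Security EventCode 4624, and that the source address field in your events is Source_Network_Address (the index, lookup and field names here are assumptions to adapt to your data):

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624
| lookup allowed_ips.csv src_ip AS Source_Network_Address OUTPUT src_ip AS allowed_match
| where isnull(allowed_match)
| table _time host Account_Name Source_Network_Address

Events whose source address finds no match in the lookup keep a null allowed_match and survive the where clause; saving this as an alert that triggers on any result would notify you of logins from unlisted IPs.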
Hello, I'm new to Splunk and I'm looking for some advice. My search, e.g.

<mysearch> | table attributes

returns a value in the following format:

name[test 1]srcintf[int1]dstintf[int2]srcaddr[address1]dstaddr[dest1 dest2]service[svc1 svc2 svc3]comments[test comment here]

I would like to split the output into individual fields, where the values are within the square brackets, i.e.

name     test 1
srcintf  int1
dstintf  int2
...

Many thanks!
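A sketch of one way to do it with rex, assuming the keys always appear in this order and the values never contain a closing bracket:

<mysearch>
| rex field=attributes "name\[(?<name>[^\]]*)\]srcintf\[(?<srcintf>[^\]]*)\]dstintf\[(?<dstintf>[^\]]*)\]srcaddr\[(?<srcaddr>[^\]]*)\]dstaddr\[(?<dstaddr>[^\]]*)\]service\[(?<service>[^\]]*)\]comments\[(?<comments>[^\]]*)\]"
| table name srcintf dstintf srcaddr dstaddr service comments

Each named capture group becomes its own field, so multi-word values such as "dest1 dest2" stay intact inside a single field.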
Where can I find user instructions for searching for a block of hashes on a regular basis, and emailing an alert if any one of them is detected?
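The usual building blocks are a lookup file holding the hashes plus a scheduled alert with an email action. A rough sketch, assuming a lookup named watch_hashes.csv with a column called file_hash and events that carry a field of the same name (all of these names are hypothetical and need adapting):

index=your_index
    [ | inputlookup watch_hashes.csv | fields file_hash ]
| table _time host file_hash

The subsearch expands into an OR of all the hashes in the lookup. Saved as an alert on whatever schedule you need, with the trigger condition "number of results is greater than 0" and the "Send email" action, it would notify you whenever any listed hash appears.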
Hi, I am trying to figure out how to write a query for an alert that will notify me whenever a user has been logged on to a machine for more than 12 hours. Can you please help me figure this out? Thank you.
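A very rough sketch of one way to pair logon and logoff events and flag long sessions, assuming Windows Security events with EventCode 4624 (logon) and 4634 (logoff) and an Account_Name field (the index, sourcetype and field names are assumptions to adapt):

index=wineventlog sourcetype="WinEventLog:Security" (EventCode=4624 OR EventCode=4634)
| transaction host Account_Name startswith="EventCode=4624" endswith="EventCode=4634"
| where duration > 43200
| table _time host Account_Name duration

Note that this only catches sessions that have already ended; sessions still open would need a different pattern, for example comparing the latest 4624 per user against now().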
We are adding a Zscaler proxy to be used by the Splunk TA for O365. Our security group is providing a Root CA 4 PEM file for us to use. Our Splunk environment runs on RHEL and our Splunk Enterprise version is 8.2.1. The splunk user (configured in .bashrc) has the http and https proxy environment variables set to the correct entries. In addition, we have this variable defined:

export REQUESTS_CA_BUNDLE=$SPLUNK_HOME/etc/auth/our_pem.pem

When Splunk starts up we see this error, and it fails to retrieve any events from the remote site:

requests.exceptions.SSLError: HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: /url-path-made-up/oauth2/token (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))
2022-01-13 15:36:41,891 level=INFO pid=25373 tid=MainThread logger=splunksdc.collector pos=collector.py:run:254 | | message="Modular input exited."

Our security team is wondering where the O365 TA looks for certificates. Any help to get us past this error? Also, reading some older Answers, it appears SSL verification is turned off by default. This is very important to us because we have more Splunk TAs that will need to talk to the Zscaler proxy. Thank you.
Waiting for web server at https://127.0.0.1:8000 to be available... Startup has stopped at this point, and there is no error in splunkd.log.
Dear all, I'm trying to ingest data from a FireEye HX instance into Splunk, but I cannot find the correct way for this specific FireEye instance. Does anyone have this exact integration? If yes, please post the reference in the answers section. Thanks in advance.
Hello, due to a Windows system with the wrong system date (the date was set to 2034), the _internal index in my Splunk environment now contains events with future timestamps. Is there a way to remove the future events from this index? Thanks a lot.
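One commonly used pattern is a sketch like the following, run by a user whose role has the can_delete capability; keep in mind that delete only hides events from search results, it does not free disk space:

index=_internal earliest=+1d latest=+20y
| delete

It is worth running the search first without the | delete, to confirm that only the future-dated 2034 events are returned before actually removing them.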
Hello, on my deployment server, which acts as the license manager (LM), I cannot see the License Usage Report for the 30-day period; it always shows "No results found". For the "Today" report I can see data. Looking at the ../var/splunk folder I can see the license_usage.log file, but there is no type=Rollover_Summary inside; there is only type=Usage. Could you help me check this issue? Thanks a lot.
Hi, could you help me understand why the values for the Y-axis are not being set correctly? I specified 6000 with an interval of 500, but I am getting 5446, as attached. I also want to know how I can update the X-axis to display the data per week instead of per month. I tried using span but I am not getting good results. I am using the following:

index=xxxxx sourcetype=xxxx EXPRSSN=IBM4D*
| eval DATE=strftime(strptime(DATE,"%d%b%Y"),"%Y-%m-%d")
| table EXPRSSN DATE MIPS
| eval _time=strptime(DATE." "."00:00:00","%Y-%m-%d %H:%M:%S")
| chart list(MIPS) over _time by EXPRSSN
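For the weekly X-axis, a sketch of one option is to parse DATE straight into _time and bucket it into one-week bins before charting (same search otherwise):

index=xxxxx sourcetype=xxxx EXPRSSN=IBM4D*
| eval _time=strptime(DATE,"%d%b%Y")
| bin _time span=1w
| chart list(MIPS) over _time by EXPRSSN

Once several days fall into the same weekly bucket, an aggregate such as sum(MIPS) or avg(MIPS) will likely chart more cleanly than list(MIPS).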
Hello, thank you in advance for your help. I have a query that returns a list of names and another query that also returns a list of names. I would like to make a report showing the names present in the first query but not present in the second query (the delta of the two queries).
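A sketch of one common pattern for this, assuming both searches produce a field called name (replace <first search> and <second search> with your actual queries):

<first search>
| fields name
| append [ search <second search> | fields name | eval in_second=1 ]
| stats max(in_second) AS in_second by name
| where isnull(in_second)
| table name

The append brings the two result sets together, stats collapses them per name, and any name that never received the in_second flag is present only in the first query.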