All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have an index with roughly 1.6 million records and want to compare the roughly 370,000 username/password entries in it against a Mirai list. My base search is:

index="myindex"
| rex "message=\"(?<message>{.+})\" +path="
| eval message = replace(message, ".\"", "\"")
| spath input=message

This parses the JSON webhooks in the index into fields. Two of the incoming fields are interesting: Username and Password. I want to relate these to a lookup table I have loaded, which also has Username and Password columns (mirai-passwords.csv). Ideally I would get a count of each match (on the Username/Password combination), plus a count of matches relative to all of the usernames and passwords in the index (showing the percentage of hits against the total volume). I thought this should work, but it returns nothing:

index="myindex"
| rex "message=\"(?<message>{.+})\" +path="
| eval message = replace(message, ".\"", "\"")
| spath input=message
| lookup mirai-passwords.csv Username OUTPUT Password
| stats count by Username, Password
| eval TotalCount=mvindex(split(PasswordList,"|"),1)
| eval PercentageCount=count/TotalCount*100

Anyone able to shed some light to help me? Thanks
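For what it's worth, a sketch of one way this could be written, assuming the lookup's columns really are named Username and Password: match on both fields at once, output a marker that only exists for matched rows, and capture the total volume before filtering so a percentage can be computed.

index="myindex"
| rex "message=\"(?<message>{.+})\" +path="
| eval message = replace(message, ".\"", "\"")
| spath input=message
``` count all events first, so the match rate can be computed later ```
| eventstats count AS TotalCount
``` match on the Username AND Password pair; mirai_match is only set for rows found in the lookup ```
| lookup mirai-passwords.csv Username, Password OUTPUT Username AS mirai_match
| where isnotnull(mirai_match)
| stats count BY Username, Password, TotalCount
| eval PercentageCount=round(count/TotalCount*100, 2)

The original query's lookup only matched on Username, and PasswordList was never created by any earlier command, which would explain the empty result.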
Please use the screenshot below to determine what Splunk query is needed to display the access controls under the panel "Year Selection and Rating Results". For example, when you click on "AC-7" (shown in yellow), you should see the following columns: System, FISMA-ID, FIPS199-Categorization, FIPS199-Rating, Control Library, YearOA (whether the system is high, medium, or low) and Compliance status (whether the system's compliance is high, medium, or low).
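The screenshot did not come through, so any query here is a guess. As a sketch, a panel like this is usually driven by a drilldown token set when a control such as "AC-7" is clicked; the detail search would then look something like the following, where the index name, the control field, and the token name are all assumptions:

index=compliance_summary control="$selected_control$"
| table System, "FISMA-ID", "FIPS199-Categorization", "FIPS199-Rating", "Control Library", YearOA, "Compliance status"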
I'm trying to get the top products used by customers.
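As a sketch, assuming the events carry product and customer fields (both names are assumptions), the top command returns the most frequent values with count and percent columns:

index=sales sourcetype=orders
``` most frequent products overall; add "by customer" to rank per customer instead ```
| top limit=10 product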
Hello all,
1) Does Splunk allow us to have an image INSTEAD of text when doing a mouseover tooltip in single value panels?
2) Is it possible to make the mouseover tooltip dynamic instead of a static text?
Thanks in advance!
Hello Friends, In one sourcetype, data comes in from multiple hosts, and the hosts reside in different time zones. In the raw logs the time zone is also mentioned, and I want to write a generic TIME_FORMAT for this. Example timestamps:

Mar 7 09:18:00 SGT:
Mar 6 19:07:42 UTC:
Mar 7 01:31:58.460 WST:
Mar 7 09:13:17.384:

I tried TIME_FORMAT = %b %d %H:%M:%S.%Q %Z, which is not working. %Z is not able to recognize the time zone here; please help me with some other expression. Thanks in advance. Happy Splunking!
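A sketch of a props.conf approach, under the assumption that SGT and WST are the abbreviations Splunk fails to recognize (the stanza name and the zone choices, Asia/Singapore and Australia/Perth, are assumptions to adjust):

[my:sourcetype]
# map unrecognized abbreviations onto concrete zones before %Z is applied
TZ_ALIAS = SGT=Asia/Singapore,WST=Australia/Perth
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S.%3N %Z
MAX_TIMESTAMP_LOOKAHEAD = 30

%3N is Splunk's subsecond directive; the timestamp processor is usually tolerant when the subsecond part or the zone is missing (as in the fourth sample), but that is worth verifying against sample data in a test index. Events with no zone at all fall back to the TZ setting or the indexer's own zone.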
Hi Team, I have installed Helm chart version 1.5.2 for SCK. After installation, I found that a few pods are going into CrashLoopBackOff with the error logs below, while the pods that show Running status do not show logs in Splunk.

CrashLoopBackOff error logs:

```
kubectl logs -n splunk-sck -f lv-splunk-logging-76l6d
2023-03-07 13:56:38 +0000 [info]: init supervisor logger path=nil rotate_age=nil rotate_size=nil
2023-03-07 13:56:38 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2023-03-07 13:56:38 +0000 [info]: gem 'fluentd' version '1.15.3'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-concat' version '2.4.0'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-jq' version '0.5.1'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '3.1.0'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-prometheus' version '2.0.2'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.3.1'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2023-03-07 13:56:38 +0000 [INFO]: Reading bearer token from /var/run/secrets/kubernetes.io/serviceaccount/token
2023-03-07 13:56:41 +0000 [error]: config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="Invalid Kubernetes API v1 endpoint https://10.96.0.1:443/api: Timed out connecting to server"
```

DaemonSet.yaml:

```
kubectl get ds -n splunk-sck lv-splunk-logging -o yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
    meta.helm.sh/release-name: lv-splunk-connect
    meta.helm.sh/release-namespace: splunk-sck
  creationTimestamp: "2023-03-07T13:40:11Z"
  generation: 1
  labels:
    app: splunk-kubernetes-logging
    app.kubernetes.io/managed-by: Helm
    chart: splunk-kubernetes-logging-1.5.2
    engine: fluentd
    heritage: Helm
    release: lv-splunk-connect
  name: lv-splunk-logging
  namespace: splunk-sck
  resourceVersion: "390920101"
  selfLink: /apis/apps/v1/namespaces/splunk-sck/daemonsets/lv-splunk-logging
  uid: ed892500-8054-49c5-bc75-da098dbce325
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: splunk-kubernetes-logging
      release: lv-splunk-connect
  template:
    metadata:
      annotations:
        checksum/config: 6401fdcfd0a7ddd7c71e0b459aa342ebc61ed26afe237a64101f8369da6007a0
        prometheus.io/port: "24231"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app: splunk-kubernetes-logging
        release: lv-splunk-connect
    spec:
      containers:
      - env:
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: MY_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: SPLUNK_HEC_TOKEN
          valueFrom:
            secretKeyRef:
              key: splunk_hec_token
              name: splunk-kubernetes-logging
        image: docker.io/splunk/fluentd-hec:1.3.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /api/plugins.json
            port: 24220
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 1
        name: splunk-fluentd-k8s-logs
        ports:
        - containerPort: 24231
          name: metrics
          protocol: TCP
        - containerPort: 24220
          name: monitor-agent
          protocol: TCP
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          privileged: false
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/log/pods
          name: varlogdest
          readOnly: true
        - mountPath: /var/log/journal
          name: journallogpath
          readOnly: true
        - mountPath: /fluentd/etc
          name: conf-configmap
        - mountPath: /fluentd/etc/splunk
          name: secrets
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        beta.kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: lv-splunk-logging
      serviceAccountName: lv-splunk-logging
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/log/pods
          type: ""
        name: varlogdest
      - hostPath:
          path: /var/log/journal
          type: ""
        name: journallogpath
      - configMap:
          defaultMode: 420
          name: lv-splunk-logging
        name: conf-configmap
      - name: secrets
        secret:
          defaultMode: 420
          secretName: splunk-kubernetes-logging
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 53
  desiredNumberScheduled: 53
  numberAvailable: 50
  numberMisscheduled: 0
  numberReady: 50
  numberUnavailable: 3
  observedGeneration: 1
  updatedNumberScheduled: 53
```
I have created a report with a query that updates a list of NAMES in a CSV file. If the NAMES field has empty strings or null values, the query tries to get the NAME from another field and adds it to NAMES. Something like this:

NAMES | ADDED_ON_INDEX | REPORT_UPDATE_DATE
Sara | 01/03/2023 00:00:00 | 06/03/2023 17:28:17
John | 01/02/2023 00:00:00 | 06/03/2023 17:28:17
Peter | 01/01/2023 00:00:00 | 06/03/2023 17:28:17
 | | 
Oliver | 01/03/2023 00:00:00 | 06/03/2023 17:28:17

I want to achieve the following:

NAMES | ADDED_ON_INDEX | REPORT_UPDATE_DATE
Sara | 01/03/2023 00:00:00 | 06/03/2023 17:28:17
John | 01/02/2023 00:00:00 | 06/03/2023 17:28:17
Peter | 01/01/2023 00:00:00 | 06/03/2023 17:28:17
Matt | 22/01/2023 00:00:00 | 07/03/2023 18:33:09
Oliver | 01/03/2023 00:00:00 | 06/03/2023 17:28:17

I want the report to register the date ONLY when new values are added, and NOT to replace the existing dates, so I can keep track of when each NAME was added by the report. I tried the following line, but it doesn't do what I want; it always replaces the value with the time the report ran:

| eval Report_Update = strftime(now(),"%d/%m/%Y %H:%M:%S")

And "_time" gives me the date of when it was added to the index. Is there a specific way to register this info?
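A sketch of one way to preserve the existing dates, assuming the CSV is also readable as a lookup (names.csv below is a stand-in for the real file): look up each NAME's stored date first, and only stamp now() on rows that have none.

``` ... your existing search that produces NAMES and ADDED_ON_INDEX ... ```
``` pull the date already stored for each NAME, if any ```
| lookup names.csv NAMES OUTPUT REPORT_UPDATE_DATE AS existing_date
``` keep the old date when present; stamp only new rows with the current run time ```
| eval REPORT_UPDATE_DATE=coalesce(existing_date, strftime(now(),"%d/%m/%Y %H:%M:%S"))
| fields NAMES ADDED_ON_INDEX REPORT_UPDATE_DATE
| outputlookup names.csv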
How can we log records being viewed by custom web app users to Splunk? We need to log web app data usage info such as what user took what action on what record at what time. We have been told to have our web app code write entries to the Windows Event Viewer, which we can easily do, but we don't want to write to an existing Application log and muddy up the information logged there. There is the idea of creating a custom Event Viewer log, but that requires a registry change on every machine where we would need to do this, and we don't directly have those permissions. Any new servers being set up would also need that change. It seems like a hassle to maintain. Is there a better way to write custom usage data to Splunk?
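One commonly used alternative is the HTTP Event Collector (HEC), which lets the app POST audit events directly to Splunk over HTTPS instead of going through Event Viewer. A minimal sketch, assuming a HEC token has been created and the default port 8088 is in use (the hostname, token, sourcetype, and event fields are placeholders):

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"sourcetype": "webapp:audit", "event": {"user": "jdoe", "action": "viewed", "record": "case-12345"}}'

By default the event is timestamped at receipt; an epoch "time" field can be added at the top level of the JSON envelope to set it explicitly.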
We are using HCL BigFix and HCL Insights as a data warehouse. There have been times when the import of data from HCL BigFix to HCL Insights has partially failed with no indication a failure has occurred. We would like to verify the HCL Insights data imported into Splunk against the HCL BigFix databases. Is there a way to run SPL that checks what's in Splunk against an external MS SQL database? I know how to create a DB connector and set up a read-only account, but I don't want to import data from the database, just verify the data already in Splunk.

index=patch sourcetype="ibm:bigfix:Patch"
| table BigFixDatabasePathTxt ComputerDNSNm ComputerId FixletId FixletIsRelevantInd FixletLastBecameRelevantDtm
| join type=inner ComputerId
    [ | dbxquery query="select BigFixDatabasePathTxt, ComputerDNSNm, ComputerId, FixletId, FixletIsRelevantInd, FixletLastBecameRelevantDtm from patch where {put SPL output here?}" ]

We'd like the output to only show unmatched data.
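A sketch of one way to surface only unmatched rows without importing the table: have the dbxquery subsearch tag everything it returns, left-join on the key fields, and keep the Splunk rows that never got the tag. The connection name bigfix is a placeholder, and join subsearches are subject to result limits, so a very large table may need a lookup-based comparison instead:

index=patch sourcetype="ibm:bigfix:Patch"
| table ComputerId FixletId FixletIsRelevantInd FixletLastBecameRelevantDtm
| join type=left ComputerId, FixletId
    [| dbxquery connection=bigfix query="SELECT ComputerId, FixletId FROM patch"
     ``` every row the database returns gets this marker ```
     | eval in_db=1]
``` rows without the marker exist in Splunk but not in the database ```
| where isnull(in_db)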
Hello, I am receiving cloud data from AWS via HEC in JSON format, but I am having trouble getting the "timestamp" field to index properly. Here is a simplified sample JSON:

{
  metric_name: UnHealthyHostCount
  namespace: AWS/ApplicationELB
  timestamp: 1678204380000
}

In order to index it I created the following sourcetype, which has been replicated to the HF, IDX cluster, and SH:

[aws:sourcetype]
SHOULD_LINEMERGE = false
TRUNCATE = 8388608
TIME_PREFIX = \"timestamp\"\s*\:\s*\"
TIME_FORMAT = %s%3N
TZ = UTC
MAX_TIMESTAMP_LOOKAHEAD = 40
KV_MODE = json

The event data gets indexed without issue, but I noticed that the "timestamp" field seems to be indexed as a multivalue containing the epoch as above, but also the value "none". I thought it had to do with indexed extractions, but it is the only field that displays this behaviour. Here is the table:

Any ideas on how to get the data indexed without the "none" value? Thank you and best regards, Andrew
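One detail that stands out: in the sample the timestamp value is an unquoted number, but the TIME_PREFIX regex requires an opening quote after the colon, so it may never match. A sketch of the stanza with that requirement dropped (assuming the raw events quote the key but not the value):

[aws:sourcetype]
SHOULD_LINEMERGE = false
TRUNCATE = 8388608
# match only up to the colon; the epoch value itself is not quoted
TIME_PREFIX = \"timestamp\"\s*:\s*
TIME_FORMAT = %s%3N
TZ = UTC
MAX_TIMESTAMP_LOOKAHEAD = 40
KV_MODE = json

Also worth checking: if the data arrives via HEC's /event endpoint, the envelope's own time field (or receipt time) is generally used and the TIME_* settings may not apply at all; they take effect for the /raw endpoint.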
Hello, I would like all the values from my query to be selected by default in my multiselect input. As the result of my query is not static, I can't use <default>. Any help is greatly appreciated, thanks.
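A pattern sometimes used for this, sketched here with assumed input, index, and field names: have the populating search also build a comma-joined list of every value, then seed the form token from the search's <done> handler so everything starts selected.

<input type="multiselect" token="my_multi">
  <label>Values</label>
  <fieldForLabel>my_field</fieldForLabel>
  <fieldForValue>my_field</fieldForValue>
  <search>
    <query>index=my_index
| stats count by my_field
| eventstats values(my_field) AS all
| eval all=mvjoin(all, ",")</query>
    <done>
      <!-- $result.all$ is the comma-joined list from the first result row -->
      <set token="form.my_multi">$result.all$</set>
    </done>
  </search>
  <delimiter>,</delimiter>
</input>

Note the <done> handler fires every time the populating search completes, so on dashboards where that search re-runs it would also reset any manual selection.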
Hi! I have a report for users logging in from different countries in the last 24 hours:

index="accesslogs" sourcetype=apilogs authIP=*
| iplocation authIP
| stats count(authIP) AS ipCount by authDato, authIP, _time, Country, City
| where ipCount>=1
| eval _time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table authDato, Country, City, authIP, _time
| dedup authIP
| eventstats dc(Country) as COUNT by authDato
| where COUNT > 1

The results have this format (authDato | Country | City | authIP | _time):

246423 | Paraguay | Asuncion | xxx.xxx.xxx.xxx | 2023-03-07 12:10:06
246423 | Brazil | Sao Paulo | xxx.xxx.xxx.xxx | 2023-03-07 10:10:34
246423 | Argentina | Caseros | xxx.xxx.xxx.xxx | 2023-03-06 10:10:34
1004629 | Paraguay | Asuncion | xxx.xxx.xxx.xxx | 2023-03-07 10:05:34
1004629 | Argentina | Tucuman | xxx.xxx.xxx.xxx | 2023-03-06 16:34:06
1422262 | Paraguay | Asuncion | xxx.xxx.xxx.xxx | 2023-03-07 12:42:32
1422262 | Brazil | Uberlandia | xxx.xxx.xxx.xxx | 2023-03-07 09:46:32

The goal is to detect compromised accounts (user A can't connect on the same day from different countries). This report is sorted by authDato (it's our username). I need to sort it by _time (newest event first), but the report must stay grouped by authDato, like:

1422262 | Paraguay | Asuncion | xxx.xxx.xxx.xxx | 2023-03-07 12:42:32
1422262 | Brazil | Uberlandia | xxx.xxx.xxx.xxx | 2023-03-07 09:46:32
246423 | Paraguay | Asuncion | xxx.xxx.xxx.xxx | 2023-03-07 12:10:06
246423 | Brazil | Sao Paulo | xxx.xxx.xxx.xxx | 2023-03-07 10:10:34
246423 | Argentina | Caseros | xxx.xxx.xxx.xxx | 2023-03-06 10:10:34
1004629 | Paraguay | Asuncion | xxx.xxx.xxx.xxx | 2023-03-07 10:05:34
1004629 | Argentina | Tucuman | xxx.xxx.xxx.xxx | 2023-03-06 16:34:06
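A sketch of one way to get that ordering, appended to the existing search: compute each user's newest event time, order the groups by that value, then order rows by time within each group. (The "%Y-%m-%d %H:%M:%S" string format happens to sort chronologically, so this works even after the strftime.)

``` newest event per user, used only to order the groups ```
| eventstats max(_time) AS latest_by_user BY authDato
``` groups newest-first, then rows newest-first within each group ```
| sort 0 -latest_by_user, authDato, -_time
| fields - latest_by_user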
Hi, We are planning to migrate Splunk from on-prem to Azure cloud. The on-prem deployment is a distributed environment with 1 SH/DS, 1 IDX, 2 HFs & 2 UFs. We are thinking of leaving the HFs and UFs on-prem and moving the SH & IDX to Azure cloud (only using Azure VMs). Are there any issues with this approach vs. moving all Splunk components to Azure cloud? Thank you in advance.
Hello everyone, Is there a way to determine what occupies disk storage? The following SPL yields a line graph that shows disk utilization per host and each of its drives:

index=winperf_prod sourcetype=*Perfmon* storage_free_percent="*"
| eval storage_used_percent=round(100-storage_free_percent,2)
| eval host_dev=printf("%s:%s\\",host,instance)
| timechart max(storage_used_percent) by host_dev

Now I want to determine what caused the drop in one of the drives on February 12. Ideally I would get a list of all processes/files/etc. that occupy disk storage. Help is appreciated.
Hi team, I have uploaded a log file to Splunk via the upload option in Settings. How do I delete the uploaded log file from Splunk? Note: I am not looking at hiding the data, I want to remove the entire local file. Please advise.
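If the goal includes removing the indexed events themselves, one common approach is the delete command; the index and source below are placeholders for wherever the file was uploaded:

``` requires a role with the can_delete capability; events become unsearchable, but disk space is not reclaimed ```
index=main source="my_uploaded_file.log"
| delete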
I just moved to a new PC. When I try to launch Splunk I get an error message: "Your license is expired. Please login as an administrator to update the license." How do I log in as an administrator? I reinstalled Splunk, but I'm still having this issue.
Hi, Does anyone have a script to create Action Suppression? Thanks
When we select "forgot password", the reset-password e-mail does not come to our e-mail address, but when we try "forgot username", the mail does come. Can you help with the problem of the reset-password mail not arriving?
Hello Splunkers!! As per the code below, I want to change the font size of the text that is created through eval (the "| eval text = ..." line inside the query). Please guide me on how I can change the size of the font.
======================================================
<single>
  <search>
    <query>| makeresults
| eval SystemCapacity=$HighbayCapacity$*2
| eval text= "The performance is determined by the number of completed orders divided by the time there is an active order. This is compared to the system capacity of ".SystemCapacity/2 ." dual cycles per hour."
| fields text</query>
    <earliest>$time_input.earliest$</earliest>
    <latest>$time_input.latest$</latest>
  </search>
  <option name="drilldown">none</option>
  <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
  <option name="refresh.display">progressbar</option>
  <option name="trellis.enabled">0</option>
  <option name="underLabel">Note that due to this, the performance might increase when a higher workload is present.</option>
</single>
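Simple XML has no option for the result text size on a <single>, but a CSS override in a hidden panel is a common workaround. A sketch, assuming the single value panel is given id="my_single"; the .single-result class is what recent Splunk versions use for the rendered value, so the selector is worth confirming in the browser's dev tools:

<panel depends="$alwaysHideCSS$">
  <html>
    <style>
      /* shrink the rendered single value text in the panel with id my_single */
      #my_single .single-result {
        font-size: 16px !important;
      }
    </style>
  </html>
</panel>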
date | Scope
12/11/2020 | Linux Shadow
17/02/2023 | Linux Project
20/02/2023 | Linux Project
21/02/2023 | Linux Project
22/02/2023 | Linux Project
23/02/2023 | Linux Project
24/02/2023 | Linux Project
27/02/2023 | Linux Project
28/02/2023 | Linux Project
01/03/2023 | Linux Project
01/03/2023 | Linux Project
01/03/2023 | Linux Project
02/03/2023 | Linux projet
03/03/2023 | Linux Project
03/03/2023 | Linux Project
06/03/2023 | Linux Project
06/03/2023 | Linux Project

We need to extract the latest scope with respect to the latest date. The latest date is 06/03/2023, so its scope is "Linux Project"; we need to get this value applied to every date, so the result will be:

date | Scope
01/03/2023 | Linux Project
02/03/2023 | Linux Project
03/03/2023 | Linux Project
06/03/2023 | Linux Project
12/11/2020 | Linux Project
17/02/2023 | Linux Project
20/02/2023 | Linux Project
21/02/2023 | Linux Project
22/02/2023 | Linux Project
23/02/2023 | Linux Project
24/02/2023 | Linux Project
27/02/2023 | Linux Project
28/02/2023 | Linux Project
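A sketch of one way to do this, assuming date and Scope are already extracted as fields: parse the date, sort newest-first, and copy the first row's Scope onto every row.

``` parse dd/mm/yyyy so sorting is chronological rather than alphabetical ```
| eval epoch=strptime(date, "%d/%m/%Y")
| sort 0 -epoch
``` first() takes the Scope of the newest row and applies it to all rows ```
| eventstats first(Scope) AS Scope
| dedup date
| sort 0 epoch
| table date Scope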