All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Splunk has stopped sending emails for alerts, reports, and dashboards. I checked with the IT team regarding the SMTP settings under the server settings option, and they found no issue on their end. Can you suggest some steps to resolve this issue?
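Splunk logs its own email-sending failures to the _internal index, so that is usually the fastest place to look when the SMTP server itself checks out. A starting point (a sketch; adjust the time range, and note that exact message wording varies by Splunk version):

```
index=_internal sendemail (ERROR OR error OR fail*)
| table _time source _raw
```

Typical culprits that surface there include authentication failures, a wrong mail host or port under Settings -> Server settings -> Email settings, and TLS mismatches.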
Hi, I am trying to add an "assign to me" link as a field or button for each alarm that would automatically assign the alarm to the current user when clicked. Example: currently an analyst needs to select the alarm, click Edit, click the "assign to me" link, and submit. I want to reuse that link and add it as an alarm field or button on the Incident Review page so analysts simply have to click to assign. This would drastically improve SLA times for us. Thanks
Hi, I'm experiencing an issue where logs with EventCode=4625 from Windows systems (an account failed to log on) are not appearing in my Splunk instance. I have checked the data collection and indexing settings, but still can't find these logs. Has anyone else encountered a similar problem, or have any suggestions on how to troubleshoot this? Thank you!
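Two checks that often narrow this kind of gap down (a sketch; the sourcetype and source patterns below are assumptions based on common Windows TA defaults, so substitute your own):

```
| tstats count where index=* sourcetype=*WinEventLog* by index sourcetype

index=* source="*WinEventLog:Security*" EventCode=4625
```

If other Security EventCodes arrive but 4625 never does, also inspect the inputs.conf stanza on the forwarder for a blacklist that filters logon-failure events before they are sent.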
Hi Team, my application is as follows: https://github.com/noahgift/kubernetes-hello-world-python-flask.git The application is running fine along with the pods. However, the agent is not showing up on the controller, and neither is its data. Initially the pods were not working, but that issue was resolved thanks to Stack Overflow; I had to make a few changes in the .yaml files. Do let me know if you require any further details.
Hi, I have data as below:

| date  | buyer | product |
|-------|-------|---------|
| Jun-1 | A     | P-01    |
| Jun-1 | A     | P-02    |
| Jun-1 | B     | P-03    |
| Jun-1 | A     | P-03    |
| Jun-5 | A     | P-02    |
| Jun-5 | A     | P-01    |
| Jun-5 | A     | P-02    |
| Jun-5 | C     | P-02    |

I want this result:

| date  | P-01 | P-02 | P-03 | daily unique buyer |
|-------|------|------|------|--------------------|
| Jun-1 | 1    | 1    | 2    | 2                  |
| Jun-5 | 1    | 2    | 0    | 2                  |

That is, I want to count the daily unique buyers overall and the daily unique buyers by product. I use

chart dc(buyer) as uuBuyer by date, product

but I don't know how to produce the last column (the daily unique buyer count). Could anyone give me a solution? Thanks.
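One way to add that last column is to run the per-product chart and append a per-date distinct count via a subsearch (a sketch, assuming the events live in index=sales; use your real base search in both places):

```
index=sales
| chart dc(buyer) AS uuBuyer OVER date BY product
| appendcols
    [ search index=sales
      | stats dc(buyer) AS "daily unique buyer" BY date ]
```

appendcols pastes rows together positionally, so this relies on both pipelines producing one row per date in the same date order.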
When you print the summary of an investigation through ES, it does not include notes. Is there a way to add those? Alternatively, is there a way to use SPL to find those notes, artifacts, and events so I can create a report from a custom dashboard?
We are using a Splunk Enterprise server to send logs to itself for indexing. The monitor config is stored in /opt/splunk/etc/system/local/inputs.conf. An example monitor stanza is:

[monitor:///var/log/audit/audit.log]
sourcetype = linux_logs
index = splunk_server
disabled = false

But when Splunk is restarted, the logs are not being sent and indexed. Looking at splunkd.log, there was this warning:

WARN FilesystemChangeWatcher [9172 MainTailingThread] - error getting attributes of path "/var/log/aide/aide.log": Permission denied

Adding the splunk user to the root/sudo group didn't fix the issue. Note: we are configuring the server to forward to itself as described in this post: https://community.splunk.com/t5/Deployment-Architecture/Configure-a-Receiver-to-Forward-to-itself/m-p/128441
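Since splunkd runs as the splunk user, group membership alone does not help when the file is only readable by root: the splunk user needs read permission on the file itself and execute permission on every directory in its path. To list everything the tailing processor is failing to read (a sketch; component names and fields can vary slightly by version):

```
index=_internal sourcetype=splunkd "Permission denied"
    (component=TailReader OR component=FilesystemChangeWatcher)
| stats count by component event_message
```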
I stored a large amount of my own data (approximately 1 million records) in a Splunk lookup and found that it took a long time, and server CPU and disk usage were high. Additionally, only one core of my multi-core CPU shows a high workload. Why does this happen, and what should I do to address my data storage and performance issues?
Hi, I want to keep only logs NOT containing the word "chatbot". This word is present in the _raw data. I'm using the method explained in the "Route and filter data" doc. The props.conf and transforms.conf are set on the indexers, and I restarted my indexers, but logs with this word are still present. Any idea, or a way to debug this?

props.conf

[MySourcetype]
INDEXED_EXTRACTIONS = JSON
TIME_PREFIX=\"timestamp\":
TIME_FORMAT=%s%3N
#Do not index chatbot data
TRANSFORMS-null = API-NullQueue

transforms.conf

[API-NullQueue]
REGEX = chatbot
DEST_KEY = queue
FORMAT = nullQueue

Thanks all.
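One common gotcha worth ruling out (a possible cause, not a guaranteed fix): with INDEXED_EXTRACTIONS = JSON, the structured parsing happens on the forwarder, and the indexer does not re-parse data that arrives already parsed, so index-time TRANSFORMS placed only on the indexers never run for it. In that case the same stanzas need to live on the instance that actually parses the data (a heavy forwarder, or wherever the structured parsing runs), e.g.:

```
# props.conf -- on the instance doing the structured parsing
[MySourcetype]
INDEXED_EXTRACTIONS = JSON
TIME_PREFIX = \"timestamp\":
TIME_FORMAT = %s%3N
TRANSFORMS-null = API-NullQueue

# transforms.conf -- same instance
[API-NullQueue]
REGEX = chatbot
DEST_KEY = queue
FORMAT = nullQueue
```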
Hi there, I need to send a notification to MS Teams as part of an alert action. How do I configure that? Thanks in advance! Manoj Kumar
Hi, is it possible to monitor Windows event logs via WMI in Splunk instead of using a Universal Forwarder? If yes, how can I configure this communication? Thanks.
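Yes, remote Windows event log collection over WMI is configured in wmi.conf on a full Splunk Enterprise instance running on Windows, under an account with WMI access to the target hosts. A sketch (the stanza name, host names, and index here are placeholders):

```
# wmi.conf -- on a Windows Splunk Enterprise instance
[WMI:RemoteSecurityLog]
server = TARGET-HOST1, TARGET-HOST2
event_log_file = Security
interval = 10
disabled = 0
index = wineventlog
```

Note the trade-off: WMI polling avoids installing a forwarder, but it generally scales and performs worse than a Universal Forwarder on the target host.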
Hi Splunkers, I am new to handling certificates, but this seems to be what I need to resolve the insecure-browser warning when accessing my Splunk UI. We have been given a certificate in .cer format, generated from a CSR in which we listed the DNS name of our Splunk instance. I understand that we can generate a .pem from this .cer, but I am unclear about the private key to use, based on the parameters we need to set in web.conf:

[settings]
enableSplunkWebSSL = true
privKeyPath = <.key here>
serverCert = /home/user/certs/mycacert.pem

Reference: https://docs.splunk.com/Documentation/Splunk/9.0.5/Admin/Webconf

Would anyone be able to provide direction on this please? Thanks a lot. Cheers!
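For what it's worth, the private key is not something the CA sends back: it is the key that was generated together with the CSR, so whoever created the CSR should have the matching .key file. If the .cer is DER-encoded it can be converted to PEM with openssl (openssl x509 -inform der -in mycert.cer -out mycert.pem); if it already opens as base64 text, it is effectively PEM already. A sketch of web.conf with hypothetical paths:

```
[settings]
enableSplunkWebSSL = true
privKeyPath = /home/user/certs/myserver.key
serverCert = /home/user/certs/myserver.pem
```

serverCert can also point to a file containing the server certificate followed by the intermediate/CA chain, which avoids trust warnings with some CAs.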
Hello all, I need your assistance with the following details about data models:

1. What is the lifecycle of a Splunk data model?
2. How does Splunk log events in the _internal index as it executes each phase of a data model?

Any information or guidance will be helpful. Thank you, Taruchit
I'm trying to follow these instructions: https://github.com/signalfx/splunk-otel-collector/blob/main/docs/getting-started/linux-installer.md

I set up a Splunk trial and copied a new token that I created with all read accesses. I am using the UI here: app.us1.signalfx.com. When I use the token from General Settings -> Access Tokens to perform an action with the script I downloaded from the instructions, I get an error. I run this:

sh splunk-otel-collector.sh --realm us1

Here are the results:

$ sh splunk-otel-collector.sh --realm us1
Please enter your Splunk access token: BHwgwl0s32kbI227gUzORw
Splunk OpenTelemetry Collector Version: latest
Memory Size in MIB: 512
Realm: us1
Ingest Endpoint: https://ingest.us1.signalfx.com
API Endpoint: https://api.us1.signalfx.com
Trace Endpoint: https://ingest.us1.signalfx.com/v2/trace
HEC Endpoint: https://ingest.us1.signalfx.com/v1/log
TD Agent (Fluentd) Version: 4.3.2
Your access token could not be verified. This may be due to a network connectivity issue or an invalid access token.

Any idea what could be going wrong? Thank you! Jeff
Hi! What are some common causes of failures to restart the Splunk Universal Forwarder on Windows? Thank you!
Hello, we have a client who needs to use the AWS Add-on to collect/ingest data from their S3 buckets. How would we proceed with the AWS Add-on installation and configuration on our on-prem server? Any recommendations will be highly appreciated. Thank you!
Greetings community experts. Search results for JSON data received via curl and the REST API from AWS are five times the actual events. Seeking help understanding why events are counted more than once. The data is indexed using sourcetype _json. Looking at the data, Splunk reports 4 deviceConnectivityUpdate events and 1 deviceStateEvent, which agrees with the data. However, when I run

| stats count by "hits{}._source.logType" "hits{}._source.userName"

I get a 5x count of events. The same is true with this search using dedup:

| eval DCU=mvfilter(match('hits{}._source.logType',"deviceConnectivityUpdate"))
| eval DSE=mvfilter(match('hits{}._source.logType',"deviceStateEvent"))
| dedup DCU DSE
| stats count(DCU) count(DSE) by "hits{}._source.userName"

The data (_raw):

{"total":{"value":5,"relation":"eq"},"max_score":1,"hits":[{"_index":"index_44444444-4444-4444-4444-444444444444","_id":"zbIdu4gBwP_vIV4KexH0","_score":1,"_source":{"version":1,"logType":"deviceConnectivityUpdate","deviceSerialNumber":"4931390007","userName":"gary.whitlocks22","cloudTimestampUTC":"2023-06-14T18:14:11Z","isDeviceOffline":false}},{"_index":"index_44444444-4444-4444-4444-444444444444","_id":"z7Ieu4gBwP_vIV4KARGG","_score":1,"_source":{"version":1,"logType":"deviceConnectivityUpdate","deviceSerialNumber":"4931390007","userName":"gary.whitlocks22","cloudTimestampUTC":"2023-06-14T18:14:45Z","isDeviceOffline":true}},{"_index":"index_44444444-4444-4444-4444-444444444444","_id":"0LIeu4gBwP_vIV4KHxHn","_score":1,"_source":{"version":1,"logType":"deviceStateEvent","deviceSerialNumber":"4931490086","userName":"NSSS","cloudTimestampUTC":"2023-06-14T18:14:53Z","deviceTimestampUTC":"2023-06-14T18:14:55Z","batteryPercent":49,"isCheckIn":false,"isAntiSurveillanceViolation":false,"isLowBatteryViolation":false,"isCellularViolation":false,"isDseDelayed":false,"bleMacAddress":"7d:8e:1a:be:92:5a","cellIpv4Address":"0.0.0.0","cellIpv6Address":"::"}},{"_index":"index_44444444-4444-4444-4444-444444444444","_id":"zrIdu4gBwP_vIV4KsxFQ","_score":1,"_source":{"version":1,"logType":"deviceConnectivityUpdate","deviceSerialNumber":"4931390006","userName":"PennyAndroid","cloudTimestampUTC":"2023-06-14T18:14:25Z","isDeviceOffline":true}},{"_index":"index_44444444-4444-4444-4444-444444444444","_id":"0bIeu4gBwP_vIV4KhBGr","_score":1,"_source":{"version":1,"logType":"deviceConnectivityUpdate","deviceSerialNumber":"4931390006","userName":"PennyAndroid","cloudTimestampUTC":"2023-06-14T18:15:19Z","isDeviceOffline":false}}]}

The same data, formatted:

{
  "total": { "value": 5, "relation": "eq" },
  "max_score": 1,
  "hits": [
    {
      "_index": "index_44444444-4444-4444-4444-444444444444",
      "_id": "zbIdu4gBwP_vIV4KexH0",
      "_score": 1,
      "_source": { "version": 1, "logType": "deviceConnectivityUpdate", "deviceSerialNumber": "4931390007", "userName": "gary.whitlocks22", "cloudTimestampUTC": "2023-06-14T18:14:11Z", "isDeviceOffline": false }
    },
    {
      "_index": "index_44444444-4444-4444-4444-444444444444",
      "_id": "z7Ieu4gBwP_vIV4KARGG",
      "_score": 1,
      "_source": { "version": 1, "logType": "deviceConnectivityUpdate", "deviceSerialNumber": "4931390007", "userName": "gary.whitlocks22", "cloudTimestampUTC": "2023-06-14T18:14:45Z", "isDeviceOffline": true }
    },
    {
      "_index": "index_44444444-4444-4444-4444-444444444444",
      "_id": "0LIeu4gBwP_vIV4KHxHn",
      "_score": 1,
      "_source": { "version": 1, "logType": "deviceStateEvent", "deviceSerialNumber": "4931490086", "userName": "NSSS", "cloudTimestampUTC": "2023-06-14T18:14:53Z", "deviceTimestampUTC": "2023-06-14T18:14:55Z", "batteryPercent": 49, "isCheckIn": false, "isAntiSurveillanceViolation": false, "isLowBatteryViolation": false, "isCellularViolation": false, "isDseDelayed": false, "bleMacAddress": "7d:8e:1a:be:92:5a", "cellIpv4Address": "0.0.0.0", "cellIpv6Address": "::" }
    },
    {
      "_index": "index_44444444-4444-4444-4444-444444444444",
      "_id": "zrIdu4gBwP_vIV4KsxFQ",
      "_score": 1,
      "_source": { "version": 1, "logType": "deviceConnectivityUpdate", "deviceSerialNumber": "4931390006", "userName": "PennyAndroid", "cloudTimestampUTC": "2023-06-14T18:14:25Z", "isDeviceOffline": true }
    },
    {
      "_index": "index_44444444-4444-4444-4444-444444444444",
      "_id": "0bIeu4gBwP_vIV4KhBGr",
      "_score": 1,
      "_source": { "version": 1, "logType": "deviceConnectivityUpdate", "deviceSerialNumber": "4931390006", "userName": "PennyAndroid", "cloudTimestampUTC": "2023-06-14T18:15:19Z", "isDeviceOffline": false }
    }
  ]
}
Tried many variations but just can't get it right. Example data:

onetwoap321.site
onethreap3ua.somesite
oneforpd210.site
one3ninaw1u.site

The string may or may not have characters after the last set of numbers. There may be another number, but it will be separated by at least 2 letters before the last set of numbers.

{string}{optional number}{2 letters}{number}{optional characters}{may or may not have . at end}

The two letters are what I want to capture in a field called Code. My attempt:

| rex field=Name "^(?<Code>[^.]+)"

Thanks for any help.
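A sketch that keys on the two letters immediately before the last run of digits, per the pattern described above (test it against real data, since name shapes vary):

```
| rex field=Name "(?<Code>[a-zA-Z]{2})\d+[a-zA-Z]*(\.|$)"
```

Against the four examples this captures ap, ap, pd, and aw. The trailing [a-zA-Z]*(\.|$) is what anchors the match to the last set of numbers: an earlier letters-plus-digit candidate fails because more digits follow before the dot, so the engine moves on to the final pair.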
Hi, this might be a simple question, but what is a stack ID in Splunk, and how do I get it?
Is anyone else unable to get hold of support? I've submitted several tickets and tried calling for a week straight, and cannot get any sort of response. Is anyone else having this issue?