All Topics

Good day. I am currently using Splunk Cloud version 9.0.2208.4. The Carousel Viz visualization works normally on any dashboard, but when I add the script="simple_xml_examples:tokenlinks.js" reference so the dashboard can be driven via tokens, I get the error "Error rendering Carousel Viz visualization". Has anyone else experienced this? I previously used it in Splunk Enterprise without any problem. Regards
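A minimal Simple XML sketch of the setup being described, for reference only. The label, search, and the viz type string are placeholders (the exact type ID depends on what the Carousel Viz app registers), and the script reference assumes the Splunk Dashboard Examples app is installed:

<dashboard script="simple_xml_examples:tokenlinks.js">
  <label>Carousel test</label>
  <row>
    <panel>
      <!-- placeholder viz type: replace with the ID registered by the Carousel Viz app -->
      <viz type="carousel_viz.carousel">
        <search>
          <query>index=_internal | stats count by sourcetype</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
      </viz>
    </panel>
  </row>
</dashboard>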
Hi All, I am planning to implement SmartStore in my current Splunk environment. If I am only enabling it for a few indexes in my cluster, do I still have to set the search factor equal to the replication factor (SF=RF) on my cluster master? It would also be helpful if you could list the best practices that worked for you while implementing it. Thanks, Chiranjeev Singh
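For reference, a hedged indexes.conf sketch of enabling SmartStore for selected indexes only, assuming an S3 remote store; the volume name, bucket, endpoint, and index name are placeholders rather than anything from this environment:

[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/
remote.s3.access_key = <access_key>
remote.s3.secret_key = <secret_key>
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[my_smartstore_index]
homePath   = $SPLUNK_DB/my_smartstore_index/db
coldPath   = $SPLUNK_DB/my_smartstore_index/colddb
thawedPath = $SPLUNK_DB/my_smartstore_index/thaweddb
remotePath = volume:remote_store/$_index_name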
Dear all, we are in the process of ingesting Check Point EDR logs into our Splunk Cloud Platform. This is to be done through a Heavy Forwarder. Check Point sends encrypted data to the HF. For that purpose, we used the following guide provided by Check Point for generating and configuring the certificates, which contains specific instructions for Splunk: https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk122323

In summary, two certificates need to be configured on the Splunk side:
- splunk.pem: a combination of SyslogServer.crt + SyslogServer.key + RootCa.pem, configured in /opt/splunk/etc/apps/CheckPointAPP/local/inputs.conf
- ca.pem: configured in /opt/splunk/etc/system/local/server.conf

This configuration is not working because the splunk.pem certificate produces a handshake error, "SSL alert number 40". The following setting in server.conf, as the Check Point guide specifies, returns the error "Invalid key in stanza [SSL]" in Splunk:

[SSL]
cipherSuite = TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH

We have also tried this configuration, with the same result:

[SSL]
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:AES256-SHA:DHE-RSA-AES128-SHA:AES128-SHA:AES256-SHA:AES128-SHA

In the Splunk internal logs we receive the following error:

Received fatal SSL3 alert. ssl_state='SSLv3 read client certificate A', alert_description='unknown CA'.

Do you know where the point of failure could be? Why is the certificate returning alert 40, and should the configuration be set in a different way? Best regards
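For comparison, a hedged sketch of how a TLS syslog input of this kind is often laid out on a heavy forwarder, assuming a tcp-ssl listener on port 6514; the port, index, sourcetype, and file paths are placeholders and may differ from what the Check Point guide prescribes:

inputs.conf:
[tcp-ssl:6514]
index = checkpoint
sourcetype = cp_log

[SSL]
serverCert = /opt/splunk/etc/apps/CheckPointAPP/local/splunk.pem
sslPassword = <private_key_password_if_any>
requireClientCert = true

server.conf:
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/ca.pem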
Does anyone have the same issue? I have configured the Linux add-on on the forwarder, and the entity import search shows the incoming status, but no entity is found. There is also indexed data in itsi_im_metrics, so why does it still show (0)?
Please help with a query to compare CSV data with Splunk events and return the events that are not present in the CSV. Thanks
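As a starting point, a hedged SPL sketch that keeps only the events whose key is not present in a lookup file; the index, sourcetype, lookup name, and the id field are placeholders:

index=my_index sourcetype=my_sourcetype
| search NOT [ | inputlookup my_data.csv | fields id ]
| table _time id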
Looking to use the GitHub-supplied Python script at https://github.com/imperva/incapsula-logs-downloader to fetch Incapsula logs into Splunk. This requires Python libraries not already included with Splunk's bundled Python 3.7. This is on Windows 2016. Having broken one Splunk install by installing a standalone version of Python, I would prefer to use the built-in Python 3.7 that comes with Splunk. How do I import the modules into Splunk? Or is there documented best practice for installing Python outside Splunk?
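One hedged approach, assuming a custom app named incapsula_downloader: vendor the extra libraries into the app's bin/lib folder (for example with pip install --target) and have the script add that folder to sys.path before importing, so Splunk's bundled Python 3.7 can find them without a system-wide Python install. A sketch, with the app name, folder layout, and the requests import as assumptions:

# $SPLUNK_HOME/etc/apps/incapsula_downloader/bin/incapsula_input.py
import os
import sys

# Make the libraries vendored under <app>/bin/lib importable by Splunk's own Python,
# e.g. populated beforehand with: pip install --target <app>/bin/lib requests
APP_BIN = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(APP_BIN, "lib"))

import requests  # noqa: E402

def main():
    # Placeholder call; the real script would fetch Incapsula log files here.
    resp = requests.get("https://example.com/health", timeout=10)
    print(resp.status_code)

if __name__ == "__main__":
    main()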
Hello everyone, I am currently building a dashboard using the free trial of Splunk Enterprise. I have been trying to format my Markdown text, but unfortunately I have not been able to change its size. The size selection feature in Dashboard Studio was reportedly added in 9.0.2208, but I cannot find it there. I tried several things in the code section but nothing has worked so far. Did I do something wrong, or is the feature simply not available in the free trial version? Thanks
Hello Splunk Community! In MLTK version 5.3.1, the streaming_apply feature was removed due to bundle replication performance issues. However, I am currently facing a situation where executing a continuously updated model in a distributed fashion across all available search peers in our Splunk Enterprise setup would be highly beneficial. As information on this former functionality appears sparse, I wanted to ask about the best way to automatically replicate the trained model to the search peers and execute it there, if that is still possible at all. A previous question asked here (How to export/import/share ML models between Splunk instances and external system?) hinted at manually copying the model files into the target lookup folder as an alternative to using streaming_apply. With daily updates to the model, this is sadly not an option in our deployment. Thanks for your help! Best regards LJ
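For context, MLTK models are stored as lookup artifacts named like __mlspl_<model_name>.mlmodel in the owning app, and knowledge-bundle replication is governed by distsearch.conf. A heavily hedged sketch, with the app and model names as placeholders, of explicitly whitelisting such a file for replication; whether replicating a large, daily-retrained model this way is acceptable is exactly the trade-off that led to streaming_apply's removal:

# distsearch.conf on the search head (sketch only; verify the path pattern in your version's spec file)
[replicationWhitelist]
mlspl_model = apps/my_ml_app/lookups/__mlspl_my_model.mlmodel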
Hi, I am ingesting JSON data that has a property called "name". I cannot figure out why I am not able to extract the "name" property as a field. It also does not create the "name" field even if I use a field extractor. I am, however, able to create a field called "name1", but if I change it back to "name" it again does not register the field. Regards, Edward
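As a quick test, a hedged SPL sketch that pulls the property out explicitly with spath and writes it to a differently named field, which helps show whether the value is present but the field name "name" is being clobbered at search time; the index and sourcetype are placeholders and the path assumes "name" is a top-level key:

index=my_index sourcetype=my_json
| spath input=_raw path=name output=json_name
| table _time json_name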
Hi, how can I get CyCognito logs into Splunk? Is there an app available on Splunkbase? Let me know, thanks.
I'm new to Splunk and have a test environment containing a search head cluster with three Splunk 9.0.1 instances: one deployer and two search heads. If it matters, the deployer also has the indexer cluster master role. This is a fresh install without any specific changes.

Output of splunk show shcluster-status --verbose:

Captain:
    decommission_search_jobs_wait_secs : 180
    dynamic_captain : 1
    elected_captain : Tue Jan 24 17:57:01 2023
    id : 17B17CF3-57A4-4F34-A943-835219C2DA41
    initialized_flag : 1
    kvstore_maintenance_status : disabled
    label : spl-sh02
    max_failures_to_keep_majority : 0
    mgmt_uri : https://spl-sh02.domain.com:8089
    min_peers_joined_flag : 1
    rolling_restart : restart
    rolling_restart_flag : 0
    rolling_upgrade_flag : 0
    service_ready_flag : 1
    stable_captain : 1

Cluster Manager(s):
    https://spl-ms01.domain.com:8089 splunk_version: 9.0.0.1

Members:
    spl-sh02
        kvstore_status : ready
        label : spl-sh02
        manual_detention : off
        mgmt_uri : https://domain.com:8089
        mgmt_uri_alias : https://172.28.56.104:8089
        out_of_sync_node : 0
        preferred_captain : 1
        restart_required : 0
        splunk_version : 9.0.0.1
        status : Up
    spl-sh01
        kvstore_status : ready
        label : spl-sh01
        last_conf_replication : Wed Jan 25 10:52:26 2023
        manual_detention : off
        mgmt_uri : https://spl-sh01.domain.com:8089
        mgmt_uri_alias : https://172.28.56.100:8089
        out_of_sync_node : 0
        preferred_captain : 1
        restart_required : 0
        splunk_version : 9.0.0.1
        status : Up

When I try to execute "apply shcluster-bundle" on the deployer, I see this error:

Warning: Depending on the configuration changes being pushed, this command might initiate a rolling restart of the cluster members. Please refer to the documentation for the details.
Do you wish to continue? [y/n]: y
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Error in pre-deploy check, uri=https://spl-sh02.domain.com:8089/services/shcluster/captain/kvstore-upgrade/status, status=401, error=No error

How can I solve this problem?
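For reference, a hedged sketch of the command as it is usually run from the deployer with explicit credentials, since the 401 on the captain's kvstore-upgrade/status endpoint points at an authentication or authorization problem between deployer and captain; the target URI is taken from the output above and the credentials are placeholders:

splunk apply shcluster-bundle -target https://spl-sh02.domain.com:8089 -auth admin:<password>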
Hello Splunkers, I see the following error on my Splunk HF, which is listening for incoming data from an F5 network appliance.

01-25-2023 08:06:56.794 +0000 ERROR TcpInputProc [2612981 FwdDataReceiverThread] - Error encountered for connection from src=<internal_ip_f5>:59697. Read Timeout Timed out after 600 seconds.

I am wondering what the number after the F5 IP is. I specified a unique port for forwarding data between the F5 and the HF, so I do not understand why I see a number like 59697 (and many others). More generally, I do not know how to troubleshoot this. Thanks for your help, GaetanVP
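A hedged search sketch for pulling every occurrence of this error out of the forwarder's internal logs to see which source addresses and ports are involved; the host value is a placeholder and the src_host/src_port fields are extracted ad hoc by the rex:

index=_internal host=<your_hf> sourcetype=splunkd log_level=ERROR component=TcpInputProc
| rex "src=(?<src_host>[^:]+):(?<src_port>\d+)"
| stats count by src_host, src_port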
I have the following Splunk query:

(index=index_1 OR index=index_2) sourcetype=openshift_logs openshift_namespace="my_ns" openshift_cluster="*"
| spath "message.url"
| search "message.url"="/dummy/url/v1*"
| search "message.tracers.ke-channel{}"="*"
| search "message.jsonObject.payments{}.products{}.type"=GROCERY
| dedup message.tracers.ke-correlation-id{}
| search "message.statusCode"<400
| rename "message.jsonObject.payments{}.orderStatus.status" AS "ORDER_STATUS"
| top limit=50 "ORDER_STATUS"

which gives the output below:

ORDER_STATUS            count   percent
---------------------------------------
PAYMENT_ACCEPTED        500     70
PAYMENT_PENDING         100     20
PAYMENT_UNDER_REVIEW    90      2
PAYMENT_REDIRECTION     40      1.32
PAYMENT_NOT_ATTEMPTED   10      3.11

I want to display another item in the dashboard that is the sum of the counts for the following order statuses: PAYMENT_ACCEPTED + PAYMENT_PENDING + PAYMENT_UNDER_REVIEW + PAYMENT_REDIRECTION, i.e. 500 + 100 + 90 + 40 = 730. Below is my query:

(index=index_1 OR index=federated:index_2) sourcetype=openshift_logs openshift_namespace="my_ns" openshift_cluster="*"
| spath "message.url"
| search "message.url"="/dummy/url/v1*"
| search "message.tracers.ke-channel{}"="*"
| search "message.jsonObject.payments{}.products{}.type"=GROCERY
| search "message.statusCode"<400
| dedup message.jsonObject.id
| search ("message.jsonObject.payments{}.orderStatus.status"="PAYMENT_ACCEPTED" OR "message.jsonObject.payments{}.orderStatus.status"="PAYMENT_PENDING" OR "message.jsonObject.payments{}.orderStatus.status"="PAYMENT_UNDER_REVIEW" OR "message.jsonObject.payments{}.orderStatus.status"="PAYMENT_REDIRECTION")
| stats count(message.jsonObject.id)

But the sum of the count using the above query is always more than the actual total count. I would appreciate it if someone could let me know where I am going wrong.
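One hedged way to get a single total is to compute the per-status counts first and then sum only the statuses of interest, rather than filtering raw events on the multivalue status field; a sketch built on the first query above (note that the two queries above dedup on different fields, ke-correlation-id{} versus jsonObject.id, which by itself can change the totals):

(index=index_1 OR index=index_2) sourcetype=openshift_logs openshift_namespace="my_ns" openshift_cluster="*"
| spath "message.url"
| search "message.url"="/dummy/url/v1*"
| search "message.jsonObject.payments{}.products{}.type"=GROCERY
| search "message.statusCode"<400
| dedup message.tracers.ke-correlation-id{}
| rename "message.jsonObject.payments{}.orderStatus.status" AS ORDER_STATUS
| stats count BY ORDER_STATUS
| search ORDER_STATUS IN ("PAYMENT_ACCEPTED", "PAYMENT_PENDING", "PAYMENT_UNDER_REVIEW", "PAYMENT_REDIRECTION")
| stats sum(count) AS total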
Hi community. Some searches have index="my_index" and some have index=my_index. I want to extract a new field named user_index, but cannot figure out the regex capture group when the index name may or may not be surrounded by quotes.
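A hedged rex sketch; the field holding the search text is assumed to be called search_query here, so adjust it to the real field name. The quote is made optional both before and after the captured index name:

| rex field=search_query "index=\"?(?<user_index>[^\s\"]+)\"?"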
Hi Splunkers, we are already onboarding Windows event logs to Splunk, and now we also want to onboard Windows Key Management Service logs. Does anyone know how to onboard this type of log into Splunk? Thanks in advance.
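If the target is the "Key Management Service" channel under Applications and Services Logs, a hedged inputs.conf sketch for the forwarder; the channel name should be verified against what Event Viewer (or wevtutil el) reports on the host, and the index is a placeholder:

[WinEventLog://Key Management Service]
disabled = 0
index = windows
renderXml = false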
I have a dataset with incident numbers and their associated Jurisdiction. It is possible that an incident will be listed in multiple jurisdictions. I don't want to dedup(incident_number) globally. I need to count by jurisdiction, but the dedup or distinct count needs to be within each Jurisdiction. Any suggestions?
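A hedged sketch using a distinct count scoped by the group-by field, which counts each incident once per jurisdiction while still letting the same incident appear under several jurisdictions; the base search is a placeholder and the field names follow the question:

index=my_index
| stats dc(incident_number) AS incident_count BY Jurisdiction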
Does anyone have a search for Mean Time to Triage for a specific urgency (high or critical)? I'm having no luck trying to modify the built-in MTTT panel from the SOC operations dashboard to filter on a specific urgency.
We have a use case where we need an alert emailed if a user (in the field User) does not have an event with Activity="logged on" within the past 90 days in a specific sourcetype. We have tried:

index=index sourcetype=sourcetype Activity="logged on"
| chart count over Activity by User limit=0

But we can't seem to filter down to only the users with a count of 0 over the past 90 days. Any ideas or leads as to what would get us in the right direction?
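Detecting the absence of events usually needs a reference list of expected users; a hedged sketch, assuming a lookup called expected_users.csv with a User column, that returns only the users with no "logged on" event in the last 90 days (the alert can then trigger whenever results exist):

| inputlookup expected_users.csv
| fields User
| join type=left User
    [ search earliest=-90d index=index sourcetype=sourcetype Activity="logged on"
      | stats count AS logon_count BY User ]
| fillnull value=0 logon_count
| where logon_count=0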
Hello Experts. Configuring the inputs.conf file, I am trying to send data from the same Windows log to multiple indexes for separate dashboards. I think some sort of precedence is blocking some of the data. Here is what I was trying to accomplish; is there a better way to get where I'm trying to go?

[WinEventLog://Application]
disabled = 0
index = WINDOWS
start_from = oldest

[WinEventLog://System]
disabled = 0
index = WINDOWS
start_from = oldest

[WinEventLog://Security]
disabled = 0
index = WINDOWS
start_from = oldest

######## Separate to send USB bus traffic ##########

[WinEventLog://Security]
disabled = 0
index = USB
start_from = oldest
whitelist = 1234,4321,5467, etc

[WinEventLog:/Microsoft-Windows-DriverFrameworks-UserMode/Operational]
disabled = 0
index = USB
start_from = oldest
interval = 1000,1001,1002,1003
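inputs.conf cannot contain two stanzas with the same name, so the second [WinEventLog://Security] block above is merged with or ignored in favor of the first. One hedged alternative is to collect Security once and clone the USB-related event codes into the second index at parse time on an indexer or heavy forwarder; a sketch, assuming the classic WinEventLog:Security sourcetype and that a lowercase index named usb exists (index names must be lowercase):

props.conf:
[WinEventLog:Security]
TRANSFORMS-clone_usb = clone_usb_events

[WinEventLog:Security:USB]
TRANSFORMS-route_usb = route_usb_to_index

transforms.conf:
[clone_usb_events]
REGEX = EventCode=(1234|4321|5467)
CLONE_SOURCETYPE = WinEventLog:Security:USB

[route_usb_to_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = usb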
Given web access log data with the following fields: _time, http_status, src_ip, dest_ip. After a brute-force attack on a login page, where an http_status of 200 = success and 401 = failure, how can I display the number of failures, plus earliest(_time) and latest(_time), by src_ip? I've tried using streamstats like below, but do not get what I'm looking for:

index=myIndex AND status=*
| table _time status src_ip dest_ip
| sort + _time
| streamstats reset_on_change=true count earliest(_time) AS ET latest(_time) AS LT by status
| convert ctime(ET) ctime(LT)
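A hedged sketch using stats with eval-based counting, which returns one row per src_ip with the failure count and the first and last event times; it keeps the status field name used in the attempted search above rather than http_status:

index=myIndex status=*
| stats count(eval(status=401)) AS failures count(eval(status=200)) AS successes earliest(_time) AS ET latest(_time) AS LT BY src_ip
| convert ctime(ET) ctime(LT)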