Hey Splunk Community! I'm working on an incident response dashboard in Splunk and need some assistance, initially with queries for the following:
- Computer or host, showing whether activity is malicious
- Logon info for other machines a user has logged into during the day
- IP address of the machine, location or country, and whether it is a VM or a laptop
- Active Directory info on the user
- Remote machine name, to find out what machine was used to remote into the server in the last incident
I need this soon; any help would be appreciated. Thanks very much!
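For the logon-info portion, one possible starting point is a search over Windows Security event logs. This is only a sketch: the index name, sourcetype, and field names below are assumptions and will depend on which add-on extracts your Windows events (EventCode 4624 is the standard Windows "successful logon" event).

```
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624 earliest=-1d
| stats values(ComputerName) AS machines values(Source_Network_Address) AS src_ips BY user
```

The same base search can then be enriched with lookups (GeoIP for country, an asset lookup for VM/laptop classification, and an LDAP/AD lookup for user details).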
After installing Splunk using the generated Ansible playbook the service can't start. There is no error message and I cannot find any logs. How can I troubleshoot this?
I have an idea of what logs can be collected by the Universal Forwarder (for example: Application, Security, System, Forwarded Events, performance monitoring), but I want to know exactly what it collects in each of those categories.
Good morning, I have been working on a task to gather the free disk space of the servers we have the Splunk Universal Forwarder on. I am down to getting data from all servers through the perfmon data. I have it for all servers but two; one of these is the Splunk deployment server (we're on Splunk Cloud). I have checked all the apps that might have an inputs.conf with stanzas referring to source="Perfmon:Free Disk Space", and I've looked in /etc/system/local on the deployment server. All the stanzas have disabled = 0, and I've restarted Splunk after each change. I'm at a loss! Thank you in advance. Scott
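For comparison, a typical inputs.conf stanza that produces "Perfmon:Free Disk Space" data looks roughly like the sketch below (the counter list, interval, and index name are assumptions; adjust to match the stanzas on your working servers):

```
[perfmon://Free Disk Space]
object = LogicalDisk
counters = Free Megabytes; % Free Space
instances = *
interval = 300
disabled = 0
index = perfmon
```

It can also be worth running `splunk btool inputs list --debug` on the affected hosts to see which copy of the stanza actually wins after layering.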
Hello. The deployment-client setting is required on the remote Universal Forwarder, and then I want to restart the Universal Forwarder. I know the ID/PW. Can I set this up from my deployment server?
I'm creating an alert that searches for two separate string values with an OR condition. Is there a way to set up the alert condition so it fires on "if the second event is not found within 5 minutes of the first event"? The events can happen anytime within a 6-hour window, so having it search every 5 minutes for a count under 2 would fire alerts constantly.
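One common pattern for "B did not follow A within 5 minutes" is an unclosed-transaction search; `keepevicted=true` keeps transactions that never closed, and `closed_txn=0` selects them. The index and the two strings below are placeholders for your actual events:

```
index=myindex ("EVENT_A" OR "EVENT_B")
| transaction startswith="EVENT_A" endswith="EVENT_B" maxspan=5m keepevicted=true
| where closed_txn=0 AND searchmatch("EVENT_A")
```

Scheduling this over the 6-hour window and alerting when the result count is greater than zero avoids the constant-firing problem, since it only returns the A events that were never paired with a B.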
We have an SHC on Enterprise version 7.3.5 and ITSI 4.4. Recently we tried to upgrade our ITSI from 4.4.x to 4.7.0 and it failed. It was a generic error message, and support was not able to find the root cause. So we are now trying to build a new SHC in parallel (same version as the original one) and connect it to the same indexer cluster. We want to make sure the original cluster keeps working until we are sure the new SHC is an exact replica. 1] Is there any issue with having a different, new SHC connected to the same indexer cluster? 2] How do we migrate all the data from one SHC to the other, including the ITSI correlation searches, dashboards, lookup tables, entities, etc.? 3] After migration, can we upgrade ITSI to 4.7 and above on the new SHC?
Good day. I am currently using Splunk Cloud version 9.0.2208.4. The Carousel Viz visualization works normally on any dashboard, but when I add the script="simple_xml_examples:tokenlinks.js" reference to drive dashboards via token, I get the error "Error rendering Carousel Viz visualization". Has anyone else experienced this? I previously used it in Splunk Enterprise without any problem. Regards
Hi all, I am planning to implement SmartStore in my current Splunk environment and wanted to know: if I am only enabling it for a few indexes in my cluster, do I still have to change the settings on my cluster master to make SF=RF? Also, if you could list the best practices that worked for you while implementing it, that would be helpful as well. Thanks, Chiranjeev Singh
Dear all, we are in the process of ingesting Check Point EDR logs into our Splunk Cloud Platform. This is done through a Heavy Forwarder; Check Point sends encrypted data to the HF. For that purpose, we used the following guide provided by Check Point for generating and configuring the certificates, which contains specific instructions for Splunk: https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk122323

In summary, two certificates need to be configured on the Splunk side:
- splunk.pem, a combination of SyslogServer.crt + SyslogServer.key + RootCa.pem, configured in /opt/splunk/etc/apps/CheckPointAPP/local/inputs.conf
- ca.pem, configured in /opt/splunk/etc/system/local/server.conf

This configuration is not working: the certificate splunk.pem is producing a handshake error, "SSL alert number 40". The following setting in server.conf, as the Check Point guide specifies, returns the error "Invalid key in stanza [SSL]" in Splunk:

[SSL]
cipherSuite = TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH

We also tried this configuration, with the same result:

[SSL]
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:AES256-SHA:DHE-RSA-AES128-SHA:AES128-SHA:AES256-SHA:AES128-SHA

In Splunk's internal logs we receive the following error:

Received fatal SSL3 alert. ssl_state='SSLv3 read client certificate A', alert_description='unknown CA'.

Do you know where the point of failure could be, why the certificate is returning alert 40, or whether the configuration should be set differently? Best regards
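For reference, in server.conf the cipherSuite setting belongs under the [sslConfig] stanza rather than [SSL], which would explain the "Invalid key in stanza [SSL]" error; and "unknown CA" typically means the CA that signed the client's certificate is not present in the root CA file Splunk is using. A hedged sketch under those assumptions (file paths taken from the post, everything else to be adapted):

```
# server.conf on the Heavy Forwarder (note: [sslConfig], not [SSL])
[sslConfig]
sslRootCAPath = /opt/splunk/etc/system/local/ca.pem
cipherSuite = TLSv1.2+HIGH:@STRENGTH

# inputs.conf in the CheckPointAPP app
[SSL]
serverCert = /opt/splunk/etc/apps/CheckPointAPP/local/splunk.pem
requireClientCert = true
```

It may also be worth verifying with `openssl verify -CAfile ca.pem <client_cert>` that the CA chain actually validates the certificate Check Point presents.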
Does anyone have the same issue? I have configured the Linux add-on on the forwarder, and the import search finds the incoming status, but no entity is found. There is also indexed data in itsi_im_metrics, so why does it still show (0)?
Please help with a query to compare CSV data with Splunk events and return, in the results, the events that are not present in the CSV. Thanks.
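One common way to return events whose key is absent from a CSV lookup is a NOT subsearch over `inputlookup`. The index, sourcetype, lookup file name, and field name below are all placeholders:

```
index=myindex sourcetype=mydata
    NOT [| inputlookup mylist.csv | fields user]
```

The subsearch expands into an OR of `user=<value>` terms from the CSV, so only events whose `user` matches no CSV row survive. For large lookups, `| lookup ... | where isnull(...)` on a matched output field is a common alternative that avoids subsearch limits.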
Looking to use the GitHub-supplied Python script at https://github.com/imperva/incapsula-logs-downloader to fetch Incapsula logs into Splunk. This requires Python libraries not already present in Splunk's bundled Python 3.7. This is on Windows 2016. Having broken one Splunk install by installing a standalone version of Python, I would prefer to use the built-in Python 3.7 that comes with Splunk. How do I import the modules into Splunk's Python? Or is there a documented best practice for installing Python outside Splunk?
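One approach, if Splunk's bundled interpreter includes pip (it does not in every Splunk version), is to install the packages into a separate directory rather than into Splunk's own lib folder, so upgrades don't wipe them. The install path and package name below are assumptions:

```
REM Install into a dedicated lib folder using Splunk's own Python (Windows)
"C:\Program Files\Splunk\bin\splunk.exe" cmd python -m pip install --target "C:\splunk-extra-libs" requests
```

The script can then pick the folder up with `sys.path.insert(0, r"C:\splunk-extra-libs")` at the top, leaving the Splunk installation itself untouched.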
Hello everyone, I am currently making a dashboard using the free-trial Enterprise version of Splunk. I have been trying to format my markdown text; unfortunately, I have not been able to modify its size. The size selection feature in Dashboard Studio was added in 9.0.2208, but I cannot find it there. I tried several things in the code section, but nothing works so far. Did I do something wrong, or is the feature simply not available in the free trial version? Thanks
Hello Splunk Community! In MLTK version 5.3.1, the streaming_apply feature was removed due to bundle replication performance issues. However, I am currently facing a problem where executing a continuously updated model in a distributed fashion across all available search peers in our Splunk Enterprise setup would be highly beneficial. As information on this former functionality appears sparse, I wanted to ask about the best way to automatically replicate the trained model to the search peers and execute it there, if that is still possible at all. A previous question asked here (How to export/import/share ML models between Splunk instances and external system?) hinted at manually copying the model files into the target lookup folder as an alternative to using streaming_apply. With daily updates to the model, this is sadly not an option in our deployment. Thanks for your help! Best regards, LJ
Hi, I am ingesting JSON data that has a property called "name". I cannot figure out why I am unable to extract the "name" property as a field. The "name" field is not created even if I use a field extractor. I am, however, able to create a field called "name1", but if I change it back to "name" it again does not register the field. Regards, Edward
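A search-time workaround that sometimes helps when a JSON key collides with an existing or reserved field name is to extract it explicitly with `spath` into a differently named field. The sketch below assumes the key sits at the top level of the JSON event; adjust the path and index/sourcetype to your data:

```
index=myindex sourcetype=myjson
| spath path=name output=json_name
| table _time json_name
```

If this extracts the value correctly, the problem is likely a naming collision in automatic extraction rather than malformed JSON.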
Hi, how can I get CyCognito logs into Splunk? Is there an app available on Splunkbase? Let me know, thanks.
I'm new to Splunk and have a test environment containing a search head cluster of three Splunk 9.0.1 instances: one deployer and two search heads. If it's important, the deployer also has the indexer cluster master role. This is a fresh install without any specific changes.

Output of splunk show shcluster-status --verbose:

Captain:
  decommission_search_jobs_wait_secs : 180
  dynamic_captain : 1
  elected_captain : Tue Jan 24 17:57:01 2023
  id : 17B17CF3-57A4-4F34-A943-835219C2DA41
  initialized_flag : 1
  kvstore_maintenance_status : disabled
  label : spl-sh02
  max_failures_to_keep_majority : 0
  mgmt_uri : https://spl-sh02.domain.com:8089
  min_peers_joined_flag : 1
  rolling_restart : restart
  rolling_restart_flag : 0
  rolling_upgrade_flag : 0
  service_ready_flag : 1
  stable_captain : 1

Cluster Manager(s):
  https://spl-ms01.domain.com:8089
  splunk_version : 9.0.0.1

Members:
  spl-sh02
    kvstore_status : ready
    label : spl-sh02
    manual_detention : off
    mgmt_uri : https://domain.com:8089
    mgmt_uri_alias : https://172.28.56.104:8089
    out_of_sync_node : 0
    preferred_captain : 1
    restart_required : 0
    splunk_version : 9.0.0.1
    status : Up
  spl-sh01
    kvstore_status : ready
    label : spl-sh01
    last_conf_replication : Wed Jan 25 10:52:26 2023
    manual_detention : off
    mgmt_uri : https://spl-sh01.domain.com:8089
    mgmt_uri_alias : https://172.28.56.100:8089
    out_of_sync_node : 0
    preferred_captain : 1
    restart_required : 0
    splunk_version : 9.0.0.1
    status : Up

When I try to execute "apply shcluster-bundle" on the deployer, I see this error:

Warning: Depending on the configuration changes being pushed, this command might initiate a rolling restart of the cluster members. Please refer to the documentation for the details. Do you wish to continue? [y/n]: y
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Error in pre-deploy check, uri=https://spl-sh02.domain.com:8089/services/shcluster/captain/kvstore-upgrade/status, status=401, error=No error

How can I solve this problem?
Hello Splunkers, I see the following error on my Splunk HF, which is listening for incoming data from an F5 network appliance.

01-25-2023 08:06:56.794 +0000 ERROR TcpInputProc [2612981 FwdDataReceiverThread] - Error encountered for connection from src=<internal_ip_f5>:59697. Read Timeout Timed out after 600 seconds.

I am wondering what the number after the F5 IP is. I specified a unique port for forwarding data between the F5 and the HF, so I do not understand why I see numbers like 59697 (and many others). More generally, I do not know how to troubleshoot this. Thanks for your help, GaetanVP
I have the following Splunk query:

(index=index_1 OR index=index_2) sourcetype=openshift_logs openshift_namespace="my_ns" openshift_cluster="*"
| spath "message.url"
| search "message.url"="/dummy/url/v1*"
| search "message.tracers.ke-channel{}"="*"
| search "message.jsonObject.payments{}.products{}.type"=GROCERY
| dedup message.tracers.ke-correlation-id{}
| search "message.statusCode"<400
| rename "message.jsonObject.payments{}.orderStatus.status" AS "ORDER_STATUS"
| top limit=50 "ORDER_STATUS"

which gives the below output:

ORDER_STATUS           count  percent
PAYMENT_ACCEPTED       500    70
PAYMENT_PENDING        100    20
PAYMENT_UNDER_REVIEW   90     2
PAYMENT_REDIRECTION    40     1.32
PAYMENT_NOT_ATTEMPTED  10     3.11

I want to display another item in the dashboard which should be the sum of the counts of the following order statuses: PAYMENT_ACCEPTED + PAYMENT_PENDING + PAYMENT_UNDER_REVIEW + PAYMENT_REDIRECTION, i.e. 500 + 100 + 90 + 40 = 730. Below is my query:

(index=index_1 OR index=federated:index_2) sourcetype=openshift_logs openshift_namespace="my_ns" openshift_cluster="*"
| spath "message.url"
| search "message.url"="/dummy/url/v1*"
| search "message.tracers.ke-channel{}"="*"
| search "message.jsonObject.payments{}.products{}.type"=GROCERY
| search "message.statusCode"<400
| dedup message.jsonObject.id
| search ("message.jsonObject.payments{}.orderStatus.status"="PAYMENT_ACCEPTED" OR "message.jsonObject.payments{}.orderStatus.status"="PAYMENT_PENDING" OR "message.jsonObject.payments{}.orderStatus.status"="PAYMENT_UNDER_REVIEW" OR "message.jsonObject.payments{}.orderStatus.status"="PAYMENT_REDIRECTION")
| stats count(message.jsonObject.id)

But the sum of the count using the above query is always more than the actual total count. I would appreciate it if someone could let me know where I am going wrong.
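One thing that stands out in the post above: the first query dedups on message.tracers.ke-correlation-id{} while the second dedups on message.jsonObject.id, so the two searches count different sets of events (and the second also uses federated:index_2 instead of index_2). A hedged sketch that derives the sum from the same pipeline and the same dedup field as the first query, assuming the status field is single-valued per deduplicated event:

```
(index=index_1 OR index=index_2) sourcetype=openshift_logs openshift_namespace="my_ns" openshift_cluster="*"
| spath "message.url"
| search "message.url"="/dummy/url/v1*" "message.tracers.ke-channel{}"="*"
| search "message.jsonObject.payments{}.products{}.type"=GROCERY "message.statusCode"<400
| dedup message.tracers.ke-correlation-id{}
| rename "message.jsonObject.payments{}.orderStatus.status" AS ORDER_STATUS
| stats count AS status_count BY ORDER_STATUS
| where ORDER_STATUS IN ("PAYMENT_ACCEPTED", "PAYMENT_PENDING", "PAYMENT_UNDER_REVIEW", "PAYMENT_REDIRECTION")
| stats sum(status_count) AS total
```

Because the filters and the dedup key match the first query exactly, the resulting total should agree with the sum of the rows in the `top` output.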