All Topics


Hello everyone, I have a question for you, and I need your help, please. I have some logs, but the parsing isn't done. In a single log I have a lot of indicators, and I need to extract these fields:
- cpu_model
- device_type
- distinguished_name
- entity
- last_boot_duration
- last_ip_address
- last_logon_duration
- last_logon_time
- last_system_boot
- mac_addresses: [ 00:42:38:CA:81:72, 00:42:38:CA:81:73, 00:42:38:CA:81:76, 02:42:38:CA:81:72, 74:78:27:91:41:BB, B0:9F:80:55:40:44 ]
- name: PCW-TOU-76566
- number_of_days_since_last_boot
- number_of_days_since_last_logon
- number_of_monitors: 3
- os_version_and_architecture: Windows 10 Pro 21H2 (64 bits)
- platform: windows
- score: Device performance/Boot speed: null
- system_drive_capacity: 506333229056
- system_drive_usage: 0.19
- total_nonsystem_drive_capacity: 0
- total_nonsystem_drive_usage: null
- total_ram: 8589934592

The log is like this: What can I do to have the fields extracted so I can develop my indicators? The regex method is not possible in this case; can I use the rex command, and how would I do it for this example? I need your help, thank you so much!
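If the indicators always appear as literal key: value pairs in the raw text, a rex-based extraction can work even without a sourcetype-level parser. Below is a minimal sketch with placeholder index and sourcetype names, assuming the key: value layout shown above; adjust each pattern to your actual raw format:

index=your_index sourcetype=your_sourcetype
| rex field=_raw "name:\s*(?<name>\S+)"
| rex field=_raw "number_of_monitors:\s*(?<number_of_monitors>\d+)"
| rex field=_raw "total_ram:\s*(?<total_ram>\d+)"
| rex max_match=0 field=_raw "(?<mac_addresses>(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2})"
| table name number_of_monitors total_ram mac_addresses

rex is regex under the hood, so if the delimiters are consistent, the extract command with custom pairdelim/kvdelim options may also save you from writing one rex per field.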
Hi, please could you help with parsing this JSON data into a table?

{
  "list_element": [
    { "element": "{\"var1\":\"1.1.8.8:443\",\"var2\":\"1188\"}" },
    { "element": "{\"var1\":\"8.8.1.1:443\",\"var2\":\"8811\"}" },
    { "element": "{\"var1\":\"1.2.3.4:443\",\"var2\":\"1234\"}" }
  ]
}

The result should look like:

var1          var2
1.1.8.8:443   1188
8.8.1.1:443   8811
1.2.3.4:443   1234
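Each element value is itself a JSON string embedded in the outer JSON, so it takes two passes of spath: one to pull out the list, and one to parse each inner string. A minimal sketch over the raw event:

| spath path=list_element{}.element output=element
| mvexpand element
| spath input=element
| table var1 var2

mvexpand splits the multivalue element field into one row per entry, and the second spath parses each embedded JSON string into var1 and var2.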
Hi guys. I'm currently working to fix all the "real-time" jobs running at my company, and I came across one job whose original parent I can't find. It runs every 10-15 minutes and consumes resources. I was hoping you could assist me with finding the original parent of this job. This is what I have:
- Owner
- The query itself
- Sharing (global)
- Job inspect page
- The app it's running on (Enterprise Security)
Thank you for your time!
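One way to hunt for the defining saved search is to list the scheduled searches in that app over REST and match on the query text. A hedged sketch (the app label suggests the app id SplunkEnterpriseSecuritySuite, but verify it; you can also filter on a distinctive fragment of your query):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search eai:acl.app=SplunkEnterpriseSecuritySuite is_scheduled=1
| table title eai:acl.owner eai:acl.app cron_schedule dispatch.earliest_time search

A real-time schedule typically shows a dispatch.earliest_time beginning with rt, which helps narrow the list.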
Hey Splunk Community! I'm working on a dashboard (for incident response) in Splunk, but I need some assistance initially with queries for the following:
- Computer or host, showing whether it is malicious
- Logon info for the other machines a user has logged into for the day
- IP address of the machine, location or country, and whether it is a VM or a laptop
- Active Directory info on the user
- Remote machine name, to find out what machine was used to remote into the server during the last incident
A starting point for the logon piece is sketched below. I need this soon; any help would be appreciated. Thanks very much!
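For the logon item, here is a minimal sketch assuming Windows Security logs ingested via the Splunk Add-on for Microsoft Windows (the index name, the <username> placeholder, and field names such as Logon_Type and Source_Network_Address depend on your environment):

index=wineventlog EventCode=4624 user=<username> earliest=-1d
| stats count earliest(_time) as first_logon latest(_time) as last_logon by user host Logon_Type Source_Network_Address
| convert ctime(first_logon) ctime(last_logon)

The other items (VM detection, AD attributes, geolocation) usually come from separate sourcetypes or lookups, so they are worth raising as individual questions.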
After installing Splunk using the generated Ansible playbook, the service can't start. There is no error message, and I cannot find any logs. How can I troubleshoot this?
I have an idea of what logs can be collected by the Universal Forwarder (for example: Application, Security, System, Forwarded Events logs, performance monitoring), but I want to know exactly what it collects in each of those categories.
Good morning, I have been working on a task to gather the free disk space of servers we have the Splunk Universal Forwarder on. I am down to getting data from all servers through the perfmon data, and I have it for all servers but two. One of these is the Splunk deployment server (we're on Splunk Cloud). I have checked all the apps that might have an inputs.conf with stanzas referring to source="Perfmon:Free Disk Space", and I've looked in /etc/system/local on the deployment server. All the stanzas are set to disabled = 0, and I've restarted Splunk after each change. I'm at a loss! Thank you in advance. Scott
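For comparison, a typical free-disk-space stanza from the Windows add-on looks like the sketch below (object and counter names assume the standard Splunk Add-on for Microsoft Windows layout):

[perfmon://Free Disk Space]
object = LogicalDisk
counters = % Free Space; Free Megabytes
instances = *
interval = 60
disabled = 0

Running splunk btool inputs list perfmon --debug on the affected host prints the merged configuration and which app each line comes from, which often reveals a higher-precedence stanza overriding the one you edited.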
Hello. The deployment-client setting needs to be configured on a remote Universal Forwarder, and then I want to restart that Universal Forwarder. I know the ID/PW. Can I set this up from my deployment server?
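For reference, the deployment client is normally configured on the forwarder itself, since the deployment server can only manage forwarders that are already phoning home to it. A minimal sketch of deploymentclient.conf on the UF (hostname and port are placeholders):

[target-broker:deploymentServer]
targetUri = your-deployment-server.example.com:8089

The same can be done from the forwarder's CLI with splunk set deploy-poll your-deployment-server.example.com:8089, followed by a restart of the forwarder.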
I'm creating an alert that will search for two separate string values with an OR condition inside the search. Is there a way to set up the alert condition to fire on "if the second event is not found within 5 minutes of the first event"? The events can happen anytime within a 6-hour window, so having it search every 5 minutes for a count under 2 would fire alerts constantly.
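One pattern is to pair the events inside the search itself and alert on unpaired first events. A minimal sketch, with placeholder index and string values:

index=your_index ("first string" OR "second string") earliest=-6h
| transaction startswith="first string" endswith="second string" maxspan=5m keepevicted=true
| where closed_txn=0

With keepevicted=true, transactions that never see the closing event are kept and carry closed_txn=0, so the alert condition can simply be "number of results > 0". Note that transaction can be expensive; a stats-based version keyed on a shared correlation field is usually cheaper if one exists.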
We have an SHC cluster on Enterprise version 7.3.5 and ITSI 4.4. Recently we tried to upgrade our ITSI from 4.4.x to 4.7.0 and it failed. It was a generic error message, and support was not able to find the root cause. So we are now trying to build a new SHC in parallel (same version as the original one) and connect it to the same indexer cluster. We want to make sure the original cluster keeps working until we are sure the new SHC is an exact replica.
1] Is there an issue with having a new, different SHC connected to the same indexer cluster?
2] How do we migrate all the data from one SHC to another, including the ITSI correlation searches, dashboards, lookup tables, entities, etc.?
3] Upon migration, can we upgrade ITSI to 4.7 or above on the new SHC?
Good day. I am currently using Splunk Cloud version 9.0.2208.4. The Carousel Viz visualization works normally on any dashboard, but when I add the script="simple_xml_examples:tokenlinks.js" reference to drive dashboards via token, I get the error "Error rendering Carousel Viz visualization". Has anyone else experienced this? I previously used it in Splunk Enterprise without any problem. Regards
Hi all, I am planning to implement SmartStore in my current Splunk environment. If I am only doing it for a few indexes in my cluster, do I still have to set the search factor equal to the replication factor (SF = RF) on my cluster master? Also, if you could list the best practices that worked for you while implementing it, that would be helpful as well. Thanks, Chiranjeev Singh
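For context, a minimal SmartStore sketch in indexes.conf looks like the following (the bucket path and index name are placeholders; authentication is typically handled via an IAM role or the remote.s3.* settings):

[volume:remote_store]
storageType = remote
path = s3://your-smartstore-bucket

[your_smartstore_index]
remotePath = volume:remote_store/$_index_name
homePath = $SPLUNK_DB/$_index_name/db
coldPath = $SPLUNK_DB/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb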
Dear all, we are in the process of ingesting Check Point EDR logs into our Splunk Cloud Platform. This should be done through a heavy forwarder (HFW); Check Point sends encrypted data to the HFW. For that purpose, we used the following guide provided by Check Point for generating and configuring the certificates, which contains specific instructions for Splunk: https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk122323
In summary, there are two certificates that need to be configured on the Splunk side:
- splunk.pem: a combination of SyslogServer.crt + SyslogServer.key + RootCa.pem, configured in /opt/splunk/etc/apps/CheckPointAPP/local/inputs.conf
- ca.pem: configured in /opt/splunk/etc/system/local/server.conf
This configuration is not working: the certificate splunk.pem produces a handshake error, "SSL alert number 40". The following setting in server.conf, as the Check Point guide specifies, returns the error "Invalid key in stanza [SSL]" in Splunk:

[SSL]
cipherSuite = TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH

We also tried this configuration, with the same result:

[SSL]
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:AES256-SHA:DHE-RSA-AES128-SHA:AES128-SHA:AES256-SHA:AES128-SHA

In the Splunk internal logs we receive the following error:

Received fatal SSL3 alert. ssl_state='SSLv3 read client certificate A', alert_description='unknown CA'.

Do you know where the point of failure could be? Why is the certificate returning an alert 40, and should the configuration be set in a different way? Best regards
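One hedged observation: "Invalid key in stanza [SSL]" suggests the cipherSuite line landed in server.conf, where global TLS settings live under [sslConfig]; the [SSL] stanza belongs in inputs.conf. The "unknown CA" alert usually means the CA that signed the client certificate is not in the CA file Splunk is using. A sketch of the split (the sslRootCAPath value is a placeholder for wherever your ca.pem lives; the serverCert path follows your post):

# server.conf -- global TLS settings go under [sslConfig]
[sslConfig]
sslRootCAPath = <path to ca.pem>
cipherSuite = TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH

# inputs.conf (in the Check Point app) -- per-input TLS goes under [SSL]
[SSL]
serverCert = /opt/splunk/etc/apps/CheckPointAPP/local/splunk.pem
requireClientCert = true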
Does anyone have the same issue? I have configured the Linux add-on on a forwarder, and the import search finds the incoming status, but no entity is found. There is also indexed data in itsi_im_metrics, so why does the entity count still show (0)?
Please help with a query to compare CSV data with Splunk events and return the events that are not present in the CSV. Thanks
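A common pattern is a NOT subsearch against the lookup. A minimal sketch, assuming a placeholder index, a lookup file named your_list.csv, and a shared field called user:

index=your_index sourcetype=your_sourcetype NOT [ | inputlookup your_list.csv | fields user ]
| stats count by user

The subsearch expands to user=... OR user=... from the CSV rows, and the NOT keeps only events whose user value does not appear there.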
Looking to use the GitHub-supplied Python script at https://github.com/imperva/incapsula-logs-downloader to fetch Incapsula logs into Splunk. This requires Python libraries not already in Splunk's bundled Python 3.7. This is on Windows 2016. Having broken one Splunk install by installing a standalone version of Python, I would prefer to use the built-in Python 3.7 that comes with Splunk. How do I import the modules into Splunk's Python? Or is there a documented best practice for installing Python outside Splunk?
Hello everyone, I am currently making a dashboard using the free-trial Enterprise version of Splunk. I have been trying to format my markdown text, but unfortunately I have not been able to modify its size. The size-selection feature in Dashboard Studio was added in 9.0.2208, but I cannot find it there. I tried several things in the code section, but nothing works so far. Did I do something wrong, or is the feature simply not available in the free trial version? Thanks
Hello Splunk Community! In MLTK version 5.3.1, the streaming_apply feature was removed due to bundle-replication performance issues. However, I am currently facing a situation where executing a continuously updated model in a distributed fashion across all available search peers in our Splunk Enterprise setup would be highly beneficial. As information on this former functionality appears sparse, I wanted to ask about the best way to automatically replicate the trained model to the search peers and execute it there, if that is still possible at all. A previous question asked here (How to export/import/share ML models between Splunk instances and external system?) hinted at manually copying the model files into the target lookup folder as an alternative to using streaming_apply. With daily updates to the model, this is sadly not an option in our deployment. Thanks for your help! Best regards, LJ
Hi, I am ingesting JSON data that has a property called "name". I cannot figure out why I am not able to extract the "name" property as a field. It also does not create the "name" field even if I use a field extractor. I am, however, able to create a field called "name1", but if I change it back to "name" it again does not register the field. Regards, Edward
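As a test, spath can pull the property out explicitly and write it to a differently named field, which sidesteps any collision with an existing or reserved use of name. A minimal sketch with placeholder index and sourcetype:

index=your_index sourcetype=your_json_sourcetype
| spath input=_raw path=name output=json_name
| table _time json_name

If json_name populates correctly, the JSON itself is fine and the problem lies with the automatic extraction or the extractor definition rather than the data.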
Hi, how can I get CyCognito logs into Splunk? Is there an app available on Splunkbase? Let me know. Thanks.