All Topics

Hi all, when I run my original query I get one result, but when I execute the same query using tstats I get a different output: the averages do not match. How can I modify the tstats query so the results match?

My original query:

```spl
index=apl-cly-sap sourcetype=cly:app:sap
| search processName="applicationstatus"
| stats avg(plantime)
```

Output: 1233.43223454

My tstats query:

```spl
| tstats count where index=apl-cly-sap sourcetype=cly:app:sap TERM(processName=applicationstatus) by PREFIX(plantime=)
| rename plantime= as Time
| stats avg(Time)
```

Output: 1345.7658755
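One likely cause (an assumption, since the underlying data isn't shown): `tstats ... by PREFIX(plantime=)` returns one row per distinct `plantime` value together with its `count`, so the final `stats avg(Time)` averages the distinct values unweighted, rather than averaging over events as the original query does. A sketch that weights each value by its event count:

```spl
| tstats count where index=apl-cly-sap sourcetype=cly:app:sap TERM(processName=applicationstatus) by PREFIX(plantime=)
| rename plantime= as Time
| stats sum(eval(Time * count)) as total, sum(count) as events
| eval avg_plantime = total / events
```

Note also that `TERM(processName=applicationstatus)` only matches if that exact string appears in the raw event as a single term; if it doesn't, the tstats search is counting a different set of events than the original.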
In my team we have completed a Jenkins + Splunk installation. So far we can see all the logs that come from Jenkins job logs. Is it possible to send Jenkins custom loggers into Splunk (Jenkins custom logger example: https://docs.cloudbees.com/docs/cloudbees-ci-kb/latest/client-and-managed-masters/how-do-i-create-a-logger-in-jenkins-for-troubleshooting-and-diagnostic-information)? The custom loggers are already on disk in the `jenkins_home/log` path. Thank you so much for your help!
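Since the custom logger files already exist on disk, one option is a plain file monitor on the forwarder running on the Jenkins host. A minimal sketch; the absolute path, index, and sourcetype here are assumptions to adjust for your environment:

```ini
# inputs.conf on the forwarder on the Jenkins host
# (path, index, and sourcetype are placeholders)
[monitor:///var/jenkins_home/log]
disabled = false
index = jenkins
sourcetype = jenkins:custom
```

After deploying the stanza and restarting the forwarder, the custom logger output should arrive like any other monitored file.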
I have a list of installed Chrome extensions that is returned in a multivalue field. All I really care about is the extension name, so I was able to run this query using rex to extract the names of all the extensions:

```spl
index=jamf source=jss_inventory "extensionAttribute.name"="Installed Chrome Extensions & Versions"
| fields extensionAttribute.name, computer_meta.assignedUser
| rex field=extensionAttribute.value max_match=0 "Name: (?<extensions>.*)\n"
| table extensions
```

How can I further extract these extensions in this multivalue field? I can't get mvexpand to work because it says that the new extensions field I created doesn't exist in the data. I can't figure out how to extract each line as a separate result so that I can dedup and get a full list of all installed extensions.
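One thing worth checking: the `| fields extensionAttribute.name, computer_meta.assignedUser` command removes `extensionAttribute.value` before the rex runs, so the rex may have nothing to match, which would explain mvexpand complaining that `extensions` doesn't exist. A sketch that drops that `fields` command and guards mvexpand against events where the field is absent:

```spl
index=jamf source=jss_inventory "extensionAttribute.name"="Installed Chrome Extensions & Versions"
| rex field=extensionAttribute.value max_match=0 "Name: (?<extensions>[^\r\n]+)"
| where isnotnull(extensions)
| mvexpand extensions
| dedup extensions
| table extensions
```

After `mvexpand`, each extension name is its own result row, so `dedup` yields the full distinct list.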
Hello everyone, I am new to the SNOW TA and I am trying to find out how I can schedule the polling time. I have the collection interval set to poll once a day, but I would like to schedule it to run at some time during the night.
Hi guys,

Do we have an option to store data forever in either of the buckets (warm or cold) for a particular index? If yes, can someone share the stanza?
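Retention in Splunk is configured per index rather than per bucket tier; data only disappears when buckets roll to frozen, which happens on age or on total index size. A sketch that effectively disables both triggers by setting them to their documented maximums (the index name is a placeholder; verify the limits against indexes.conf.spec for your version):

```ini
# indexes.conf
[my_index]
# ~136 years (highest legal value), so buckets never age out to frozen
frozenTimePeriodInSecs = 4294967295
# raise the size cap too, or the oldest buckets still freeze once the index hits it
maxTotalDataSizeMB = 4294967295
```

Keep in mind "forever" retention means disk usage grows without bound, so the size cap is the setting most deployments still want to keep finite.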
Hi, I have a bar graph which is not showing the full y-axis value when the value has more words. Is there a way we can force the dashboard to display the full value?

Example: instead of "Monthly Patch Update" it is displaying "Month...pdate".
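In Simple XML there is a charting option that controls label truncation; for bar charts the category labels are still configured through the x-axis options even though they are drawn along the vertical axis. The option name below is from memory, so verify it against the chart configuration reference:

```xml
<chart>
  <!-- existing search and options ... -->
  <!-- disable the ellipsis truncation on category labels -->
  <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
</chart>
```

Very long labels may then be rotated or overlap, so this is a trade-off rather than a pure fix.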
Hi, I have the search below, where I have combined 3 searches into 1. Each individual search works fine, but when combined it is not able to pull data from the previous year, and the table shows empty values for a few months.

```spl
index=dev AND "alpha"
| dedup _time
| eval Month=strftime(_time,"%m %b %Y")
| stats count by Month
| rename count as alpha
| appendcols [search index=DEV AND "[beta]" | dedup _time | eval Month=strftime(_time,"%m %b %Y") | stats count by Month | rename count as beta]
| appendcols [search index=dev AND "gamma" | dedup _time | eval Month=strftime(_time,"%m %b %Y") | stats count by Month | rename count as gamma]
```
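A common pitfall with this pattern: `appendcols` pastes columns together by row position, not by the `Month` value, so if any subsearch is missing a month (or hits subsearch limits over a long time range) the rows shift and months show empty values. An alternative sketch that computes all three counts in a single pass; it assumes the literal strings "alpha", "[beta]", and "gamma" appear in the raw events:

```spl
index=dev ("alpha" OR "[beta]" OR "gamma")
| dedup _time
| eval label = case(like(_raw, "%alpha%"), "alpha",
                    like(_raw, "%[beta]%"), "beta",
                    like(_raw, "%gamma%"), "gamma")
| eval Month = strftime(_time, "%m %b %Y")
| chart count over Month by label
```

Because every row carries its own `Month`, the counts can never misalign the way positional `appendcols` joins can.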
My query doesn't bring up anything. I am trying to pull RDP connections in my environment:

```spl
event_simpleName=UserLogon LogonType_decimal=10
| stats values(UserName) dc(UserName) AS "User Count" count(UserName) AS "Logon Count" by aid, ComputerName
| sort - "Logon Count"
```
Hello,

We have a Splunk license and are running into the error below, which restricts searches to internal indexes only:

restricting search to internal indexes only (reason: [DISABLED_DUE_TO_VIOLATION,0])

How do I contact the Splunk team to get my license temporarily reset? I do not have the email or portal login of the account holder who originally created the Splunk cluster, so I cannot reach support through that account.
Getting the following error when trying to configure this on the on-prem instance of Splunk:   Does anyone have an answer as to why this is happening?   
Hello everyone, I have a question for you, and I need your help please. I have some logs, but the parsing isn't done. In the same log I have a lot of indicators, and I need to extract these fields:

- cpu_model
- device_type
- distinguished_name
- entity
- last_boot_duration
- last_ip_address
- last_logon_duration
- last_logon_time
- last_system_boot
- mac_addresses: [ 00:42:38:CA:81:72, 00:42:38:CA:81:73, 00:42:38:CA:81:76, 02:42:38:CA:81:72, 74:78:27:91:41:BB, B0:9F:80:55:40:44 ]
- name: PCW-TOU-76566
- number_of_days_since_last_boot
- number_of_days_since_last_logon
- number_of_monitors: 3
- os_version_and_architecture: Windows 10 Pro 21H2 (64 bits)
- platform: windows
- score: Device performance/Boot speed: null
- system_drive_capacity: 506333229056
- system_drive_usage: 0.19
- total_nonsystem_drive_capacity: 0
- total_nonsystem_drive_usage: null
- total_ram: 8589934592

What can I do to have the fields extracted so I can develop my indicators? The regex method is not possible in this case; can I use the rex command instead, and how would I do it for this example? I need your help, thank you so much.
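Since each indicator appears as a `key: value` pair on its own line, `rex` can pull the fields out one pattern at a time. A sketch for a few of the fields; extend the same pattern to the rest of the list (the base search is a placeholder):

```spl
... base search ...
| rex "device_type:\s*(?<device_type>[^\r\n]+)"
| rex "last_ip_address:\s*(?<last_ip_address>[^\r\n]+)"
| rex "total_ram:\s*(?<total_ram>\d+)"
| rex max_match=0 "(?<mac_addresses>(?:[0-9A-F]{2}:){5}[0-9A-F]{2})"
```

The last line uses `max_match=0` so that `mac_addresses` becomes a multivalue field holding every MAC address found in the event.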
Hi, could you please help with parsing this JSON data into a table?

```
{
  "list_element": [
    { "element": "{\"var1\":\"1.1.8.8:443\",\"var2\":\"1188\"}" },
    { "element": "{\"var1\":\"8.8.1.1:443\",\"var2\":\"8811\"}" },
    { "element": "{\"var1\":\"1.2.3.4:443\",\"var2\":\"1234\"}" }
  ]
}
```

The result should look like:

var1          var2
1.1.8.8:443   1188
8.8.1.1:443   8811
1.2.3.4:443   1234
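Each `element` value is itself a JSON string embedded inside the outer JSON, so it takes two spath passes: one to pull out the `element` array as a multivalue field, and a second (after mvexpand) to parse each embedded string. A sketch:

```spl
... base search ...
| spath path=list_element{}.element output=element
| mvexpand element
| spath input=element
| table var1, var2
```

`spath input=element` tells Splunk to parse the inner JSON string in the `element` field rather than `_raw`, which is what surfaces `var1` and `var2` as columns.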
Hi guys. I'm currently working to fix all the "real-time" jobs running at my company, and I came across one job whose original parent I can't find. It runs every 10-15 minutes and consumes resources. I was hoping you could assist me with finding the original parent of this job. This is what I have:

- Owner
- The query itself
- Sharing (global)
- The job inspect page
- The app it's running in (Enterprise Security)

Thank you for your time!
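One way to track down the saved search behind a real-time job is the saved-searches REST endpoint: real-time schedules have a dispatch earliest time that begins with `rt`. A sketch to run on the search head (field names as I recall them; adjust if your version differs):

```spl
| rest /servicesNS/-/-/saved/searches
| search dispatch.earliest_time=rt*
| table title, eai:acl.app, eai:acl.owner, cron_schedule, search
```

Matching the `search` column against the query you already have should identify the parent saved search, along with the app and owner that control it.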
Hey Splunk Community!

I'm working on a dashboard (for incident response) in Splunk, but need some assistance initially with queries for the following:

- Computer or host, showing whether it is malicious
- Logon info for other machines that a user has logged in to for the day
- IP address of the machine, location or country, and whether it is a VM or a laptop
- Active Directory info on the user
- Remote machine name, to find out what machine was used to remote into the server in the last incident

I need this soon; any help would be appreciated. Thanks very much!
After installing Splunk using the generated Ansible playbook, the service can't start. There is no error message and I cannot find any logs. How can I troubleshoot this?
I have an idea of what logs can be collected by the Universal Forwarder (for example: Application, Security, System, forwarded event logs, performance monitoring). But I want to know exactly what it collects in each of those categories.
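The forwarder only collects what its enabled inputs.conf stanzas tell it to; each Windows event log channel or perfmon object is a separate stanza, so the list of enabled stanzas is the exact answer to "what does it collect". A sketch of typical event log stanzas (as shipped, for example, with the Splunk Add-on for Windows):

```ini
# inputs.conf on the Universal Forwarder
[WinEventLog://Application]
disabled = 0

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0
```

Running `splunk btool inputs list --debug` on the forwarder shows the merged, effective set of input stanzas and which file each one comes from.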
Good morning,

I have been working on a task to gather the free disk space of the servers we have the Splunk Universal Forwarder on. I am down to getting data from all servers through the perfmon data. I have it for all servers but two; one of these is the Splunk deployment server (we're on Splunk Cloud). I have checked all the apps which might have an inputs.conf with stanzas referring to source="Perfmon:Free Disk Space", and I've looked in /etc/system/local on the deployment server. All the stanzas have disabled set to 0, and I've restarted Splunk after each change. I'm at a loss!

Thank you in advance.
Scott
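For comparison, the stanza the Splunk Add-on for Windows typically ships for this input looks like the following (attribute values from memory; verify against the add-on on a working host). Checking the effective merged config on the two silent hosts with `splunk btool inputs list perfmon --debug` can also reveal an overriding stanza that the file-by-file search missed:

```ini
# inputs.conf (e.g. Splunk_TA_windows/local)
[perfmon://Free Disk Space]
object = LogicalDisk
counters = Free Megabytes; % Free Space
instances = *
interval = 300
disabled = 0
```

If btool shows the stanza enabled, the forwarder's splunkd.log on those hosts is the next place to look for perfmon input errors.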
Hello,

The deployment-client setting is required on the remote Universal Forwarder, and then I want to restart the Universal Forwarder. I know the ID/PW. Can I set it up from my deployment server?
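The deployment client is configured on the forwarder itself, not on the deployment server; the deployment server can only push apps to forwarders that are already phoning home. A minimal sketch (the host name is a placeholder, and 8089 is the default management port):

```ini
# deploymentclient.conf on the Universal Forwarder
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089
```

After placing this file on the forwarder, restart it (`splunk restart`) so it registers with the deployment server.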
I'm creating an alert that will search for two separate string values with an OR condition inside the search. Is there a way to set up the alert condition to fire for "if the second event is not found within 5 minutes of the first event, fire the alert"? The events can happen at any time within a 6-hour window, so having it search every 5 minutes for a count under 2 would fire alerts constantly.
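One approach is to pair the two events inside the search itself and alert only on unpaired first events: `transaction` with `keepevicted=true` keeps started-but-never-closed pairs, which can then be filtered with `closed_txn=0`. A sketch; the two string values and the shared `correlation_id` field are placeholders for whatever links a first event to its second:

```spl
("FIRST_STRING" OR "SECOND_STRING")
| eval marker = if(searchmatch("FIRST_STRING"), "start", "end")
| transaction correlation_id startswith=eval(marker=="start") endswith=eval(marker=="end") maxspan=5m keepevicted=true
| search closed_txn=0 marker=start
```

Any result from this search is a first event whose partner never arrived within 5 minutes, so the alert condition can simply be "number of results greater than 0", scheduled at whatever cadence fits the 6-hour window.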
We have an SHC cluster on Enterprise version 7.3.5 with ITSI 4.4. Recently we tried to upgrade our ITSI from 4.4.x to 4.7.0 and it failed. It was a generic error message and support was not able to find the root cause. So we are now trying to build a new SHC in parallel (same version as the original one) and connect it to the same indexer cluster. We want to make sure the original cluster keeps working until we are sure that the new SHC is an exact replica.

1. Is there an issue with having a new, different SHC connected to the same indexer cluster?
2. How do we migrate all the data from one SHC to another, including the ITSI correlation searches, dashboards, lookup tables, entities, etc.?
3. Upon migration, can we upgrade ITSI to 4.7 and above on the new SHC?