All Topics


Hi, I am creating a dashboard using the Dashboard Studio template, and I previously developed a custom Splunk visualization. How can I add my own visualization in Dashboard Studio? By default, I can only choose from the visualizations that Splunk provides.
Hi, does anyone out there use archiving software to monitor, report on, and manage frozen bucket storage in an on-prem archive storage location? I have found https://www.conducivesi.com/splunk-archiver-vsl to fit our requirements, but we are interested in looking at other options. Thanks!
The Microsoft Teams Add-On for Splunk shows a Microsoft Teams tab for the Microsoft 365 App for Splunk; however, I do not see that tab in the app. Has it been removed, or am I missing something? Possibly something is available on-prem but not in Splunk Cloud?
I am looking to produce stats for the 5 minutes before and after each hour across an entire day/time period. The search below works, but it still breaks the results into separate 5-minute chunks as each window crosses the top of the hour. Is there a better way to search?

index=main (earliest=01/08/2024:08:55:00 latest=01/08/2024:09:05:00) OR (earliest=01/08/2024:09:55:00 latest=01/08/2024:10:05:00) OR (earliest=01/08/2024:10:55:00 latest=01/08/2024:11:05:00)
| bin _time span=10m
| stats count by _time
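One common trick for windows that straddle a boundary is to shift every timestamp by half the window width before bucketing, so both sides of the hour fall into the same bucket; in SPL that would be something like "| eval _time=_time+300 | bin _time span=1h | stats count by _time" (an untested sketch, not a verified answer). The Python snippet below demonstrates why the shift works; the function name is just for illustration.

```python
from datetime import datetime, timedelta

def hour_window_bucket(ts: datetime) -> datetime:
    """Map a timestamp in a :55-:05 window to the top of the hour it straddles.
    Shifting forward by 5 minutes moves e.g. 08:55-09:05 into 09:00-09:10,
    which then floors cleanly to a single 09:00 bucket."""
    shifted = ts + timedelta(minutes=5)
    return shifted.replace(minute=0, second=0, microsecond=0)

# Events on either side of 09:00 land in the same bucket:
before = datetime(2024, 1, 8, 8, 57)
after = datetime(2024, 1, 8, 9, 3)
assert hour_window_bucket(before) == hour_window_bucket(after) == datetime(2024, 1, 8, 9, 0)
```

The same arithmetic keeps each 10-minute window in one bucket no matter how many hours the search spans.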
This started out as a question, but is now just an FYI. Similar to this post, this week I received an old vulnerability notice from Tenable about my Splunk instance. We'd previously remediated this issue, so it was odd that it suddenly showed up again.

Vulnerability details:
https://packetstormsecurity.com/files/144879/Splunk-6.6.x-Local-Privilege-Escalation.html
https://advisory.splunk.com/advisories/SP-CAAAP3M?301=/view/SP-CAAAP3M
https://www.tenable.com/plugins/nessus/104498

The details in the articles are light, except for pointing to the directions here for running Splunk as non-root: https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/RunSplunkasadifferentornon-rootuser

Tenable also doesn't give details about exactly what it saw; it just says, "The current configuration of the host running Splunk was found to be vulnerable to a local privilege escalation vulnerability."

My OS is RHEL 7.x. I'm launching Splunk using systemd with a non-root user, and I have no init.d-related files for Splunk. My understanding is that launching with systemd eliminates the issue, since this way Splunk never starts with root credentials anyway. Per Splunk's own advisory, a Splunk system is vulnerable if it satisfies one of the following conditions:
a. A Splunk init script was created via $SPLUNK_HOME/bin/splunk enable boot-start -user on Splunk 6.1.x or later.
b. A line with SPLUNK_OS_USER= exists in $SPLUNK_HOME/etc/splunk-launch.conf

In my case, this is an old server, and at one point we did run the boot-start command, which set the SPLUNK_OS_USER line in $SPLUNK_HOME/etc/splunk-launch.conf. Although we had commented out that line, the Tenable regex is apparently broken and doesn't realize the line was disabled with a hash. Removing the line entirely made Tenable stop reporting the vulnerability. I assume their regex was only looking for "SPLUNK_OS_USER=<something>", so it missed the hash.

Anyway, hope this helps someone.
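The hash-comment miss described above is easy to reproduce. The patterns below are guesses at what the scanner plausibly does (Tenable does not publish its check), but they show why an unanchored match flags a commented-out line while an anchored one does not:

```python
import re

commented = "# SPLUNK_OS_USER=splunk"
active = "SPLUNK_OS_USER=splunk"

# An unanchored pattern (plausibly what the scanner uses) matches
# the setting even when the line is disabled with a hash:
loose = re.compile(r"SPLUNK_OS_USER=\S+")
assert loose.search(commented) is not None
assert loose.search(active) is not None

# Anchoring to the start of the line (allowing only leading whitespace)
# correctly skips the commented-out setting:
strict = re.compile(r"^\s*SPLUNK_OS_USER=\S+", re.MULTILINE)
assert strict.search(commented) is None
assert strict.search(active) is not None
```

This matches the observed behavior: commenting the line out didn't satisfy the scan, but deleting it did.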
I need to migrate our cluster master to a new machine. It currently has these roles:
Cluster Master
Deployment Server
Indexer
License Master
Search Head
SHC Deployer

I already migrated the License Master role to the new server, and it's working fine. I've been trying to follow the documentation here: https://docs.splunk.com/Documentation/Splunk/8.2.2/Indexer/Handlemanagernodefailure

From what I gather, I need to copy all the files in /opt/splunk/etc/deployment-apps, /opt/splunk/etc/shcluster, and /opt/splunk/etc/master-apps, plus anything in /opt/splunk/etc/system/local. Then add the passwords in plain text to server.conf in the local folder, restart Splunk on the new host, and point all peers and search heads to the new master in their respective local server.conf files. Is there anything else that needs to be done, or would this take care of switching the cluster master entirely? And is there a specific order in which to do things?
Hello, we set up HEC HTTP inputs for several flows of data with related tokens, and we added the ACK feature to this configuration (following https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/AboutHECIDXAck). We run a distributed infrastructure: 1 search head, two indexers (no cluster). All was OK with HEC, but after some time we got our first error event:

ERROR HttpInputDataHandler [2576842 HttpDedicatedIoThread-0] - Failed processing http input, token name=XXXX [...] reply=9, events_processed=0
INFO HttpInputDataHandler [2576844 HttpDedicatedIoThread-2] - HttpInputAckService not in healthy state. The maximum number of ACKed requests pending query has been reached.

The server busy error (reply=9) leads to unavailability of HEC, but only for the token(s) where the maximum number of ACKed requests pending query has been reached. Restarting the indexer is enough to get rid of the problem, but by then many logs have been lost. We did some searching and tried to customize some settings, but we only succeeded in delaying the 'server busy' problem (from 1 week to 1 month). Has anyone experienced the same problem? How can we avoid these pending-query counters increasing? Thanks a lot for any help.

etc/system/local/limits.conf
[http_input]
# The max number of ACK channels.
max_number_of_ack_channel = 1000000
# The max number of acked requests pending query.
max_number_of_acked_requests_pending_query = 10000000
# The max number of acked requests pending query per ACK channel.
max_number_of_acked_requests_pending_query_per_ack_channel = 4000000

etc/system/local/server.conf
[queue=parsingQueue]
maxSize = 10MB
maxEventSize = 20MB
maxIdleTime = 400
channel_cookie = AppGwAffinity (this one because we are using a load balancer, so the cookie is also set on the LB)
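One thing worth checking: the pending-query counter tracks ackIDs that the server has handed out but that no client has ever queried. If senders request acknowledgments (useAck plus a channel) but never poll the ack-status endpoint, the counter only grows until the limit is hit, and raising the limits just delays the failure. A minimal sketch of the client-side poll is below; the host, token, and channel GUID are placeholders, and the request shape follows the HEC indexer-acknowledgment documentation:

```python
import json

HEC_HOST = "https://indexer.example.com:8088"  # hypothetical indexer host

def build_ack_request(token: str, channel: str, ack_ids: list):
    """Build the URL, headers, and JSON body for a HEC ack-status query
    (POST /services/collector/ack). Clients that request ackIDs should
    poll this endpoint so the server can release the ack state; otherwise
    the pending-query counter on the indexer only grows."""
    url = f"{HEC_HOST}/services/collector/ack?channel={channel}"
    headers = {"Authorization": f"Splunk {token}"}
    body = json.dumps({"acks": ack_ids})
    return url, headers, body

# Example: query the status of ackIDs 1, 3, and 4 on one channel.
url, headers, body = build_ack_request(
    "XXXX", "11111111-2222-3333-4444-555555555555", [1, 3, 4]
)
# The response maps each ackID to true/false, e.g. {"acks": {"1": true, ...}}.
```

If your senders cannot poll, it may be simpler to disable useAck for those tokens rather than keep raising the limits.conf ceilings.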
Hi Team, we are running Splunk v9.1.1 and need to upgrade the PCI app from v4.0.0 to v5.3.0. I am trying to find out the upgrade path, i.e., which version it has to be on before it can be upgraded to 5.3.0.
Hi Team, hope this finds all well. I am trying to create an alert search query, and I need to build the Splunk URL as a dynamic value. Here is my search query:

index=idx-cloud-azure "c899b9d3-bf20-4fd6-8b31-60aa05a14caa" metricName="CpuPercentage"
| eval CPU_Percent=round((average/maximum)*100,2)
| where CPU_Percent > 85
| stats earliest(_time) AS early_time latest(_time) AS late_time latest(CPU_Percent) AS CPU_Percent by amdl_ResourceName
| eval InstanceName="GSASMonitoring.High.CPU.Percentage"
| lookup Stores_IncidentAssignmentGroup_Reference InstanceName
| eval Minutes=(threshold/60)
| where Enabled=1
| eval short_description="GSAS App Service Plan High CPU", comments="GSAS Monitoring: High CPU Percentage ".CPU_Percent." has been recorded"
```splunk url=""```
| eval key=InstanceName."-".amdl_ResourceName
| lookup Stores_SNOWIntegration_IncidentTracker _key as key OUTPUT _time as last_incident_time
| eval last_incident_time=coalesce(last_incident_time,0)
| where (late_time > last_incident_time + threshold)
| join type=left key
    [| inputlookup Stores_OpenIncidents
    | rex field=correlation_id "(?<key>(.*))(?=\_\w+\-?\w+\_?)"]
| where ISNULL(dv_state)
| eval correlation_id=coalesce(correlation_id,key."_".late_time)
| rename key as _key
| table short_description comments InstanceName category subcategory contact_type assignment_group impact urgency correlation_id account _key location

And here is the URL of the entire search that I am trying to make dynamic for line no. 11:
https://tjxprod.splunkcloud.com/en-US/app/stores/search?dispatch.sample_ratio=1&display.page.search.mode=verbose&q=search%20index%3Didx-cloud-azure%20%22c899b9d3-bf20-4fd6-8b31-60aa05a14caa%22%20metricName%3D%22CpuPercentage%22%0A%7C%20eval%20CPU_Percent%3Dround((average%2Fmaximum)*100%2C2)%0A%7C%20where%20CPU_Percent%20%3E%2085%0A%7C%20stats%20earliest(_time)%20AS%20early_time%20latest(_time)%20AS%20late_time%20latest(CPU_Percent)%20AS%20CPU_Percent%20by%20amdl_ResourceName%0A%7C%20eval%20InstanceName%3D%22GSASMonitoring.High.CPU.Percentage%22%0A%7C%20lookup%20Stores_IncidentAssignmentGroup_Reference%20InstanceName%0A%7C%20eval%20Minutes%3D(threshold%2F60)%0A%7C%20where%20Enabled%3D1%0A%7C%20eval%20short_description%3D%22GSAS%20App%20Service%20Plan%20High%20CPU%22%2C%0A%20%20%20%20%20%20%20comments%3D%22GSAS%20Monitoring%3A%20High%20CPU%20Percentage%20%22.CPU_Percent.%20%22%20has%20been%20recorded%22%0A%7C%20eval%20key%3DInstanceName.%22-%22.amdl_ResourceName%0A%7C%20lookup%20Stores_SNOWIntegration_IncidentTracker%20_key%20as%20key%20OUTPUT%20_time%20as%20last_incident_time%0A%7C%20eval%20last_incident_time%3Dcoalesce(last_incident_time%2C0)%0A%7C%20where%20(late_time%20%3E%20last_incident_time%20%2B%20threshold)%0A%7C%20join%20type%3Dleft%20key%20%0A%20%20%20%20%5B%7C%20inputlookup%20Stores_OpenIncidents%20%0A%20%20%20%20%7C%20rex%20field%3Dcorrelation_id%20%22(%3F%3Ckey%3E(.*))(%3F%3D%5C_%5Cw%2B%5C-%3F%5Cw%2B%5C_%3F)%22%5D%20%0A%7C%20where%20ISNULL(dv_state)%0A%7C%20eval%20correlation_id%3Dcoalesce(correlation_id%2Ckey.%22_%22.late_time)%20%0A%7C%20rename%20key%20as%20_key%0A%7C%20table%20short_description%20comments%20InstanceName%20category%20subcategory%20contact_type%20assignment_group%20impact%20urgency%20correlation_id%20account%20_key%20location&earliest=-60m%40m&latest=now&display.page.search.tab=statistics&display.general.type=statistics&sid=1704721919.326369_52C57BD9-5296-4397-B370-BF36A375A0A5
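The long URL above is just the base search path plus a percent-encoded q= parameter and a time range, so it can be rebuilt programmatically rather than copied from the browser. A sketch of that encoding in Python (the base URL is taken from the post; the function name and defaults are illustrative):

```python
from urllib.parse import quote

BASE = "https://tjxprod.splunkcloud.com/en-US/app/stores/search"  # from the post

def search_url(spl: str, earliest: str = "-60m@m", latest: str = "now") -> str:
    """Percent-encode an SPL string into a search deep link.
    quote() with safe='' encodes '=', '"', and spaces the same way the
    browser-copied URL does (%3D, %22, %20)."""
    q = quote("search " + spl, safe="")
    return f"{BASE}?q={q}&earliest={quote(earliest, safe='')}&latest={latest}"

url = search_url('index=idx-cloud-azure metricName="CpuPercentage"')
```

Inside the alert itself, the same idea applies: build the link with string concatenation in an eval (e.g. an eval that concatenates the base URL with an encoded query), then reference that field in the alert action. The exact eval depends on which pieces of the query need to vary per result.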
Hi, my employer uses Splunk Enterprise v9.1.2 running on-prem. We recently enabled SSO with Azure. After enabling SSO, we noticed that authentication to the REST API no longer worked with PAT tokens or username/password. I created an authentication extension script using the example SAML_script_azure.py script. I implemented the getUserInfo() function, which has allowed users to authenticate to the REST API and CLI commands with PAT tokens. However, I have been unable to make username/password authentication work with the REST API or CLI since I enabled SSO. I tried adding a login() function to my authentication extension script, but it does not work. The option "Allow Token Based Authentication Only" is set to false. The login() function is not called when a user sends a request to the API with username/password, as in this example:

curl --location 'https://mysplunkserver.company.com:8089/services/search/jobs?output_mode=json' --header 'Content-Type: text/plain' --data search="search index=main | head 1" -u me

These are the documentation pages I have been referencing:
https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/ConfigureauthextensionsforSAMLtokens
https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/Createtheauthenticationscript

Is it possible to use username/password for API and CLI authentication with SSO enabled?
Hello all, I need to fetch the dates in the past 7 days where the event count is less than the average event count. I used the below SPL:

|tstats count where index=index_name sourcetype=xxx BY _time span=1d
|eventstats avg(count) AS avg_count

However, when no events are ingested on a particular day, the result skips that date; that is, it does not return dates with an event count of zero. For example, it skips the zero-count rows in the table below:

_time       count  avg_count
2024-01-01  0      240
2024-01-02  240    240
2024-01-03  0      240
2024-01-04  0      240
2024-01-05  240    240
2024-01-06  240    240
2024-01-07  0      240

And gives the below as the result:

_time       count  avg_count
2024-01-02  240    240
2024-01-05  240    240
2024-01-06  240    240

Thus, I need your guidance to resolve this problem. Thanking you, Taruchit
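The underlying issue is that stats-style bucketing only emits buckets that contain events, so empty days never appear and also never contribute to the average. The fix is to fill the missing days with zero before averaging (in SPL, timechart-style commands or an explicit fill step are the usual route). The Python sketch below shows the gap-fill logic on the post's data; names are illustrative:

```python
from datetime import date, timedelta

# Sparse results as returned by the bucketed search: empty days are absent.
counts = {date(2024, 1, 2): 240, date(2024, 1, 5): 240, date(2024, 1, 6): 240}

def fill_days(counts, start, end):
    """Return one (day, count) pair per day in [start, end],
    inserting 0 for days missing from the sparse results."""
    n_days = (end - start).days + 1
    return [(start + timedelta(d), counts.get(start + timedelta(d), 0))
            for d in range(n_days)]

filled = fill_days(counts, date(2024, 1, 1), date(2024, 1, 7))
# With zero days included, the average reflects the full week:
avg = sum(c for _, c in filled) / len(filled)
days_below_avg = [d for d, c in filled if c < avg]
```

Note that filling zeros also changes the average itself (720/7 ≈ 102.9 here, not 240), which may or may not be what you want; decide whether "average" should include the empty days before comparing.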
Hi, we're currently deploying our internal Splunk instance and are looking for a way to monitor the data sources that have logged to our instance. I saw that there was previously a Data Summary button in the Search & Reporting app, but for some reason it's not showing up on our instance. Does anyone know if it got removed or moved somewhere else? Thank you.
Hi, Splunk hasn't captured the Event ID 4743 events (indicating computer account deletions) that occurred yesterday at 2 pm. Where should we investigate to determine the root cause? Thanks
Hi Team, I am looking to configure custom command execution, such as collecting the output of ps -ef or an MQ queue count. Can someone please help with how to set up monitoring for this? The commands I want to configure are normal Linux commands that are executed on the server via PuTTY, such as "ps -ef | grep -i otel" and others.
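The usual pattern for this is a scripted input: a script deployed under an app's bin directory that prints the command output to stdout, which Splunk then indexes on a schedule (configured via a script:// stanza in inputs.conf with an interval). A minimal sketch, assuming a Python script on the forwarder; the function name and filtering are illustrative:

```python
import subprocess

def run_check(cmd):
    """Run a command and return its stdout. A Splunk scripted input
    simply prints to stdout; each printed line becomes event data."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=False)
    return result.stdout

if __name__ == "__main__":
    # Equivalent of `ps -ef | grep -i otel`, with the grep done in Python
    # so the scripted input doesn't need a shell pipeline:
    for line in run_check(["ps", "-ef"]).splitlines():
        if "otel" in line.lower():
            print(line)
```

The same wrapper works for the MQ query-count command by swapping in that command's argument list. For one-off commands, running them interactively over PuTTY and forwarding a log file is an alternative, but a scripted input gives you the scheduling for free.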
Hello, Splunkers! In our current test environment, System A functions as the license master, while System B operates as a slave using System A's license. Unfortunately, System B lacks direct access to System A's master server. When executing the query below on System B, it yields the message "license usage logging not available for slave licensing instances" in the search results:

index=_internal source="*license_usage.log"

Is there a method to check license usage per index through a search query on the indexer in System B? Your assistance is greatly appreciated! Thank you in advance.
We are still running Enterprise 6.1, and I am unable to locate the relevant documentation. I would like to know if I can access the job.sid using Simple XML, and if so, what the syntax might be. I gather I am restricted to the <searchString> element, as <search> and <query> are not relevant to version 6.1. Any assistance would be most appreciated.
Hello Splunkers, I have an architecture-related question if someone can help with it, please. My architecture is: Log Source (Linux server) > Heavy Forwarder > Indexer. Let's say I'm onboarding a new log source. When I install a UF on my Linux server, it connects back to my deployment server and gets the app (Linux TA) and the outputs.conf app, which basically contains my heavy forwarder details. Now my question is: do I need the same Linux TA installed on my heavy forwarder and indexer too, or is it sufficient for the TA to be on the log source? Hope I have explained it well. Thanks for looking into this; I greatly appreciate your input. Regards, Moh.
My teacher gave me this task: "You need to apply at least 3 different use cases that we will change according to your scenario. Show various use cases on the dashboards you create. You can refer to sample use cases on the Internet or in the Security Essentials application on Splunk." But I don't know how to do this task. He gave us an empty Splunk server and this task. How can I create a use-case scenario? Thank you for your time.
Dear Team, how do I implement AppDynamics on an application that is running on Google Cloud? Regards, nokawif
Hi folks, I want to restore a chunk of data (Jan 2023 - Aug 2023) from a specific index. We use Splunk Cloud and Splunk's restore services.
Total size of data from Jan to Aug: >1700 GB
Our license: 800 GB per day
Will Splunk reindex that data? Should I do it in chunks? I'm aware of the limitation of 10% of total archive (I'm very new to Splunk though, so correct me). What would be the way to go?