Hello,

We set up HEC HTTP inputs for several data flows with their tokens, and we enabled the indexer acknowledgment (ACK) feature on this configuration, following https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/AboutHECIDXAck. We run a distributed infrastructure: 1 search head and two indexers (no cluster).

All was OK with HEC, but after some time we got our first error events:

```
ERROR HttpInputDataHandler [2576842 HttpDedicatedIoThread-0] - Failed processing http input, token name=XXXX [...] reply=9, events_processed=0
INFO  HttpInputDataHandler [2576844 HttpDedicatedIoThread-2] - HttpInputAckService not in healthy state. The maximum number of ACKed requests pending query has been reached.
```

The server busy error (reply=9) makes HEC unavailable, but only for the token(s) where the maximum number of ACKed requests pending query has been reached. Restarting the indexer is enough to clear the problem, but by then many logs have been lost.

We did some searching and tried to customize some settings, but we only succeeded in delaying the "server busy" problem (from 1 week to 1 month).

Has anyone experienced the same problem? How can we keep those pending-query counters from growing? Thanks a lot for any help.

etc/system/local/limits.conf:

```
[http_input]
# The max number of ACK channels.
max_number_of_ack_channel = 1000000
# The max number of acked requests pending query.
max_number_of_acked_requests_pending_query = 10000000
# The max number of acked requests pending query per ACK channel.
max_number_of_acked_requests_pending_query_per_ack_channel = 4000000
```

etc/system/local/server.conf:

```
[queue=parsingQueue]
maxSize = 10MB
maxEventSize = 20MB
maxIdleTime = 400
channel_cookie = AppGwAffinity
```

(channel_cookie is set because we are using a load balancer, so the cookie is also set on the LB.)
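For context on why the counters grow: with useACK enabled, every indexed batch stays in the "pending query" state until the sending client polls the acknowledgment endpoint for its ackId. Senders that request ackIds but never query their status will slowly exhaust max_number_of_acked_requests_pending_query. A hedged sketch of the poll a client is expected to perform (hypothetical host, channel, and token):

```
POST https://splunk.example.com:8088/services/collector/ack?channel=<channel-guid>
Authorization: Splunk <hec-token>

{"acks": [1, 2, 3]}

Reply shape (per the HEC ACK docs): {"acks": {"1": true, "2": false, ...}}
```

If your sending clients (or the load balancer's health checks) never issue this call, that would match the slow buildup you describe.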
Hi Team,

We are running Splunk v9.1.1 and need to upgrade the PCI app from v4.0.0 to v5.3.0. I am trying to find out the upgrade path, i.e., which version it has to be on before it can be upgraded to 5.3.0.
Hi Team,

Hope this finds all well. I am trying to create an alert search query and need to build the Splunk URL as a dynamic value. Here is my search query:

index=idx-cloud-azure "c899b9d3-bf20-4fd6-8b31-60aa05a14caa" metricName="CpuPercentage"
| eval CPU_Percent=round((average/maximum)*100,2)
| where CPU_Percent > 85
| stats earliest(_time) AS early_time latest(_time) AS late_time latest(CPU_Percent) AS CPU_Percent by amdl_ResourceName
| eval InstanceName="GSASMonitoring.High.CPU.Percentage"
| lookup Stores_IncidentAssignmentGroup_Reference InstanceName
| eval Minutes=(threshold/60)
| where Enabled=1
| eval short_description="GSAS App Service Plan High CPU", comments="GSAS Monitoring: High CPU Percentage ".CPU_Percent." has been recorded"
```splunk url=""```
| eval key=InstanceName."-".amdl_ResourceName
| lookup Stores_SNOWIntegration_IncidentTracker _key as key OUTPUT _time as last_incident_time
| eval last_incident_time=coalesce(last_incident_time,0)
| where (late_time > last_incident_time + threshold)
| join type=left key
    [| inputlookup Stores_OpenIncidents
    | rex field=correlation_id "(?<key>(.*))(?=\_\w+\-?\w+\_?)"]
| where ISNULL(dv_state)
| eval correlation_id=coalesce(correlation_id,key."_".late_time)
| rename key as _key
| table short_description comments InstanceName category subcategory contact_type assignment_group impact urgency correlation_id account _key location

And here is the URL of the entire search I am trying to make dynamic (for line no 11):
https://tjxprod.splunkcloud.com/en-US/app/stores/search?dispatch.sample_ratio=1&display.page.search.mode=verbose&q=search%20index%3Didx-cloud-azure%20%22c899b9d3-bf20-4fd6-8b31-60aa05a14caa%22%20metricName%3D%22CpuPercentage%22%0A%7C%20eval%20CPU_Percent%3Dround((average%2Fmaximum)*100%2C2)%0A%7C%20where%20CPU_Percent%20%3E%2085%0A%7C%20stats%20earliest(_time)%20AS%20early_time%20latest(_time)%20AS%20late_time%20latest(CPU_Percent)%20AS%20CPU_Percent%20by%20amdl_ResourceName%0A%7C%20eval%20InstanceName%3D%22GSASMonitoring.High.CPU.Percentage%22%0A%7C%20lookup%20Stores_IncidentAssignmentGroup_Reference%20InstanceName%0A%7C%20eval%20Minutes%3D(threshold%2F60)%0A%7C%20where%20Enabled%3D1%0A%7C%20eval%20short_description%3D%22GSAS%20App%20Service%20Plan%20High%20CPU%22%2C%0A%20%20%20%20%20%20%20comments%3D%22GSAS%20Monitoring%3A%20High%20CPU%20Percentage%20%22.CPU_Percent.%20%22%20has%20been%20recorded%22%0A%7C%20eval%20key%3DInstanceName.%22-%22.amdl_ResourceName%0A%7C%20lookup%20Stores_SNOWIntegration_IncidentTracker%20_key%20as%20key%20OUTPUT%20_time%20as%20last_incident_time%0A%7C%20eval%20last_incident_time%3Dcoalesce(last_incident_time%2C0)%0A%7C%20where%20(late_time%20%3E%20last_incident_time%20%2B%20threshold)%0A%7C%20join%20type%3Dleft%20key%20%0A%20%20%20%20%5B%7C%20inputlookup%20Stores_OpenIncidents%20%0A%20%20%20%20%7C%20rex%20field%3Dcorrelation_id%20%22(%3F%3Ckey%3E(.*))(%3F%3D%5C_%5Cw%2B%5C-%3F%5Cw%2B%5C_%3F)%22%5D%20%0A%7C%20where%20ISNULL(dv_state)%0A%7C%20eval%20correlation_id%3Dcoalesce(correlation_id%2Ckey.%22_%22.late_time)%20%0A%7C%20rename%20key%20as%20_key%0A%7C%20table%20short_description%20comments%20InstanceName%20category%20subcategory%20contact_type%20assignment_group%20impact%20urgency%20correlation_id%20account%20_key%20location&earliest=-60m%40m&latest=now&display.page.search.tab=statistics&display.general.type=statistics&sid=1704721919.326369_52C57BD9-5296-4397-B370-BF36A375A0A5
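One hedged way to build such a link inside the search itself is `addinfo`, which exposes the running job's own search ID as `info_sid`; the host and app path below are taken from the URL above:

```
| addinfo
| eval splunk_url = "https://tjxprod.splunkcloud.com/en-US/app/stores/search?sid=" . info_sid
| fields - info_min_time info_max_time info_search_time info_sid
```

Alternatively, if the URL is only needed in the alert action itself (email, webhook), the built-in alert token $results_link$ already points at the triggering job's results.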
Hi, my employer uses Splunk Enterprise v9.1.2, which is running on-prem. We have recently enabled SSO with Azure. After enabling SSO, we noticed that authentication to the REST API no longer worked with PAT tokens or username/password authentication methods.

I created an authentication extension script using the example SAML_script_azure.py script. I implemented the getUserInfo() function, which has allowed users to authenticate to the REST API and CLI commands with PAT tokens. However, I have been unable to make username/password authentication work with the REST API or CLI since I enabled SSO. I tried adding a login() function to my authentication extension script, but it does not work. The option "Allow Token Based Authentication Only" is set to false. The login() function is not called when a user sends a request to the API with username/password, as in this example:

```
curl --location 'https://mysplunkserver.company.com:8089/services/search/jobs?output_mode=json' \
  --header 'Content-Type: text/plain' \
  --data search="search index=main | head 1" \
  -u me
```

These are the documentation pages I have been referencing:
https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/ConfigureauthextensionsforSAMLtokens
https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/Createtheauthenticationscript

Is it possible to use username/password for API and CLI authentication with SSO enabled?
Hello All,

I need to fetch the dates in the past 7 days where the event count is lower than the average event count. I used the below SPL:

```
| tstats count where index=index_name sourcetype=xxx BY _time span=1d
| eventstats avg(count) AS avg_count
```

However, when no events are ingested on a particular day, the result skips that date; that is, it does not return the dates with an event count of zero.

For example, it skips the highlighted rows in the table below:

```
_time        count   avg_count
2024-01-01   0       240
2024-01-02   240     240
2024-01-03   0       240
2024-01-04   0       240
2024-01-05   240     240
2024-01-06   240     240
2024-01-07   0       240
```

and gives the below as the result:

```
_time        count   avg_count
2024-01-02   240     240
2024-01-05   240     240
2024-01-06   240     240
```

Thus, I need your guidance to resolve this problem.

Thanking you,
Taruchit
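Since tstats only emits time buckets that actually contain events, one common fix is to make the time axis continuous and fill the gaps with zero before computing the average. A sketch, using the same index/sourcetype placeholders as above:

```
| tstats count where index=index_name sourcetype=xxx BY _time span=1d
| timechart span=1d sum(count) AS count
| fillnull value=0 count
| eventstats avg(count) AS avg_count
| where count < avg_count
```

timechart pads every day in the search range, so days with no events appear with count=0 and are then caught by the final where clause.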
Hi,

We're currently deploying our internal Splunk instance and we're looking for a way to monitor the data sources that have logged to our instance. I saw that there was previously a Data Summary button in the Search and Reporting app, but for some reason it's not showing up on our instance. Does anyone know if it was removed or moved somewhere else?

Thank you
Hi,

Splunk hasn't captured the 4743 events (which indicate computer account deletions) that occurred yesterday at 2 pm. Where should we investigate to determine the root cause?

Thanks
Hi Team,

I am looking to configure custom command execution, like capturing the output of `ps -ef` or an MQ queue count query. Can someone please help with how to create monitoring for these? The commands I want to configure are normal Linux commands that are executed on the server over PuTTY, like `ps -ef | grep -i otel` and others.
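A common pattern for this is a scripted input: wrap the command in a small shell script inside an app and let Splunk run it on an interval. A minimal sketch, assuming a hypothetical app name, index, and paths:

```
# $SPLUNK_HOME/etc/apps/my_os_monitor/bin/check_otel.sh
#!/bin/sh
ps -ef | grep -i otel | grep -v grep

# $SPLUNK_HOME/etc/apps/my_os_monitor/default/inputs.conf
[script://./bin/check_otel.sh]
interval = 60
sourcetype = otel:process
index = os
disabled = 0
```

Each run's stdout is indexed as an event; the same pattern works for an MQ queue-depth command.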
Hello, Splunkers!

In the current test environment, System A functions as the master license server, while System B operates as the slave using System A's license. Unfortunately, System B lacks direct access to System A's master server. When executing the query below on System B, it returns the message "license usage logging not available for slave licensing instances" in the search results:

```
index=_internal source="*license_usage.log"
```

Is there a method to check license usage per index through a search query on the indexer in System B? Your assistance is greatly appreciated! Thank you in advance.
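One hedged workaround that works on any indexer, since metrics.log is local to each instance, is to estimate indexed volume per index from the per_index_thruput metrics instead of license_usage.log. This measures indexing throughput rather than exact license attribution, so treat it as an approximation:

```
index=_internal source=*metrics.log* group=per_index_thruput
| eval GB = kb/1024/1024
| timechart span=1d sum(GB) AS GB_indexed BY series
```

Here `series` holds the index name.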
We are still running Enterprise 6.1, and I am unable to locate the relevant documentation. I would like to know if I can access the job.sid using Simple XML, and if so, what the syntax might be. I gather I am restricted to the <searchString> element, as <search> and <query> are not relevant to version 6.1. Any assistance would be most appreciated.
Hello Splunkers,

I have an architecture-related question, if someone can help with it please. My architecture is: Log source (Linux server) > Heavy Forwarder > Indexer.

Let's say I'm onboarding a new log source. When I install a UF on my Linux server, it connects back to my deployment server and gets the app (Linux TA) and the outputs.conf app, which basically holds my heavy forwarder details.

Now my question is: do I need to have the same Linux TA installed on my heavy forwarder and indexer too? Or, as long as this TA is on the log source, is that sufficient?

Hope I have explained it well. Thanks for looking into this; I greatly appreciate your input.

Regards,
Moh.
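For reference, the outputs app the deployment server pushes typically contains just an outputs.conf like this (app and host names below are hypothetical):

```
# $SPLUNK_HOME/etc/apps/org_all_forwarder_outputs/default/outputs.conf
[tcpout]
defaultGroup = primary_hf

[tcpout:primary_hf]
server = hf1.example.com:9997
```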
My teacher gave me this task: "You need to apply at least 3 different use cases that we will change according to your scenario. Show various use cases on the dashboards you create. You can refer to sample use cases on the Internet or in the Security Essentials application on Splunk." But I don't know how to do this task. He gave us an empty Splunk server and this task. How can I create a use-case scenario? Thank you for your time...
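As a starting point, a classic first use case is brute-force detection: count failed logons per source and flag bursts. A sketch, assuming Windows security logs are onboarded (the index and sourcetype names are placeholders):

```
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625
| stats count AS failures BY src_ip, user
| where failures > 10
```

Two similar candidates are spikes in account lockouts (EventCode 4740) and logons outside business hours; each makes a simple dashboard panel.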
Dear Team,

How do we implement AppDynamics on an application that is running on Google Cloud?

Regards,
nokawif
Hi Folks,

I want to restore a chunk of data (Jan 2023 to Aug 2023) from a specific index. We use Splunk Cloud and Splunk's restore services.

Total size of the data from Jan to Aug: >1700 GB. Our license: 800 GB per day.

Will Splunk reindex that data? Should I do it in chunks? I'm aware of the limitation of 10% of the total archive (I'm very new to Splunk though, so correct me). What would be the way to go?
Dear All,

Can you please advise on how to create the below table for the Notable dashboard in ES? Thanks.

```
Splunk Search   User1              User2
                Pending   Closed   Pending   Closed
Rule 1
Rule 2
```
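One hedged way to get a two-level per-user/per-status layout like this in SPL is to combine owner and status into a single column name and pivot with xyseries. The `notable` macro and the field names below (rule_name, owner, status_label) are assumptions about the ES notable data in your environment:

```
`notable`
| search status_label IN ("Pending", "Closed")
| eval column = owner . " " . status_label
| stats count BY rule_name, column
| xyseries rule_name, column, count
| fillnull value=0
```

This yields one row per rule with a column per owner/status combination.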
How should I merge these two queries into one?

Query 1:

```
index="XXXX" source="XXXX"
| search "SupplierRTI_AlphaAesar"
| stats count AS "Total", count(eval(STATUS=="fail")) AS Failure
| eval FailureRate=(Failure/Total)*100
| eval SuccessRate=100-FailureRate
| table Total, SuccessRate
```

Query 2:

```
index="webmethods_prd" source="/apps/WebMethods/IntegrationServer/instances/default/logs/SupplierRTI.log"
| search "SupplierRTI_AlphaAesar"
| timechart span=1w count
```

I want a report like this; how should I form the query?
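One hedged way to get the totals and the weekly trend in a single search is to compute both aggregates inside one timechart (the source path is taken from query 2; the misspelled field name is normalized to FailureRate):

```
index="webmethods_prd" source="/apps/WebMethods/IntegrationServer/instances/default/logs/SupplierRTI.log"
| search "SupplierRTI_AlphaAesar"
| timechart span=1w count AS Total, count(eval(STATUS=="fail")) AS Failure
| eval FailureRate = round((Failure/Total)*100, 2)
| eval SuccessRate = 100 - FailureRate
```

This gives one row per week with Total, Failure, and SuccessRate in the same table.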
I am subscribed to a 3rd-party threat intelligence feed called Group-IB. I have the Group-IB app for Splunk installed on my search head. My question is in regard to tuning, as I have done very little to none.

Should I expect that the threat intelligence streaming in is being run against the events in my environment automatically? Assuming the threat intelligence is CIM compliant, should I expect Enterprise Security to create a notable event if there is a match?
Hi,

When trying to download the Enterprise Security app, I'm getting the following message: "This app restricts downloads to a defined list of users. Your user profile was not found in the list of authorized users..." What can I do to download it?
After modifying the Controller's certificate and creating a new one, I tried to start the Controller again, but it did not start, nor did the Events Service.

```
ERROR [2024-01-06 14:15:56,850] com.appdynamics.orcha.extensions.es.health.StoreNodeHealth: Failed to connect to welcomeb/2001:0:2851:782c:1451:2eab:3b33:ff4d:9080
ERROR [2024-01-06 14:16:00,870] com.appdynamics.orcha.extensions.es.health.StoreNodeHealth: Connection failure while checking for number of data nodes on the host: welcomeB
ERROR [2024-01-06 14:16:02,866] com.appdynamics.platformadmin.core.job.JobProcessor: Platform/Job [1/a6631ee1-e56a-4689-8d9a-4da5127b1e01]: Stage [es_cluster_health_stage] failed due to [Unable to check health of Events Service hosts [welcomeB] through port 9080.]
INFO [2024-01-06 14:16:12,208] com.appdynamics.platformadmin.resources.VersionResource: Found Enterprise Console version 23.9.0-10017
[... 14:17:24,850] com.appdynamics.platformadmin.es.job.stages.ESClusterHealthCheckStage: Failed to reach URL [http://welcomeB:9080/v1/store/report]
! java.net.ConnectException: Connection refused: connect
[... 14:17:38,858] com.appdynamics.platformadmin.es.job.stages.ESClusterHealthCheckStage: Failed to reach URL [http://welcomeB:9080/v1/store/report]
! java.net.ConnectException: Connection refused: connect
[... 4da5127b1e01]: Stage [es_cluster_health_stage] failed due to [Unable to check health of Events Service hosts [welcomeB] through port 9080.]
INFO [2024-01-06 14:18:12,207] com.appdynamics.platformadmin.resources.VersionResource: Found Enterprise Console version 23.9.0-10017, build
INFO [2024-01-06 14:18:43,400] com.appdynamics.orcha.modules.modules.UriExec: Sending request to: http://localhost:8090/controller/rest/serverstatus
INFO [2024-01-06 14:18:56,827] com.appdynamics.orcha.extensions.es.health.StoreNodeHealth: Connection failure while checking node health status on the host: welcomeB
ERROR [2024-01-06 14:18:56,827] com.appdynamics.orcha.extensions.es.health.StoreNodeHealth: Failed to connect to welcomeb/2001:0:2851:782c:1451:2eab:3b33:ff4d:9080
ERROR [2024-01-06 14:18:56,839] com.appdynamics.platformadmin.es.job.stages.ESClusterHealthCheckStage: Failed to reach URL [http://welcomeB:9080/v1/store/report]
! java.net.ConnectException: Connection refused: connect
```
Hello,

After upgrading from 8.2 to 9.1, I noticed a change in the nav bar affecting most of the custom apps. On the right end of the nav bar, where the app logo (file appIcon*.png from the <appname>/static folder) is displayed, the app label (which is configured in app.conf as "label" in the [ui] section) is simply not showing.

Strangely enough, for some applications, like "Search & Reporting", the text label still appears. But for the majority of the 3rd-party apps from Splunkbase, and also for my own custom apps, the label is not showing at all. (For the record: the logo icon is showing, but the text label is not.) This is very annoying.

After some investigation, it seems that it is NOT an issue of CSS styling. According to the Web Inspector in a browser, the HTML "span" element that should hold the app label is NOT populated with the value configured in app.conf/[ui]/label. The "span" element is just empty.

Why is that?

Regards,
mr
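To rule out a bad value ever reaching splunkd, it may help to compare the on-disk app.conf against what the REST layer reports. The [ui] stanza in question looks like the sketch below (app name and label are hypothetical):

```
# $SPLUNK_HOME/etc/apps/my_custom_app/default/app.conf
[ui]
is_visible = true
label = My Custom App
```

and from the search bar:

```
| rest /services/apps/local
| table title, label, version
```

If `label` comes back populated here but the nav-bar span is still empty, that points at the front end rather than the configuration.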