All Topics


Hey mates, I'm new to Splunk, and while ingesting data from my local machine into Splunk this message shows up: "The TCP output processor has paused the data flow. Forwarding to host_dest=192.XXX.X.XX inside output group default-autolb-group from host_src=MRNOOXX has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data." Kindly help me. Thank you!
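A common first check for this warning is whether the receiving indexer's queues are blocked. A minimal sketch of such a search, assuming you can search the indexer's _internal index (it relies on the standard metrics.log queue events):

```
index=_internal sourcetype=splunkd source=*metrics.log* group=queue blocked=true
| stats count by host, name
| sort - count
```

If the parsing or indexing queues on the receiver show up here, the forwarder-side warning is usually just a symptom of back-pressure from the indexer.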
Hi there, We have a standalone Splunk instance v8.2.2.1 deployed on a RHEL server which is EOL; we wish to migrate to a newer OS, Amazon Linux (AL) 2023, rather than performing an in-place upgrade. Instead of using the most recent version of Splunk Enterprise, we would like to take a more conservative approach and choose 9.0.x (we have UFs on older versions, 7.x and 8.x). Please let me know where I can download the 9.0.x version of Splunk Enterprise, as it's not here: https://www.splunk.com/en_us/download/previous-releases.html   Thanks!
Hello, Can someone confirm whether this is an official app by Microsoft or a third-party app? I want to integrate Azure WAF logs into my Splunk indexer.   Thanks and regards, Satyam
Hello, I have a table in Dashboard Studio and I want to show part of a JSON field which contains sub-objects. I am running this query:

index="stg_observability_s" AdditionalData.testName=* sourcetype=SplunkQuality AdditionalData.domain="*" AdditionalData.pipelineName="*" AdditionalData.buildId="15757128291" AdditionalData.team="*" testCategories="*" AdditionalData.status="*" AdditionalData.isFinalResult="*" AdditionalData.fullName="***" | search AdditionalData.testLog.logs{}=* | spath path="AdditionalData.testLog.logs{}" output=logs | table logs

The JSON looks flattened; I don't see the sub-objects inside. Is there a way to fix it? Thanks.
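One way to keep the sub-objects visible is to expand the multivalue array and re-parse each element with spath, roughly like the sketch below (field names are taken from the question; the final table columns are placeholders):

```
index="stg_observability_s" sourcetype=SplunkQuality AdditionalData.buildId="15757128291"
| spath path="AdditionalData.testLog.logs{}" output=logs
| mvexpand logs
| spath input=logs
| table logs *
```

spath input=logs treats each array element as its own JSON document, so its nested keys become fields instead of being flattened into one string.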
Hi everyone, We already have a Splunk Cloud environment, and on-premises we have a Splunk deployment server. However, the on-prem deployment server currently has no license — it's only used to manage forwarders and isn’t indexing any data. We now have some legacy logs stored locally that we’d like to search through without ingesting new data. For this, we’re looking to get a Splunk 0 MB license (search-only) on the deployment server. Is there any way to request or generate a 0 MB license for this use case? Thanks in advance for your help!
Hi Splunk Community, I'm currently integrating Flowmon NDR as a NetFlow data exporter to Splunk Stream, but I'm encountering a persistent issue where Splunk receives the flow data yet it's not decoded properly, and flow sets are being dropped due to missing templates. Here's the warning from the Splunk log:

```
2025-06-21 08:34:49 WARN [139703701448448] (NetflowManager/NetflowDecoder.cpp:1282) stream.NetflowReceiver - NetFlowDecoder::decodeFlow Unable to decode flow set data. No template with id 258 received for observation domain id 13000 from device 10.x.x.x. Dropping flow data set of size 328
```

Setup details:
Exporter: Flowmon
Collector: Splunk Stream
Protocol: NetFlow v9 (also tested with IPFIX)
Transport: UDP
Template resend configuration: every 4096 packets or 600 seconds

Despite verifying these settings on Flowmon, Splunk continues to report that the template ID (in this case, 258) was never received, causing all related flows to be dropped.

My questions:
1. Has anyone successfully integrated Flowmon with Splunk Stream using NetFlow v9?
2. Is there a known issue with Splunk Stream not handling templates properly from certain exporters?
3. Are there any recommended Splunk Stream configuration tweaks for handling late or infrequent templates?

Any insights, experiences, or troubleshooting tips would be greatly appreciated. Thanks in advance!
I am looking for a way to join results from two indexes based on the hostname. The main index has the hostname as just the name, and the second index has it as name.domain.com. The fields are spec.name and block. I tried to wildcard it, but the results were erratic.

index=infrastructure_reports source=nutanix_vm_host_report | fields spec.name spec.cluster_reference.name spec.resources.memory_size_mib | rename spec.name as block | join block* [ search index=syslog_main source=/var/log/messages sourcetype=linux_messages_syslog path=/tmp/jira_assets_extract.ini | fields block app_stack operating_system] | table block spec.cluster spec.resources.memory_size_mib operating_system
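Rather than wildcarding the join field, one option is to normalise the hostname in the subsearch so both sides use the short name. A rough sketch under that assumption (i.e. that the second index only ever appends a domain suffix; field names are taken from the question):

```
index=infrastructure_reports source=nutanix_vm_host_report
| rename spec.name as block
| eval block=lower(block)
| join type=left block
    [ search index=syslog_main source=/var/log/messages sourcetype=linux_messages_syslog path=/tmp/jira_assets_extract.ini
      | eval block=lower(mvindex(split(block, "."), 0))
      | fields block app_stack operating_system ]
| table block spec.cluster_reference.name spec.resources.memory_size_mib operating_system app_stack
```

split/mvindex strips everything after the first dot, and lower() guards against case mismatches. An append plus stats values(*) by block would avoid join's subsearch limits if the data volume grows.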
We have recently implemented a HF in our environment as part of ingesting Akamai logs into Splunk. We installed the Akamai add-on on the HF and are forwarding the logs to the indexers. The thing is, the data volume in Akamai is high (30k events in the last 5 minutes). Today our HF GUI is very slow and not loading at all. We tried a restart, but it's still the same. Data ingestion is still going on (checked on the SH). Not sure what caused the HF not to load. Splunkd is still running on the backend, and web.conf also seems fine. We checked with Splunk support; they reviewed the diag file and it seems fine.

Below is one of the errors I noticed in splunkd.log:

ERROR ModularInputs [10639 TcpChannelThread] - Argument validation for scheme = TA-Akamai-SIEM; killing process, because executing it took too long (over 30000 msecs.)
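To see how often the add-on's scheme validation is timing out, and whether other modular or scripted inputs are struggling on the same host, a generic diagnostic search along these lines might help (the host filter is a placeholder):

```
index=_internal source=*splunkd.log* host=<your_hf> log_level IN (ERROR, WARN) component IN (ModularInputs, ExecProcessor)
| stats count by component, log_level
| sort - count
```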
I am trying to fetch metric values for the infrastructure I am monitoring using REST APIs. So far, all the APIs I have tried only give me metric metadata, not the actual values of the metrics. Can someone help me with the values API?
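If this is about a Splunk metrics index (rather than, say, Splunk Observability Cloud or AppDynamics), the catalog endpoints only return metadata; actual datapoints come back from an mstats search, which can itself be run over the REST search endpoint (/services/search/jobs/export). A minimal sketch, with the index name as a placeholder:

```
| mstats avg(_value) WHERE index=your_metrics_index metric_name=* span=5m BY metric_name
```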
Hello, Can anyone please provide me with a query which lists all forwarders that have not sent data over the last 30 days?   Thank you
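A possible starting point, assuming the forwarders connect to your indexers over the standard tcpin path (the hostname field comes from the tcpin_connections events in metrics.log):

```
index=_internal source=*metrics.log* group=tcpin_connections earliest=-60d
| stats latest(_time) as last_seen by hostname
| where last_seen < relative_time(now(), "-30d")
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
```

This only lists forwarders that connected at least once in the 60-day window; anything silent for longer than that would need a lookup of known forwarders to compare against.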
Hi, I am using mcollect to collect data from certain metrics into another metric index. I have created the new metric index on the search head and also on the indexer clusters. The command looks something like this, but whenever I run it, I get the error 'No results to summary index'.

| mpreview index=metrics_old target_per_timeseries=5 filter="metric_name IN ( process.java.gc.collections) env IN (server_name:port)" | mcollect index=metrics_new

Is there something I'm doing wrong when using the mcollect command? Please advise. Thanks in advance.   Regards, Pravin
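One pattern that tends to avoid the 'No results to summary index' message is to produce explicit metric_name/_value pairs with mstats first and then hand those to mcollect. A sketch, assuming env is the dimension you want to keep (adjust the BY clause to your real dimensions):

```
| mstats avg(process.java.gc.collections) as _value WHERE index=metrics_old span=1m BY env
| eval metric_name="process.java.gc.collections"
| mcollect index=metrics_new
```

mcollect needs each result row to carry a metric_name and a numeric measurement; mpreview output is meant for inspecting raw datapoints and may not arrive in that shape.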
Hello Team, We are on Linux, and post upgrade to Splunk 9.4.3 the KV store is failing. I have followed a few recommendations given in the community for this issue, but they are not working. Below is the mongod.log:

SSL peer certificate validation failed: self signed certificate in certificate chain
2025-06-20T08:09:03.925Z I NETWORK [conn638] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: self signed certificate in certificate chain. Ending connection from 127.0.0.1:54188 (connection id: 638)

This error can be bypassed if we add the stanza below in server.conf, though it is only a workaround:

enableSplunkdSSL = false

Any other input is appreciated.
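Purely as a sketch of the non-workaround direction (the attribute names are standard server.conf settings, but the paths are assumptions and this is not a verified fix): mongod failing validation of the splunkd certificate chain usually points at the CA that signed the serverCert not being present in sslRootCAPath.

```
# server.conf -- untested sketch; replace the paths with your real cert and CA chain
[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/splunkd_server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca_chain.pem
```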
Hello there! I am currently managing a Splunk Enterprise clustered environment, where I have implemented a scheduled search that runs every 5 minutes to maintain and update two CSV lookup files. These lookup files are currently stored in the designated lookups directory on the Search Head. My objective is to develop a custom application using the Splunk Add-on Builder, which will incorporate a Python script that will be executed on the Heavy Forwarder. This script requires access to the updated lookup data to function properly. However, due to the clustered nature of my environment, directly accessing these CSV files from the filesystem through the script is not an option. Ideally, indexers should also have access to the same lookup data as both SH and HF. Are there any potential methods or best practices for establishing a reliable mechanism to push or synchronize these lookup files from the SH to the HF (and Indexers, if possible)? Perhaps there are some recommended approaches or established methodologies to achieve reliable sync of those lookup files in a clustered environment that I haven’t found?
Hello, I need to give certain users access to _internal but only allow them to see certain hosts. I planned to do this by adding a new role, giving it access to the index, and then limiting it to those hosts in the search filter section. This works; however, it applies the same search filter to all the other indexes they can access, even though that access isn't inherited and is granted by a separate role. Is there an easier/better way of doing this?
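One approach people use (sketched below, untested, with placeholder role and host names) is to scope the restriction to the index inside the srchFilter expression itself, so it only bites when a search touches _internal:

```
# authorize.conf -- sketch only; hostA/hostB and the role name are placeholders
[role_internal_limited]
srchIndexesAllowed = _internal
srchFilter = (index!=_internal) OR host=hostA OR host=hostB
```

Because search filters are combined across all of a user's roles, it's worth testing how this interacts with their other roles before rolling it out.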
I have a live service monitoring dashboard, created in Splunk Cloud using Dashboard Studio (JSON). Is there any possibility to play an audio sound if there is an abnormality in any of the services on the Studio dashboard? If it's possible, can anyone help with how to achieve this?
Hello, I have 2 separate Splunk searches, as below. One is the "v1 endpoint" and the other is the "v2 endpoint".

v1 endpoint: index="abc" "usr*" organizationId=xxxx "`DLQuery`DLQuery`POST`"
v2 endpoint: index="abc" "usr*" organizationId=xxxx "DLQuery" "DLSqlQueryV2"

I want to create 1 single Splunk search which will give me the v1 and v2 counts over a span using the timechart function. How do we combine them to achieve this output? Thanks, bmer
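A sketch of one way to do it in a single search, using the literal strings from the two queries and an assumed 1-hour span:

```
index="abc" "usr*" organizationId=xxxx ("`DLQuery`DLQuery`POST`" OR ("DLQuery" "DLSqlQueryV2"))
| eval endpoint=if(searchmatch("DLSqlQueryV2"), "v2", "v1")
| timechart span=1h count by endpoint
```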
Hello, We deployed a new Splunk cluster containing a Cluster Manager, 3x SHC members, and 6x indexers. The cluster has hundreds of vCPUs across the SHC and indexers, but after installing Enterprise Security 7.x we are seeing hundreds of skipped searches, specifically:

The maximum number of concurrent historical scheduled searches on an instance or cluster reached
The maximum number of concurrent auto-summarization searches reached

Logs indicate the searches seem to be getting skipped on the CM (which only has 12 CPU cores). We followed the documentation to install ES on a distributed cluster: Install Splunk Enterprise Security in a search head cluster environment | Splunk Docs (we used the CM, which is our deployer, to push ES to the SHC via the shcluster apps folder). Note: some summarization searches are running on the SHC members, but the majority seem to be running on the CM. Would appreciate any ideas, as this has me stumped!
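To confirm where the skips are happening and why, a quick look at the scheduler logs might help (standard _internal fields; adjust the time range as needed):

```
index=_internal sourcetype=scheduler status=skipped earliest=-24h
| stats count by host, app, reason
| sort - count
```

If the host column is dominated by the CM, the next thing to check is which apps' scheduled searches ended up deployed on the CM itself rather than on the SHC.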
We have a stand-alone Splunk instance in a closed area. We had to roll back the server to a snapshot, and now the clients only phone home when we restart the Splunk server. I've looked at the splunkd log and the phonehome log, and checked outputs.conf. I've run telnet to server ports 8089 and 9997 from the clients, and the ports are open and listening. Any help would be appreciated. We are on version 9.3.1.
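To see which clients have actually phoned home since the rollback, and when, the deployment server's splunkd_access log can be queried; a rough sketch, assuming default access logging:

```
index=_internal sourcetype=splunkd_access phonehome
| stats latest(_time) as last_phonehome by clientip
| eval last_phonehome=strftime(last_phonehome, "%Y-%m-%d %H:%M:%S")
```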
Hi Splunk Community, We’re currently onboarding SUSE Linux (SLES/openSUSE) logs into Splunk Enterprise Security (ES) and would appreciate some input. Specifically, I’m looking to understand:

What log files are most relevant for SUSE Linux when it comes to security-focused use cases in Splunk ES (e.g., authentication, audit, change tracking, endpoint monitoring)?
How do SUSE Linux log paths and formats differ from standard Linux distributions like RHEL, CentOS, or Ubuntu?
Are there any known configurations or tuning steps required (e.g., for /var/log/secure, auditd, or firewall logs) to ensure Splunk ES use cases are fully supported?

If anyone has experience with Splunk ES and SUSE integration, I’d love to hear your recommendations on best practices or common challenges. Thanks in advance!
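As a rough starting point for the forwarder inputs (paths assume a default SLES/openSUSE layout, the index name is a placeholder, and note that SUSE typically writes auth messages to /var/log/messages rather than /var/log/secure unless you configure a separate auth log):

```
# inputs.conf on the SUSE universal forwarder -- sketch only
[monitor:///var/log/messages]
sourcetype = syslog
index = os_linux

[monitor:///var/log/audit/audit.log]
sourcetype = linux_audit
index = os_linux
```

Pairing these with the Splunk Add-on for Unix and Linux generally provides the CIM mappings that ES use cases expect for these sourcetypes.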
Hi everyone, What's the value of a token if it is not set in an input? An empty string, null(), or something else? I was trying to do something like | eval user=if(isnull("$user_token$"), user, "$user_token$"), but it doesn't work.
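For what it's worth, an unset token generally isn't substituted as an empty string or null at all: a search that references it typically just doesn't run until the token has a value, which is why the isnull() test never fires. A common workaround, sketched here under the assumption that you give the input a default value of the literal *, is to test for that sentinel instead:

```
| eval user=if("$user_token$"=="*", user, "$user_token$")
```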