All Topics


I am using a Heavy Forwarder to forward logs to Splunk Cloud. After I installed splunkclouduf.spl, splunkd shows AutoLoadBalancedConnectionStrategy making a cooked connection to xxx.xxx.xxx.xxx:9997. May I know whether the IP addresses of the Splunk Cloud indexers and search heads are fixed or can change? And for the firewall rule, should I allow the traffic by IP address or by DNS name?
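For reference, a rough sketch of the kind of tcpout stanza splunkclouduf.spl typically delivers; the stack name, group name, and host names here are hypothetical, so check the outputs.conf inside your own credentials app:

# Hypothetical example only - your splunkclouduf.spl contains the real values
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
# Splunk Cloud targets are published as DNS names; the underlying IPs can change,
# so firewall rules are generally safer keyed on the hostnames or published IP ranges
server = inputs1.yourstack.splunkcloud.com:9997, inputs2.yourstack.splunkcloud.com:9997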
Using the Splunk App for SOAR, I am creating events in SOAR from a dashboard in Splunk. I'm facing an issue where repeating the same form submission in the dashboard results in additional artifacts being added to the one existing event, rather than a new event being created for each submission. Events in Splunk are held for 30 days, so a time-sensitive request may have been submitted and run 30 days ago, for example, but if it's requested again within those 30 days it won't generate a new event and run the playbook. I could probably add a unique ID to the form submissions, which would result in a new container being made (as the artifact values wouldn't be identical), but I was wondering if there's an option in the app or in SOAR to always generate a new container? Thanks
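As a sketch of the workaround mentioned above (not a setting in the App for SOAR): appending a unique ID in the search that feeds the export makes each submission's artifact values distinct, which should force a new container. The field name submission_id is just an example:

... your dashboard search ...
| eval submission_id = md5(now() . "-" . random())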
Hi Team, what would be the best way to send the logs of the apps and add-ons installed on our on-prem HF and SH to the cloud environment?
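App and add-on logs on a HF/SH generally land in $SPLUNK_HOME/var/log/splunk and are indexed into _internal, which forwarders ship by default, so the usual approach is to point the HF/SH outputs at Splunk Cloud (for example via splunkclouduf.spl) and then verify arrival. A hedged sanity-check search on the cloud side, with the host name as a placeholder:

index=_internal host=<your_hf_or_sh> source=*var/log/splunk*
| stats count by source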
We have implemented dual ingestion for our syslog servers. We can see the logs in Splunk Cloud, but a few log files are missing there; on-prem has all the data, while on the cloud side we are missing some files that have a large number of events. Please help me understand how we can get the data from those log files into Splunk Cloud. [Screenshot: Splunk Cloud logs of the syslog server] [Screenshot: syslog server logs on-prem]
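To narrow down which files are affected, a comparison like the sketch below (run over the same time range on-prem and in Cloud, with the index name as a placeholder) shows per-source event counts on each side:

| tstats count where index=<your_syslog_index> by source
| sort - count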
Hello, does Splunk offer an option to save changes made in an app via Splunk Web directly to the default directory? By default, Splunk saves all changes made via the Splunk Web interface in the local directory. Is there a possibility to have the changes saved directly to the default directory instead?

Some more information about the background of the question: for my Splunk instances, config management is done using GitLab. All config files in the apps are pushed to the corresponding Splunk instances in the default directory. When I clone an app to my Dev Splunk instance and make changes, these are saved in the corresponding local directory. Before I can push the changes to my Prod Splunk instance via GitLab, I have to manually copy the changes from the local config files to the default config files. This step is quite tedious as soon as more than a single config file is involved.

Have any of you already had the same problem, and can you give me a tip as to whether this is technically possible in Splunk? Best regards, Lukas
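As far as I know there is no setting that makes Splunk Web write to default, so the promotion from local to default is usually scripted before committing. One possible sketch, assuming the community ksconf utility is available and that the app path and file names are placeholders:

# merge each local .conf into its default counterpart, reviewing the diff interactively
cd $SPLUNK_HOME/etc/apps/<your_app>
ksconf promote --interactive local/props.conf default/props.conf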
@livehybrid Okay, what role capabilities does one need to share a dashboard within an app, if the user's role already has write permissions within that app?
Dear Team, I am currently running Splunk Enterprise version 9.1.0.1 on a RHEL 7.9 system. I would like to clarify the following:
1. What is the supported version of Splunk Enterprise for RHEL 7.9?
2. Does Splunk Enterprise include Heavy Forwarders (HF) and Deployment Servers (DS) by default, or do these components need to be installed separately?
3. Given that I currently have Splunk 9.1.0.1 installed on RHEL 7.9, what would be the recommended version of Splunk Enterprise moving forward?
I appreciate your assistance and look forward to your response.
I have events like the following. The field jobName contains "(W6) Power Quality Read - MT - IR Meters Pascal", delimited with a comma. Splunk is representing the field jobName as containing only "(W6)", truncating the remainder of the value. I don't believe it is terminating because of the ") " in the value. Please advise if you have a suggestion.  04/08/2025 17:35:33 runID = 79004968, jobID=72212875, jobName=(W6) Power Quality Read - MT - IR Meters Pascal, jobType=Meter Read Job,status = Failure,started = Tue Apr 08 09:35:13 GMT 2025,finished = Tue Apr 08 10:48:29 GMT 2025,elapsed = 1h 13m 16s ,Process_Index_=0,Write_Index_=0,device_count=625997,imu_device_count=0,devices_in_nicnac=0,members_success=625879,members_failed=118,members_timed_out=0,members_retry_complete=518,devices_not_in_cache=0,nicnac_sent_callback=3144189,nicnac_complete_callback=625879,nicnac_failed_callback=0,nicnac_timeout_callback=518,unresolved_devices=791,process_batch=12555,process_1x1=0,name_resolver_elapsed=384249,process_elapsed_ms=1145247,jdbc_local_elapsed_ms=0,jdbc_net_elapsed_ms=1036711,load_device_elapsed_ms=18697
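This behaviour is likely the automatic key=value extraction stopping at the first space in an unquoted value, which is why jobName ends up as "(W6)". One hedged search-time workaround is an explicit rex that captures everything up to the next comma (the same pattern could later be made permanent as an EXTRACT in props.conf):

... | rex field=_raw "jobName=(?<jobName>[^,]+)"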
So the title is pretty self-explanatory. I have been approached and asked to trim logs. I had initially installed and tested Cribl, and fast forward to now, I am still doing a bit of testing. I am now also testing Datadog, a tool that we already have installed and are already paying for; that team approached me with a similar proposition and use cases.

I am having issues sending data to both destinations so that it appears in both; I have only been able to get one tool to work at any given time. I guess my question is: can a forwarder that forwards all of the data it takes in send all of that data to two different destinations to be parsed and passed back in?

Currently, this is what I have: 1 indexer, 1 forwarder, 1 search head. Some data is being monitored locally on the forwarder, which sends it over to the indexer. Any data the forwarder sends needs to be sent to, and modified by, Cribl and Datadog. This is my outputs.conf currently: [screenshot of outputs.conf]. When I have the configuration like this, through the forwarder I get: [screenshot of errors]. I guess it's too much to send.

For Datadog, we are sending data from the forwarder to the Datadog agent (installed on the indexer) over Splunk TCP, which then uses the HEC endpoint as the destination.

I am also trying to understand the indexing queues on the forwarder. Currently, I am sending data to the indexer, and it is searchable through the search head, but I do not see any indexing or anything else happening on the forwarder. How do I read and understand exactly what is happening, and where do I need to investigate to see what is happening to the data as I send it out? Any docs or assistance is greatly appreciated. Thank you
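On the narrow question of sending the same data to two destinations: a forwarder can clone its output by listing more than one target group in outputs.conf. A hedged sketch, with the group names, hosts, and ports as placeholders for your Cribl and Datadog-agent listeners:

# Sketch only - listing two groups under defaultGroup makes the forwarder
# clone every event to both destinations (which doubles outbound volume)
[tcpout]
defaultGroup = cribl_group, datadog_group

[tcpout:cribl_group]
server = cribl-host.example.com:9997

[tcpout:datadog_group]
server = indexer-with-dd-agent.example.com:9998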
Are we able to create a saved filter in Mission Control that can be shared across users, just like in Incident Review in ES?
How can you query an index to find out the data types of the fields and any attributes that describe each field? This is from a data governance perspective, for documentation. Apologies, I'm new to this. I know that you can query the index, the fields will be listed, and you can see some of the metadata-type info, but it's not exportable. I would like to query and export that info. Thanks.
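One hedged starting point is the fieldsummary command, which returns per-field metadata (counts, distinct values, and inferred numeric stats) as a normal result table that can be exported to CSV; the index name and time range are placeholders:

index=<your_index> earliest=-24h
| fieldsummary
| table field count distinct_count is_exact numeric_count min max values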
Good day. I work in a heavily regulated critical infrastructure environment. Our compliance change management requires us to consider baseline changes in a server system. Do Content Updates alter the baseline for the Splunk server onsite? We do not use cloud. Are there hash values that are checked when pushing the updates? We need to have the updates but I can't just open a whole change management process each time this has to be updated. I need assurance that critical infrastructure is considered in ES content updates. Thanks in advance. 
We have a use case where some JSON being ingested into Splunk contains a list of values like this: "message_set": [ { "type": 9 }, { "type": 22 }, { "type": 15 }, ... ], That list has an arbitrary length, so it could contain anywhere from one up to around 30 "type" values. Splunk is parsing the JSON just fine, so these fields can be referenced as "message_info.message_set{}.type" in searches. I'd like to set up an inputlookup that maps these numerical values to more descriptive text. Is there a way to apply an inputlookup across an entire list of arbitrary size like this, or would I need to explicitly add an inputlookup definition for each individual index in the list? I'd ultimately like to add these as LOOKUP settings in the sourcetype for this data so that they're automatically applied for all searches.
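For what it's worth, the lookup command handles multivalue input fields, matching each value independently and returning a multivalue output, so a single definition should cover the whole list. A sketch, assuming a lookup definition named message_type_lookup with columns type and description:

... | lookup message_type_lookup type AS "message_info.message_set{}.type" OUTPUT description AS message_type_description

The same definition could then be wired up as an automatic lookup (a LOOKUP- setting in props.conf for the sourcetype) so it applies to every search, though the exact stanza depends on your lookup definition name.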
After completing the upgrade from Splunk Enterprise version 9.3.3 to v9.4, the KVstore would no longer start. Splunk had yet to do the KVstore upgrade to v7, because the KVstore could not start; we were already on WiredTiger 4.2. The problem we had was that our custom certificates did not have the proper extended usages set. When we signed the certificates with extendedKeyUsage = serverAuth, clientAuth and restarted Splunk, the KVstore started, upgraded automatically, and is running. It even works on search head clusters.

Note: the Splunk documentation says that custom certificates are not working, but we've made it work. Here is the particular doc: https://docs.splunk.com/Documentation/Splunk/9.4.1/Admin/MigrateKVstore#Check_your_deployment I am in the process of creating a support case with them. Yay!

Here is how I figured out the issue. Let's start the troubleshooting:

index=_internal log_level IN (warn, error) | chart count by component useother=false

I saw a lot of errors in the components 'MongoClient' and 'KVStorageProvider'. Searching those components:

index=_internal log_level IN (warn, error) component IN (KVStorageProvider, MongoClient)

04-08-2025 14:55:03.784 +0200 ERROR KVStorageProvider [37886 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on '127.0.0.1:8191']
04-08-2025 14:55:04.370 +0200 WARN MongoClient [54380 KVStoreUpgradeStartupThread] - Disabling TLS hostname validation for localhost

Not very useful log messages. However, we can search mongod.log as well:

index=_internal source="/opt/splunk/var/log/splunk/mongod.log"

On my search head cluster peers, there was a very specific error in the field attr.error.errmsg (this will not show up on other Splunk servers, but as you will see, this is the issue):

SSL peer certificate validation failed: unsupported certificate purpose

In this particular environment, we use custom certificates. To check which usages are allowed by my certificates, I ran the following command:

openssl x509 -in <path of my certificate> -noout -purpose

Notice that "SSL server" is Yes, whereas "SSL client" is No, meaning this certificate cannot be used for client authentication. GOTCHA!!! So you need to create a new signing request with an extendedKeyUsage of:

extendedKeyUsage = serverAuth, clientAuth

However, it is up to the signer to actually respect this request, so I would double-check after the CSR has been signed that it has the correct extended purpose. After pushing the new certificate to the server and restarting Splunk, the KVstore automatically upgraded and started after ~5 minutes. I verified using this command:

/opt/splunk/bin/splunk show kvstore-status --verbose

Notice the serverVersion and uptime. Good luck with the goddamn certificates. That was the solution for us.
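For anyone hitting the same thing, a hedged sketch of how the extension can be requested and then verified with openssl; the file names and subject are placeholders and your CA workflow may differ:

# ext.cnf (placeholder name) should contain the single line:
#   extendedKeyUsage = serverAuth, clientAuth

# create a CSR with the existing key, then sign it including the extension file
openssl req -new -key server.key -out server.csr -subj "/CN=your-splunk-host"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -out server.pem -days 365 -extfile ext.cnf

# confirm both purposes before deploying; expect "SSL client : Yes"
openssl x509 -in server.pem -noout -purpose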
Hi, we're setting up a Splunk Enterprise instance in an air-gapped environment. In addition to this, the server is situated behind a diode, making all traffic one-way. I've gotten the Splunk Universal Forwarders to send logs over HTTP to the HEC, but the diode doesn't support chunked HTTP encoding, and it isn't possible to turn off HTTP 1.1 support in the diode. On the server, there's the option "forceHttp10", but since the client and server don't negotiate the HTTP version, it has no effect. Is there an option in the UF to turn off HTTP 1.1 or chunking for httpout? TIA Johan
Hello, after completing all the installation steps and the integration with the key on the AlienVault OTX side in the forwarder's Splunk interface, I see that the index=otx query returns no results. I could not find any errors. What could be the reasons for the OTX index being empty? Can you help me with this?
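A hedged first check is whether the add-on's input is running and logging anything to _internal; the search terms below are assumptions, since the exact component and log names depend on the OTX add-on version:

index=_internal source=*splunkd.log (otx OR alienvault) log_level IN (WARN, ERROR)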
This is the search:

| rest /services/server/status/partitions-space splunk_server=*
| eval free = if(isnotnull(available), available, free)
| eval usage_TB = round((capacity - free) / 1024 / 1024, 2)
| eval free = round(free / 1024 / 1024, 2)
| eval capacity_TB = round(capacity / 1024 / 1024, 2)
| eval pct_usage = round(usage_TB / capacity_TB * 100, 2)
| table splunk_server, usage_TB, capacity_TB, free, pct_usage

It gives the disk usage of the Splunk servers. Can this be implemented using the _introspection index as well?
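Roughly the same numbers are recorded in _introspection by the disk-objects input. A sketch along the lines of what the Monitoring Console uses; the sourcetype, component, and data.* field names (and their MB units) should be verified on your version:

index=_introspection sourcetype=splunk_disk_objects component=Partitions
| stats latest(data.capacity) AS capacity_MB latest(data.available) AS available_MB by host, data.mount_point
| eval usage_TB = round((capacity_MB - available_MB) / 1024 / 1024, 2)
| eval capacity_TB = round(capacity_MB / 1024 / 1024, 2)
| eval free_TB = round(available_MB / 1024 / 1024, 2)
| table host, data.mount_point, usage_TB, capacity_TB, free_TB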
Hello guys, I have an existing deployment server and I'm reviewing the server's average network bandwidth usage. That would help me before migrating the server to a new box. Any thoughts? Thanks!
Hello guys, how do I add cryptography or another Python library to Splunk's own Python environment for a scripted input on a HF? The preferred solution is to put it beside my app, in the etc/apps/myapp/bin/ folder. Thanks for your help!
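A common pattern is to vendor the library into the app (for example with pip install --target) and prepend that directory to sys.path at the top of the scripted input. A sketch, assuming a lib/ folder next to bin/ (the script name and paths are hypothetical); note that cryptography ships compiled extensions, so the vendored copy must match Splunk's Python version and the HF's platform:

# my_scripted_input.py (hypothetical name) - top of the script
import os
import sys

# etc/apps/myapp/lib is an assumed location, populated offline with e.g.:
#   pip install --target etc/apps/myapp/lib cryptography
APP_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))  # .../etc/apps/myapp
sys.path.insert(0, os.path.join(APP_DIR, "lib"))

from cryptography.fernet import Fernet  # now resolved from the bundled lib/ directory

key = Fernet.generate_key()
print(Fernet(key).encrypt(b"hello"))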
I have a query where I'm generating a table with the columns os_type, os_version and status. I need to exclude anything below version 12 for Solaris and SUSE. I'm using the commands below and they work, but this is not an efficient way:

| search state="Installed"
| search NOT (os_type="solaris" AND os_version < 12)
| search NOT (os_type="*suse*" AND os_version < 12)

I was trying to use the command below:

| search state="Installed" NOT ((os_type="solaris" AND os_version < 12) OR (os_type="*suse*" AND os_version < 12))

and it's not working. Any suggestions?
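One thing that can break the combined form is os_version being compared as a string. A hedged alternative is to push the exclusion into a where clause with an explicit numeric conversion, using like() in place of the wildcard match on os_type (this assumes os_version starts with a parseable number):

| search state="Installed"
| where NOT ((os_type="solaris" OR like(os_type, "%suse%")) AND tonumber(os_version) < 12)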