All Posts
My KV stores were empty, but ideally one would try to search the KV store in order to verify that it works. Another way to verify is to check the Monitoring Console > Search > KV Store: Instance. If you can see panels, the KV store is working! However, if the page is just white, it is not working.
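To search a KV store directly, a minimal sketch (the lookup name below is a placeholder for a lookup definition backed by one of your KV store collections):

| inputlookup my_kvstore_lookup | stats count

If this returns without an error, mongod is up and serving queries; an error here points at the KV store (or the lookup definition) rather than the UI.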
@livehybrid Additionally, log samples:

2025-04-08 13:56:19,927 INFO pid=426325 tid=MainThread file=base_modinput.py:log_info:295 | Retrieving subscribed pulses since: 2025-04-08 10:54:39.948582
2025-04-08 13:56:19,927 INFO pid=426325 tid=MainThread file=setup_util.py:log_info:117 | Proxy is not enabled!
2025-04-08 13:56:20,146 INFO pid=426325 tid=MainThread file=base_modinput.py:log_info:295 | Completed polling. Logged 0 pulses and 0 indicators.
2025-04-08 13:58:00,005 INFO pid=426392 tid=MainThread file=setup_util.py:log_info:117 | Log level is not set, use default INFO
2025-04-08 13:58:00,006 INFO pid=426392 tid=MainThread file=splunk_rest_client.py:_request_handler:99 | Use HTTP connection pooling
2025-04-08 13:58:00,038 INFO pid=426392 tid=MainThread file=base_modinput.py:log_info:295 | Retrieving subscribed pulses since: 2025-04-08 10:56:19.897881
2025-04-08 13:58:00,039 INFO pid=426392 tid=MainThread file=setup_util.py:log_info:117 | Proxy is not enabled!
2025-04-08 13:58:00,268 INFO pid=426392 tid=MainThread file=base_modinput.py:log_info:295 | Completed polling. Logged 0 pulses and 0 indicators.
Had the very same issue. Are you using custom certificates? In that case, you might have had the same issue as me. I wrote about the solution here: https://community.splunk.com/t5/Deployment-Architecture/KVStore-does-not-start-when-running-Splunk-9-4-WITH-A-SOLUTION/m-p/743791#M29350 Note that Splunk themselves say that custom certificates are not supported, but we got it to work.
Hi @livehybrid The curl request returns results, but no data can be pulled into the OTX index. Additionally, in your experience, which server in the Splunk cluster would it be best to install the OTX application on? (e.g. master, deployer, forwarder, indexer, etc.)
After completing the upgrade from Splunk Enterprise 9.3.3 to 9.4, the KV store will no longer start. Splunk has yet to do the KV store upgrade to v7, because the KV store cannot start; we were already on WiredTiger 4.2. The problem we had was that our custom certificates did not have the proper extended usages set. When we signed the certificates with extendedKeyUsage = serverAuth, clientAuth and restarted Splunk, the KV store started, upgraded automatically, and is running. It even works on search head clusters. Note: the Splunk documentation says that custom certificates are not supported, but we've made it work. Here is the particular doc: https://docs.splunk.com/Documentation/Splunk/9.4.1/Admin/MigrateKVstore#Check_your_deployment I am in the process of creating a support case with them. Yay!

Here is how I figured out the issue. Let's start the troubleshooting:

index=_internal log_level IN (warn, error) | chart count by component useother=false

I saw a lot of errors in the components 'MongoClient' and 'KVStorageProvider'. Searching these components:

index=_internal log_level IN (warn, error) component IN (KVStorageProvider, MongoClient)

04-08-2025 14:55:03.784 +0200 ERROR KVStorageProvider [37886 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on '127.0.0.1:8191']
04-08-2025 14:55:04.370 +0200 WARN MongoClient [54380 KVStoreUpgradeStartupThread] - Disabling TLS hostname validation for localhost

Not very useful log messages. However, we can search mongod.log as well:

index=_internal source="/opt/splunk/var/log/splunk/mongod.log"

On my search head cluster peers there was a very specific error in the field attr.error.errmsg (this will not show up on other Splunk servers, but as you will see, this is the issue):

SSL peer certificate validation failed: unsupported certificate purpose

In this particular environment we use custom certificates, and to check which usages were allowed for my certificates, I ran the following command:

openssl x509 -in <path of my certificate> -noout -purpose

Notice that SSL server is Yes, whereas SSL client is No, meaning this certificate cannot be used for client authentication. Gotcha! So you need to create a new signing request with an extended key usage of:

extendedKeyUsage = serverAuth, clientAuth

However, it is up to the signer to actually respect this request, so I would double-check after the CSR has been signed that it has the correct extended purpose. After pushing the new certificate to the server and restarting Splunk, the KV store automatically upgraded and started after ~5 minutes. I verified using this command:

/opt/splunk/bin/splunk show kvstore-status --verbose

Notice the serverVersion and uptime. Good luck with the certificates. That was the solution for us.
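To make the re-signing step concrete, here is a minimal sketch of requesting both purposes in a CSR (the file names, CN, and config layout are illustrative; your CA workflow may differ):

# req.cnf - illustrative OpenSSL request config
[req]
distinguished_name = req_dn
req_extensions = v3_req
[req_dn]
[v3_req]
extendedKeyUsage = serverAuth, clientAuth

# Generate the CSR against your existing key, then have your CA sign it
openssl req -new -key server.key -out server.csr -subj "/CN=splunk.example.com" -config req.cnf

# After signing, confirm both purposes made it into the certificate
openssl x509 -in server.pem -noout -purpose

As the post notes, the signer can ignore requested extensions, so the final check on the signed certificate is the one that matters.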
Hi

Place the third-party Python library inside your app's lib directory, then add this path to sys.path at the start of your scripted input. Example:

import sys, os
# Make the app's lib directory importable before importing bundled packages
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'lib'))
import cryptography

Copy the cryptography package (and its dependencies) into $SPLUNK_HOME/etc/apps/myapp/lib/. Use pip on a compatible system:

pip install --target=$SPLUNK_HOME/etc/apps/myapp/lib cryptography

Ensure the Python version used to build the packages matches Splunk's embedded Python (e.g., Python 3.7 or 3.9) on your HF. You can use the bin directory instead of lib if you prefer, but I've always been advised to use the lib directory. It shouldn't make much difference, though. Check out this page for more info too: https://docs.splunk.com/Documentation/Splunk/9.4.1/Python3Migration/PythonDevelopment

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
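To sanity-check that the bundled copy imports cleanly under Splunk's embedded interpreter, a quick sketch (assumes the app is named myapp, as in the example above):

PYTHONPATH=$SPLUNK_HOME/etc/apps/myapp/lib $SPLUNK_HOME/bin/splunk cmd python3 -c "import cryptography; print(cryptography.__version__)"

If this fails with an import or ABI error, the package was likely built against a different Python version than the one Splunk ships.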
Hi @SN1 Just to check - are you referring to disk usage rather than memory (RAM) usage? If so, you can access this in the _introspection endpoint to get changes over time rather than just the current value using:

index="_introspection" sourcetype=splunk_disk_objects host=macdev | rename data.* as * | timechart latest(available) as available, latest(capacity) as capacity, latest(free) as free by mount_point

You can also use the _metrics index with mstats:

| mstats latest(spl.intr.disk_objects.Partitions.data.*) AS * WHERE index=_metrics sourcetype=splunk_intro_disk_objects component=Partitions by data.mount_point

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @tolgaakkapulu Are there any other events in the ta_otx_otx.log file? Having checked the Python code for the add-on, it doesn't look like there is much in terms of logging, so I wouldn't expect there to be much. Are you able to confirm that the X-OTX-API-KEY you entered is correct? Also, did you specify a backfill days value for the input? If not, then I think it will only report pulses since you set up the Splunk input.

Note - changing the backfill *after* creating the input might not take effect, because the checkpoint generated by the input uses the input stanza name; therefore you would need to create an input with a new name if you want to try this.

If you still have no joy then please try the following, updating <api_key> with your OTX API key:

curl -X GET "https://otx.alienvault.com/api/v1/pulses/subscribed?modified_since=1743940560" -H "X-OTX-API-KEY: <api_key>"

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
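The modified_since parameter is a Unix epoch timestamp; to test a different lookback window, you could generate one with GNU date (a sketch, assuming a Linux box):

date -u -d "7 days ago" +%s

Substitute the output into the curl command above.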
Hi, We're setting up a Splunk Enterprise instance in an air-gapped environment. In addition to this, the server is situated behind a diode, making all traffic one-way. I've gotten the Splunk Universal Forwarders to send logs over HTTP to the HEC, but the diode doesn't support chunked HTTP encoding, and it isn't possible to turn off HTTP 1.1 support in the diode. On the server there's the option "forceHttp10", but since the client and server don't negotiate the HTTP version, it has no effect. Is there an option in the UF to turn off HTTP 1.1 or chunking for httpout? TIA Johan
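For context, the UF side of this is configured in outputs.conf along these lines (a sketch; the URI and token are placeholders), which is where any such client-side HTTP option would have to live:

# outputs.conf on the Universal Forwarder
[httpout]
httpEventCollectorToken = <your-hec-token>
uri = https://hec.example.com:8088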
Hi @secure You could try the following, which I think should do what you need:

| where NOT (os_type="solaris" OR like(os_type,"%suse%")) OR os_version>=12

The like() function is used because of the wildcards, so in this where statement we are excluding solaris/*suse* unless the os_version is greater than or equal to 12 (which is functionally the same as excluding less than version 12).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @richgalloway thanks, how do you add the library to a custom app? In the bin folder with the .py file?
@tolgaakkapulu  Based on the log entry you've provided, it appears that the OTX technical add-on for Splunk successfully ran a polling operation, but didn't find any new data to ingest.
| search os_version>=12 OR NOT (os_type="solaris" OR os_type="suse")
Use addtotals, then eval to subtract the first column?
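A minimal sketch of that idea (field names are hypothetical, since the original table isn't shown):

| addtotals fieldname=Total
| eval remainder=Total-first_column

addtotals sums the numeric fields in each row into Total, and the eval then backs the first column out of it.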
@kiran_panchavat Thank you for your feedback. Yes, it was created. When the internal logs are examined, the following log entry is seen in the $SPLUNK_HOME/var/log/splunk/ta_otx_otx.log file. Is it unable to pull data, or could there be a different situation?

2025-04-08 15:08:38,918 INFO pid=433448 tid=MainThread file=base_modinput.py:log_info:295 | Completed polling. Logged 0 pulses and 0 indicators.
@SN1 The _introspection index is intended to collect information about your systems running Splunk and give you more data to help diagnose Splunk performance issues. There are some details about what data is collected at About Splunk Enterprise platform instrumentation - Splunk Documentation.

For example, if you want to search CPU and memory utilization per search execution, with relevant information like which user executed it and more:

index=_introspection host=* source=*/resource_usage.log* component=PerProcess data.process_type="search" | stats latest(data.pct_cpu) AS resource_usage_cpu latest(data.mem_used) AS resource_usage_mem by data.pid, _time, data.search_props.type, data.search_props.mode, data.search_props.role, data.search_props.user, data.search_props.app, data.search_props.sid

You may be able to find some useful information in What does platform instrumentation log? - Splunk Documentation or Introspection endpoint descriptions - Splunk Documentation.
Don't add to or touch the Python libraries that ship with Splunk as they will be replaced with each upgrade.  Put the required libraries (that Splunk doesn't provide) in your app.  This is the way.
@tolgaakkapulu Please verify whether the `OTX` index has been created on both the indexers and the heavy forwarder. If it hasn't been created, kindly proceed to create it. In some cases, data may be successfully fetched, but if the index doesn't exist, the events will be discarded. Create the index on the heavy forwarder and also on the indexer, if not already created. If you're using a single standalone Splunk instance, create the index only on that instance. To verify if the OTX add-on is functioning correctly, check the internal logs by running the following search on the search head:

index=_internal *otx*
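If you manage indexes through configuration files, a minimal indexes.conf stanza for this looks something like the sketch below (the paths are the conventional defaults; adjust to your storage layout):

# indexes.conf
[otx]
homePath = $SPLUNK_DB/otx/db
coldPath = $SPLUNK_DB/otx/colddb
thawedPath = $SPLUNK_DB/otx/thaweddb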
Hello, After completing all the installation steps and the integration with the key on the AlienVault OTX side in the forwarder's Splunk interface, I see that the index=otx query result is empty. I could not find any errors. What could be the reasons for the OTX index being empty? Can you help me with this?