Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

We get these messages. For example, DB Connect doesn't work anymore... how could I solve this?

03-11-2025 12:09:07.792 +0100 WARN  MongoClient [1244 KVStoreUpgradeStartupThread] - Disabling TLS hostname validation for localhost
03-11-2025 12:09:07.843 +0100 INFO  KVStoreConfigurationProvider [1244 KVStoreUpgradeStartupThread] - KVSTore peer=127.0.0.1:8191 replication state=KV store captain. Health state=1
03-11-2025 12:09:07.843 +0100 INFO  MongoUpgradePreChecks [1244 KVStoreUpgradeStartupThread] - Supported Upgrade 3
03-11-2025 12:09:11.773 +0100 ERROR PersistentScript [2200 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.9.exe" "C:\Program Files\Splunk\Python-3.9\Lib\site-packages\splunk\persistconn\appserver.py"}:   File "C:\Program Files\Splunk\Python-3.9\lib\logging\handlers.py", line 115, in rotate
03-11-2025 12:09:11.773 +0100 ERROR PersistentScript [2200 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.9.exe" "C:\Program Files\Splunk\Python-3.9\Lib\site-packages\splunk\persistconn\appserver.py"}:     os.rename(source, dest)
Hello everyone, I have set up my Splunk server (with receiving port 9997 enabled) and a Splunk forwarder to monitor my UF logs, but I am getting the output below when I run ./splunk list forward-server:

Active forwards:
    None
Configured but inactive forwards:
    52.66.100.58:9997

These are the steps I have done on my UF:

./splunk add forward-server 52.66.100.58:9997

outputs.conf:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
disabled = false
server = 52.66.100.58:9997

[tcpout-server://52.66.100.58:9997]

Please suggest what I am missing here. Thanks in advance.
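For reference, a minimal sketch of how receiving is typically enabled on the indexer side, assuming default paths and the standard port 9997 (this mirrors the setup described above, it is not taken from the poster's configuration):

# On the receiving Splunk server
./splunk enable listen 9997

# Or equivalently in $SPLUNK_HOME/etc/system/local/inputs.conf
[splunktcp://9997]
disabled = 0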
Hi @Chakri

Does the following work for you? I haven't got Splunk in front of me at the moment to test, but I will generate some test data to check shortly.

| search hostname=AB100* hostname=*TILL* hostname!=*TILL100 hostname!=*TILL101 hostname!=*TILL102 hostname!=*TILL150 hostname!=*TILL151

This allows hostname=AB100* and then removes those ending with TILL100, TILL101, TILL102, TILL150 and TILL151.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
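For illustration, a quick way to sanity-check the filter with synthetic data (the hostname values below are made up for the example, not taken from the real lookup):

| makeresults count=5
| streamstats count AS n
| eval hostname=case(n=1,"AB1001234TILL1", n=2,"AB1001234TILL100", n=3,"AB1001234TILL101", n=4,"AB1005678TILL2", n=5,"AB1005678TILL151")
| search hostname=AB100* hostname=*TILL* hostname!=*TILL100 hostname!=*TILL101 hostname!=*TILL102 hostname!=*TILL150 hostname!=*TILL151
| table hostname

If the filter behaves as intended, only AB1001234TILL1 and AB1005678TILL2 should remain.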
Hi @gpradeepkumarre

Unfortunately there isn't an endpoint for reloading the SSL configuration on a Splunk2Splunk (S2S) input port, although there seems to be an endpoint for everything else! There is one for a client *sending* to an SSL S2S port, but not for the receiving port. In my experience, the best approach is to restart Splunk on the server after updating the SSL configuration on the input port.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Below is my search:

| inputlookup uf_ssl_kv_lookup
| search hostname=AB100*TILL* hostname!=AB100*TILL100 hostname!=AB100*TILL101 hostname!=AB100*TILL102 hostname!=AB100*TILL150 hostname!=AB100*TILL151

When I run the above search I see the warning below. How can I avoid it?

The term 'hostname!=AB100*TILL100' contains a wildcard in the middle of a word or string. This might cause inconsistent results if the characters that the wildcard represents include punctuation

There are hundreds of stores and thousands of tills. How should I modify my search? Note: I can't change the lookup table.

Example hostname: AB1001234TILL1, where
WE -- stands for type
100 -- country code
1234 -- store number
TILL1 -- till number
Hi @phamanh1652

Splunk Cloud has the same SMTP authentication limitations as Splunk Enterprise, so moving to Splunk Cloud would not solve this particular authentication challenge. In fact, it currently isn't possible to configure your own SMTP server in Splunk Cloud - it cannot be changed.

Regarding the app password / token - unfortunately this is a change by Microsoft which is a non-standard SMTP implementation, and Splunk does not currently support this approach.

There are a couple of options here. Post-September you may need to use a customised alert action that sends emails for you via the Office 365 API; however, this will only work for alerts - it wouldn't work for things like automatic PDF emailing. Another option is to use an external SMTP or relay service such as SMTP2Go.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
It can be done (half of ES works this way) but it's ugly. An input is what should work as... well, an input. Not as a vessel to run something that does something completely different. So you have two options (apart from ingesting data into an index): run a completely external tool - for example with cron - which will fiddle with Splunk via the API, or indeed run a modular input. Neither solution is very pretty.
Hi @MrLR_02,
good for you, see next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors
Hi Giuseppe, yep, I currently empty the index for the events using a very short retention time. I think it will stay that way. Other solutions like the KV Store don't really make much sense. Thanks for the nice exchange. Bye.
Hi @Praz_123,
good for you, see next time!
Let us know if we can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors
Hi @MrLR_02,

if your problem is disk space, you cannot use the delete command, because it only deletes events logically, not physically. You would have to use the clean command, which removes all events from the index and also deletes them physically.

You can also solve the space issue by applying a very short retention policy on that index (e.g. 12 hours or less); in this way the buckets are physically deleted.

Ciao.
Giuseppe
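A minimal sketch of both approaches, with a placeholder index name of myindex (note that clean requires splunkd to be stopped and irreversibly wipes the whole index):

# Option 1: wipe the index physically via the CLI
./splunk stop
./splunk clean eventdata -index myindex
./splunk start

# Option 2: very short retention in indexes.conf (12 hours), so buckets are frozen and removed quickly
[myindex]
frozenTimePeriodInSecs = 43200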
Wait a second.

1. Transforms are called in a specific order:
a) Transform classes are called in alphabetical order
b) Transforms within a single class are called left to right.

2. All transforms are called (that's the important part!)

So if you want to keep just part of your data and filter the remaining events out, you have to first redirect all events to nullQueue and then match the part of your events you want to keep and send them to indexQueue. So you should do it like this - the first transform should send all events matching your "ID":\s*"?32605 to nullQueue. Then you should have a transform sending the "successful" events to indexQueue (see the sketch below).

BTW, you don't need to use non-capturing groups with your REGEX in a transform.
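For illustration, a minimal props.conf/transforms.conf sketch of that ordering; the sourcetype name and the "successful" regex are placeholders, not taken from your config:

# props.conf - transforms in one class run left to right
[my_sourcetype]
TRANSFORMS-routing = drop_id_32605, keep_successful

# transforms.conf
[drop_id_32605]
REGEX = "ID":\s*"?32605
DEST_KEY = queue
FORMAT = nullQueue

[keep_successful]
REGEX = successful
DEST_KEY = queue
FORMAT = indexQueue

Because the second transform runs after the first, any event matching both regexes is routed back to indexQueue, while everything else matching the drop regex stays in nullQueue.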
The issue is fixed; the blacklist below excludes the events when splunk*.exe is found for event code 4688.

blacklist3 = EventCode="4688" Message=".*(splunk-.*\.exe|splunk\.exe|splunkd\.exe).*"
Depending on your particular setup there might or might not be a way to upgrade the forwarder without data loss. It depends on what inputs you have there and what data you're receiving with them.

For example, if you have a scripted or modular input which must periodically query some API endpoint for values, then while your HF is down those API calls won't get spawned and you won't get data for those particular scheduled points in time. And short of having a relatively complicated "quasi-HA" setup on HFs, there is no way around it. If you're receiving UDP syslog on that HF, there is also not much you can do unless you can do some network-level reconfiguration to pass that data to another instance.

There might however be some inputs (or sources generating data for those inputs) which can deal with a situation where they do not run continuously - like buffering data on the sending side or, in the case of a pull-mode input, reading an accumulated backlog.

So there is no general answer. It all depends on your particular setup and data flow.
Even a non-enforcement license blocks when it's past its expiry date. Been there, done that, on a multi-TB non-enforcement license. Someone missed the date and didn't upload the updated license in time, and we had to call Splunk for the unlock license.
When the license expires (as opposed to violations from exceeding ingestion limits), it locks the searching functionality. As far as I know, there is no automatic way to unlock it. You need to contact whoever you're buying your Splunk licenses from and ask them for an "unlock license" for you.
This is an interesting case.

Firstly, there is an issue of general syntax. You don't use lookup as a generating command, so your subsearch is wrong. Theoretically, you could do something like

| search [ | inputlookup ...]

to limit your results to the ones matching the contents of your lookup file. But your subsearch contains way too many rows (by default a subsearch is limited to 10k results).

But (again, in the general case; not your particular one, since yours has additional limitations and we'll get to that shortly) it would be way more efficient if you could limit your initial search, not add that search command further down the pipeline. As your search is built, you have to read all events with the /api/update path and then "manually" extract the field from them. That requires reading and processing all /api/update events even if only a small subset of them would finally match your search terms. So in the general case, it would be better to do something like

index=testing_car hostname=*prod* "/api/update" [ | inputlookup Sessions.csv | fields SID | rename SID as search ]

Assuming your SIDs are properly tokenized, this would work much faster than putting the search (or lookup, if using the lookup method) further down the pipeline. But this is impossible in your case. Firstly, your lookup is way too big for the default limits. Secondly, even if you went and raised those limits, spawning a search with 200k search terms... it doesn't seem like a good idea.

So now we come to the second part of the puzzle. The lookup-based solution presented by @kiran_panchavat - search for all events, use lookup to "flag" matching events and filter out those not flagged - is generally OK. But there is one pretty big caveat, and it comes back to the size of your lookup. A simple csv-based lookup is fine for small lookups. It compares values using a linear search through the file, so it can be reasonably effective if the lookup file is small (and in the best scenario the most often used values are at the top of the file). I suppose you know where this is going... If you have 200k rows in your lookup, assuming uniform distribution, you'd get 100k comparisons per match on average for those values matching your lookup. For those not matching your lookup, Splunk would need to do 200k comparisons just to decide the value isn't there! So it quickly gets very, very inefficient, especially if the lookup file is big and the hit ratio low.

Therefore for bigger lookups you should not use csv as the backend but instead create a kvstore collection (a minimal sketch follows at the end of this post). But here we hit another issue - the kvstore runs on the SH. You don't run a kvstore on indexers. If you have an all-in-one installation that might not be a problem, but if you're trying to run this in a distributed environment, your kvstore contents would be replicated as a csv lookup to the indexers for indexer-tier search operations. That creates a conundrum - generally you want to do as much as you can at the indexer level with streaming commands, since this parallelizes your work and uses your hardware more efficiently, but in this case you'd fall back to the csv-based lookup performance, which with this size of lookup would be awful. You can get around it by using the local=t option to the lookup command, as @kiran_panchavat did, but... this forces the search pipeline to move, at the lookup command, from the indexers to the search head. Which means all intermediate non-filtered values would need to be streamed to the calling SH to be processed and filtered, so you lose the advantage of multiple indexers doing the work in parallel.

So however you turn, there is always some obstacle. Either it's slow because it's a huge lookup, or it's slow because you have heaps of data to dig through. If this is a one-off job, it just means you have to wait longer for your results, but if it's something you will want to repeat regularly you might want to rethink the original issue. Maybe those SIDs follow some pattern and you could partially pre-filter your data? Or maybe you could split that csv into smaller parts? Where does this list come from in the first place? Do you create it from another search in Splunk? If so, maybe you can add the original search logic to this search. Or maybe instead of creating a lookup you could index your SIDs into a temporary index and use a search combining results from that index with the index you're trying to search. There might be alternative approaches depending on your underlying problem.
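To illustrate the kvstore-backed lookup mentioned above, here is a minimal sketch - the collection name, lookup name and output field are assumptions, and the collection still needs to be populated (e.g. via outputlookup) before use:

# collections.conf (on the search head)
[session_ids]
field.SID = string

# transforms.conf - expose the collection as a lookup
[sessions_kv]
external_type = kvstore
collection = session_ids
fields_list = _key, SID

The flagging search would then look roughly like this (local=t keeps the lookup on the SH, with the pipeline trade-off described above):

index=testing_car hostname=*prod* "/api/update"
| lookup local=t sessions_kv SID OUTPUTNEW SID AS sid_match
| where isnotnull(sid_match)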
Hi @JohnGregg

Does the endpoint at /controller/analyticsdynamicservice/application_id give you what you need? Check out https://docs.appdynamics.com/appd/24.x/latest/en/extend-splunk-appdynamics/splunk-appdynamics-apis/configuration-import-and-export-api#:~:text=true%2C%22errors%22%3A%5B%5D%2C%22warnings%22%3A%5B%5D%7D-,Export%20Application%20Analytics%C2%A0Dynamic%20Service%20Configuration,-The%20Analytics%20Dynamic

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
@pacifiquen

Since your license expired 5 months ago, it's likely that Splunk entered a state where search functionality was disabled due to license violations or expiration enforcement. Even with a new license, prior violations (e.g., exceeding the daily indexing limit multiple times before the license expired) could still block search functionality until resolved.

In the Splunk Web UI, go to Settings > Licensing > Usage Report and review the last 30 days (or more if available) for violations.

For Splunk Enterprise (versions 8.1.0+), if you exceeded your license capacity 45+ times in a 60-day period with a stack volume <100 GB, search is disabled until violations clear or a reset license is applied.

If violations are still active (from before the new license), you may need to wait 30 days without violations (for free licenses) or request a reset license from Splunk Support (for Enterprise licenses). Contact Splunk Support via the Splunk Support Portal or call 866.GET.SPLUNK to request a reset license, then apply it via Settings > Licensing > Add License.

Confirm data ingestion. Why: if logs aren't appearing, the issue might not be the license but rather data not reaching the Search Head. Action: verify that data is being ingested and indexed, for example with

index=* earliest=-24h

https://www.splunk.com/en_us/resources/splunk-enterprise-license-enforcement-faq.html
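As a supplement, and assuming searches can be run at all (e.g. on the license manager) and that _internal retention still covers the period, daily license usage can also be reviewed with a search along these lines:

index=_internal source=*license_usage.log* type=RolloverSummary earliest=-30d@d
| eval GB=round(b/1024/1024/1024,2)
| timechart span=1d sum(GB) AS daily_GB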
Hi @mikefg

Please can you run the SPL below and check whether it returns an empty string?

| inputlookup ipapikey
| sort - savetime
| head 1
| table apikey

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will