
All Posts

Dear all, I hope you are doing well. I would like to request your assistance regarding an issue we've encountered after upgrading Splunk Enterprise from version 9.1.5 to 9.4.0. Since the upgrade, the Forwarder Management (Deployment Server) functionality is no longer working as expected. Despite multiple troubleshooting attempts, the issue persists. I have attached a screenshot showing the specific error encountered. I would greatly appreciate your guidance or recommendations to help resolve this matter. Please let me know if any additional logs or configuration details are needed. Thank you in advance for your support.
Hi @tech_g706
Do you have custom SSL certs on your server? Please can you confirm the output of the following, which might help us dig down. Thanks
$SPLUNK_HOME/bin/splunk cmd btool server list --debug kvstore
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @bigchungusfan55
Have you created the actual collections.conf collection stanza as well as creating the lookup definition? It sounds like either the name in the definition of the lookup (which is where you match the name you use after outputlookup/inputlookup/lookup) is incorrect, or the collection itself does not exist. Please can you review this and let us know?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
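For reference, the lookup definition lives in transforms.conf and must point at the collection by name. A minimal sketch, with placeholder stanza, collection, and field names:
[my_kvstore_lookup]
external_type = kvstore
collection = my_collection
fields_list = _key, field_1, field_2, field_3, field_4
The name you use with outputlookup/inputlookup/lookup is the transforms.conf stanza name (my_kvstore_lookup here), not the collection name.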
I haven't found a fix, but this is how I've been working around it: in the detection search, make sure to call addinfo. Then you can still use info_min/max_time to filter; you just have to do the filtering yourself. Examples:
index=StuffYouWant starttimeu=$info_min_time$ endtimeu=$info_max_time$ | ...
| from datamodel:"Authentication"."Failed_Authentication" | search _time>$info_min_time$ _time<$info_max_time$ ...
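An alternative sketch for the data-model variant that filters on the fields addinfo adds, rather than on tokens (assuming this runs inside the detection search itself):
| from datamodel:"Authentication"."Failed_Authentication"
| addinfo
| where _time>=info_min_time AND _time<=info_max_time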
Hi @dbloms  What env variables and/or configs are you passing through to this container?  Thanks Will
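For reference, a minimal invocation of this image typically needs at least the license acceptance and an admin password set via environment variables. A sketch (the password value is a placeholder):
docker run -d --name splunk \
  -p 8000:8000 -p 8089:8089 \
  -e SPLUNK_START_ARGS=--accept-license \
  -e SPLUNK_PASSWORD=<admin_password> \
  registry.hub.docker.com/splunk/splunk:latest
If SPLUNK_PASSWORD is missing or does not meet the password policy, the embedded Ansible tasks can fail against the management port with a 401 like the one you posted.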
Hi @pc1
On your host with the inputs configured, do you see anything in $SPLUNK_HOME/var/log/splunk/splunkd.log relating to this input not running? Or is there a file in $SPLUNK_HOME/var/log/splunk/ relating to the app? What does it output when the modular input tries to run?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
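If it's easier from the search bar, here is a quick sketch for checking the internal logs (the app/input name is a placeholder; substitute your own):
index=_internal source=*splunkd.log* (log_level=ERROR OR log_level=WARN) "<app_or_input_name>"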
If this is an accurate representation of your data, I agree with @PickleRick: it is bad. You have 3 open braces and 2 close braces. Go back to your developers and ask them to redevelop the application producing these logs so that they are in a more reasonable format to process. If this is not an accurate representation of your data, please provide something which accurately represents the data you are dealing with, so we have a chance at suggesting something which might help you.
Ok. This is bad. This is ugly. If all your events look like this, you have a completely unnecessary header which just wastes space (and your license), and then you have an escaped payload which you have to unescape to be able to do anything reasonable with. Get rid of that header, ingest your messages as well-formed JSON, and your life will be much, much easier.

In this form... it's hard to do anything about extracting fields in the first place, since it's "kinda structured" data, so you can't just handle it with regexes. You could try to unescape it by simple substitution, but be aware that depending on your data you might hit some unexpected strings which will not unescape properly. Having unescaped JSON, you can parse it with spath, but it will definitely not be a very fast solution.
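A rough sketch of that substitution approach, assuming the payload sits after a MESSAGE_PAYLOAD key as in the sample (the rex pattern and field name are assumptions; adjust to your actual data):
<your_search>
| rex field=_raw "\"MESSAGE_PAYLOAD\": \"(?<payload>.+)"
| eval payload=replace(payload, "\\\\\"", "\"")
| spath input=payload
The replace() swaps each \" for a plain quote so spath can parse the result; as noted above, it will misbehave if the payload contains other escaped sequences.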
If you want to modify the displayed time so that whenever you're searching for the event you're shown the current time, you have to do it at search time:
<your_search> | eval _time=now()
The question is why you would do that. Time is one of the main and most important pieces of metadata about the event. And it has nothing to do with DATETIME_CONFIG; that setting only works during event ingestion. It modifies what timestamp will be assigned to the event. But each event gets its own timestamp when it's indexed, and you can't modify the indexed timestamp. You can only "cheat" during searching by overwriting the value, as I've shown above.
Did you put <collection> in a collections.conf file, distribute it to all SHs, and restart Splunk?  Make sure the collections.conf file defines each field you want to use.
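For reference, a minimal collections.conf sketch with placeholder names (one field.<name> line per field you want to use):
[my_collection]
field.field_1 = string
field.field_2 = string
field.field_3 = string
field.field_4 = string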
"MESSAGE_PAYLOAD": "{\"applicationIdentifier\": \"a7654718-435f-4765-a324-d2b6d682b964\", \"timestamp\": \"2025-07-22 13:24:29 001\", \"information\": {\"someDetails\": [{\"sourceName\": \"NONE\"}]}
I am using the Cisco Security Cloud integration to try to import my Duo logs into Splunk Enterprise (on-prem). Following a plethora of directions, including the Duo Splunk Connector guide, I still cannot get it to work. No data comes through and it stays in a "Not Connected" status.
So far, I have verified that:
- the Admin API token has the correct permissions
- the integration is configured with the correct Admin API info (secret key, integration key, API hostname, etc.)
- I am using the newest version of the app: Cisco Security Cloud
Does anyone have any tips for troubleshooting this issue? I cannot seem to find any logs or anything that would give me a more detailed error than "Not Connected", when I am pretty sure it should be working.
That worked. Thank you so much. For other people who need help with this situation, a summary:
My environment:
- a standalone Splunk Enterprise instance
- an on-prem Exchange Server 2019 in the mailbox role
- a universal forwarder installed on the Exchange server
Actions to get Exchange logs:
On the Splunk Enterprise instance:
- deploy the Splunk Add-on for Microsoft Exchange indexes (to easily manage indexes)
- deploy the TA-Exchange-Mailbox add-on at /opt/splunk/etc/apps/TA-Exchange-Mailbox
- restart the Splunk service
On the Exchange server:
- deploy the TA-Exchange-Mailbox add-on at C:\Program Files\SplunkUniversalForwarder\etc\apps
- restart the forwarder
1. I'm assuming you are aware of field-name case sensitivity and your field isn't by any chance named From, from, or FrOm.
2. Is your search initiated via the API running in the same user/app context as the search spawned from the web UI? It smells like some context mismatch resulting in fields being extracted wrongly or not at all.
Hello, I start Splunk 9.4.3 as a Docker container from the image registry.hub.docker.com/splunk/splunk:latest. However, it terminates after approx. 60 seconds with the message:
TASK [splunk_standalone : Get existing HEC token] ******************************
fatal: [localhost]: FAILED! => {
    "changed": false
}
MSG:
GET/services/data/inputs/http/splunk_hec_token?output_mode=jsonadmin********8089NoneNoneNone[200, 404];; AND excep_str: URL: https://127.0.0.1:8089/services/data/inputs/http/splunk_hec_token?output_mode=json; data: None, exception: API call for https://127.0.0.1:8089/services/data/inputs/http/splunk_hec_token?output_mode=json and data as None failed with status code 401: {"messages":[{"type": "ERROR", "text": "Unauthorised"}]}, failed with status code 401: {"messages":[{"type": "ERROR", "text": "Unauthorised"}]}
PLAY RECAP *********************************************************************
localhost : ok=69 changed=3 unreachable=0 failed=1 skipped=69 rescued=0 ignored=0
If I start the container with "sleep infinity" and then exec into the container, I can start Splunk with "splunk start" and it works perfectly. Can anyone tell me what the problem is?
1. Yes. If the UI cannot delete a KO then it must be removed by other means, including editing the .conf file. Best practice is to update the app that defines the KO and then re-install the app.
2. Yes, if disabling is available then that is a safe option.
3. Use btool. It will apply proper config file precedence and show where each setting came from:
splunk btool <config file base name> list --debug
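For example, to see every saved search definition along with the file each setting comes from (savedsearches is just an example base name):
splunk btool savedsearches list --debug
Settings whose paths point into an app's default directory are the ones that cannot be deleted from the UI.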
I am having issues trying to outputlookup to a new empty KV Store lookup table I made. When I try to run the following search, I get this error:
Error in 'outputlookup' command: Lookup failed because collection '<collection>' in app 'SplunkEnterpriseSecuritySuite' does not exist, or user '<username>' does not have read access.
| makeresults
| eval <field_1>="test"
| eval <field_2>="test"
| eval <field_3>="test"
| eval <field_4>="test"
| fields - _time
| outputlookup <collection>
I redacted the actual data I am using, but it is formatted the same way as above. My KV Store file has global sharing and everyone can read/write, for testing purposes. What is wrong here and what can I do to fix this?
Hi @sabari80
Can you please verify your timezone in user preferences? Alerts run based on your timezone preference. If it is not set to EST, kindly update it, then verify under Searches, Reports, and Alerts.
Here are some internal logs:
2025-07-22T17:37:22.629Z I NETWORK [conn1078] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: self signed certificate in certificate chain. Ending connection from 127.0.0.1:43286 (connection id: 1078)
2025-07-22T17:37:22.629Z E NETWORK [conn1078] SSL peer certificate validation failed: self signed certificate in certificate chain
2025-07-22T17:37:22.125Z I NETWORK [conn1077] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: self signed certificate in certificate chain. Ending connection from 127.0.0.1:43272 (connection id: 1077)
2025-07-22T17:37:22.125Z E NETWORK [conn1077] SSL peer certificate validation failed: self signed certificate in certificate chain
Hello Splunk Community,
I'm reaching out for guidance on handling Knowledge Objects (KOs) that reside in the default directory of their respective apps and cannot be deleted from the Splunk UI.
We observed that some KOs throw the message: "This saved search failed to handle removal request", which, as documented, is likely because the KO is defined in both the local and default directories.
I have a couple of questions:
1. Can default-directory KOs be deleted manually via the filesystem or another method, if not possible through the UI?
2. Is there a safe alternative, such as disabling them, if deletion is not possible?
3. From a list of KOs I have, how can I programmatically identify which ones reside in the default directory?
Also, is there a recommended way to handle overlapping configurations between default and local directories, especially when clean-up or access revocation is needed? Any best practices, scripts, or documentation references would be greatly appreciated!