All Posts


1. I'm assuming you are aware of field-name case sensitivity and that your field isn't by any chance named From, from, or FrOm.
2. Is your search initiated via the API running in the same user/app context as the search spawned from the web UI? It smells like some context mismatch resulting in fields being extracted wrongly or not at all. A sketch of what I mean by an explicit context is below.
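For illustration, a minimal sketch of running a search in an explicit user/app namespace via the REST API (host, credentials, app name, and the search itself are placeholders):

curl -k -u <user>:<password> "https://<host>:8089/servicesNS/<user>/<app>/search/jobs" \
    -d search="search index=main | head 5" \
    -d output_mode=json

The /servicesNS/<user>/<app>/ part of the path pins the job to that user and app, so the same knowledge objects (including field extractions) apply as when the search is spawned from the web UI in that app.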
Hello, I start Splunk 9.4.3 as a Docker container from the image registry.hub.docker.com/splunk/splunk:latest. However, it terminates after approx. 60 seconds with the message:

TASK [splunk_standalone : Get existing HEC token] ******************************
fatal: [localhost]: FAILED! => { "changed": false }
MSG: API call for URL https://127.0.0.1:8089/services/data/inputs/http/splunk_hec_token?output_mode=json (data: None, expected status [200, 404]) failed with status code 401: {"messages":[{"type": "ERROR", "text": "Unauthorised"}]}

PLAY RECAP *********************************************************************
localhost : ok=69 changed=3 unreachable=0 failed=1 skipped=69 rescued=0 ignored=0

If I start the container with "sleep infinity" and then exec into the container, I can start Splunk with "splunk start" and it works perfectly. Can anyone tell me what the problem is?
1. Yes. If the UI cannot delete a KO then it must be removed by other means, including editing the .conf file. Best practice is to update the app that defines the KO and then re-install the app.
2. Yes, if disabling is available then that is a safe option.
3. Use btool. It applies the proper config file precedence and shows where each setting came from (see the example below):

splunk btool <config file base name> list --debug
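For example, to find where a particular saved search is defined (the search name here is a placeholder), something like:

splunk btool savedsearches list --debug | grep -i "<saved search name>"

Each line of the --debug output is prefixed with the path of the .conf file that supplied the setting, so anything under an app's default/ directory is a KO you cannot delete from the UI.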
I am having issues trying to outputlookup to a new, empty KV Store lookup table I made. When I try to run the following search, I get this error:

Error in 'outputlookup' command: Lookup failed because collection '<collection>' in app 'SplunkEnterpriseSecuritySuite' does not exist, or user '<username>' does not have read access.

| makeresults
| eval <field_1>="test"
| eval <field_2>="test"
| eval <field_3>="test"
| eval <field_4>="test"
| fields - _time
| outputlookup <collection>

I redacted the actual data I am using, but it is formatted the same way as above. My KV Store lookup has global sharing and everyone can read/write, for testing purposes. What is wrong here and what can I do to fix this?
Hi @sabari80 
Can you please verify your timezone in user preferences? Alerts run based on your timezone preference. If the timezone is not EST, kindly update it and verify under Searches, Reports, and Alerts.
Here are some internal logs:

2025-07-22T17:37:22.629Z I NETWORK [conn1078] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: self signed certificate in certificate chain. Ending connection from 127.0.0.1:43286 (connection id: 1078)
2025-07-22T17:37:22.629Z E NETWORK [conn1078] SSL peer certificate validation failed: self signed certificate in certificate chain
2025-07-22T17:37:22.125Z I NETWORK [conn1077] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: self signed certificate in certificate chain. Ending connection from 127.0.0.1:43272 (connection id: 1077)
2025-07-22T17:37:22.125Z E NETWORK [conn1077] SSL peer certificate validation failed: self signed certificate in certificate chain
Hello Splunk Community, I'm reaching out for guidance on handling Knowledge Objects (KOs) that reside in the default directory of their respective apps and cannot be deleted from the Splunk UI.

We observed that some KOs throw the message "This saved search failed to handle removal request", which, as documented, is likely because the KO is defined in both the local and default directories.

I have a couple of questions:
1. Can default-directory KOs be deleted manually via the filesystem or another method, if not possible through the UI?
2. Is there a safe alternative, such as disabling them, if deletion is not possible?
3. From a list of KOs I have, how can I programmatically identify which ones reside in the default directory?

Also, is there a recommended way to handle overlapping configurations between default and local directories, especially when clean-up or access revocation is needed? Any best practices, scripts, or documentation references would be greatly appreciated!
Hi @Raja_Selvaraj 
DATETIME_CONFIG = CURRENT should work as expected. Can you please run the btool command below to check whether DATETIME_CONFIG is taking effect, or whether any config is overriding it?

splunk btool props list <sourcetype> --debug

The above command should list DATETIME_CONFIG. Sample format in props.conf:

[<sourcetype>]
DATETIME_CONFIG = CURRENT
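If the setting is in effect, each line of the --debug output is prefixed with the file it came from, roughly like this (paths are illustrative and will vary with your installation):

/opt/splunk/etc/apps/<app>/local/props.conf [<sourcetype>]
/opt/splunk/etc/apps/<app>/local/props.conf DATETIME_CONFIG = CURRENT

If a different file or value shows up for DATETIME_CONFIG, that is the config overriding yours.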
Hi, I upgraded Splunk Enterprise from 9.2.3 to 9.4.3, and the KV store status is "failed". It was migrated successfully to 7.0.14 on one server automatically, but on the second server, the migration did not start upon upgrade. Is there a solution to restore the KV store status and migrate to 7.0.14? It is a standalone server and not part of a clustered environment. Some servers also have KV store status "failed" on version 9.2.3, and I want to fix the status before starting to upgrade them to 9.4.3.

This member:
backupRestoreStatus : Ready
disabled : 0
featureCompatibilityVersion : An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: 127.0.0.1:8191]
guid : xzy
port : 8191
standalone : 1
status : failed
storageEngine : wiredTiger
versionUpgradeInProgress : 0
Hi Everyone, please help me with this ask: I need Splunk to show the respective events with the current date instead of the date when the file was placed on the host. For instance, a file was placed on the server on 17th July and the events show a date of 17th July; instead, I want the current date. If the current date is 22nd July, then the events' date should be 22nd July, and likewise. I have tried DATETIME_CONFIG = CURRENT and DATETIME_CONFIG = NONE in props.conf, but it doesn't work.
I am having a similar issue; however, in my case the field always has a suffix of sophos_event_input after the username. Example:

User
Joe-Smith, Adams sophos_event_input
Jane-Doe, Smith sophos_event_input

I would like to change the User field to:

User
Joe-Smith, Adams
Jane-Doe, Smith

Basically I want to get rid of the sophos_event_input suffix. How do I go about this?
I have a scheduled export report daily at 11 PM from my monitoring dashboard. We are in the EST time zone and my dashboard provides the data as expected, but the report PDF in the mail has data based on the GMT time zone, and it doesn't match my dashboard numbers.

For example:
Expected time frame: 00:00 and 23:00 on 7/22
Getting GMT time frame: 00:00 and 03:00 on 7/23

How do I fix this? Thanks in advance.
Hi @kennsche ,
the role limitations apply to all searches and dashboards. So you could create a role with the time window limitation, assign this role to some of your users, and allow only that role to use the dashboard.

Otherwise, the only solution is to create a list of possible time periods (e.g. 5m, 10m, 15m, 30m, 60m, 90m, 120m) and display it in a dropdown list, as in the sketch below. But this solution is applicable only to a dashboard, not to search.

Ciao.
Giuseppe
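As a rough illustration of the dropdown idea in classic Simple XML (Dashboard Studio defines its inputs in JSON, but the concept is the same; the token name and choice values here are just examples):

<input type="dropdown" token="win">
  <label>Time window</label>
  <choice value="-5m@m">Last 5 minutes</choice>
  <choice value="-30m@m">Last 30 minutes</choice>
  <choice value="-2h@h">Last 2 hours</choice>
  <default>-2h@h</default>
</input>

<search>
  <query>index=_internal | timechart count</query>
  <earliest>$win$</earliest>
  <latest>now</latest>
</search>

Every panel search then inherits a window no larger than the largest choice you offer.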
Hi @gcusello thanks for the suggestion! Since I have two tabs, would the role approach be granular enough to limit the search to one tab within the same dashboard? The other tab should not be limited. Regards Kenny
That is working perfectly, thank you.
Hi @dsgoody 
Firstly, force_local_processing is only needed if you're running a Universal Forwarder. If it's a Heavy Forwarder then you can safely remove this.

I think the main issue here is the stanza name. If you're referencing based on source then you need to use something like:

# props.conf
[source::UDP:<port>]

Alternatively you can apply the transforms based on the sourcetype (a full sketch follows below):

# props.conf
[juniper]

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
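Putting the sourcetype-based variant together with the transforms from the question (transform names unchanged; only the props.conf stanza differs from the original post):

# props.conf
[juniper]
TRANSFORMS-null = TenantToTrust,TrustToTenant

# transforms.conf
[TenantToTrust]
REGEX = source-zone-name="tenant".*destination-zone-name="trust"
DEST_KEY = queue
FORMAT = nullQueue

[TrustToTenant]
REGEX = source-zone-name="trust".*destination-zone-name="tenant"
DEST_KEY = queue
FORMAT = nullQueue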
Hi @kennsche ,
in [Settings > User interface > Time ranges] you can define the time ranges that a role sees in the default choices, but this doesn't prevent choosing a larger time period. So the most efficient way to really limit the time period in searches is to create a role dedicated to your users and then add a limit in [Settings > Roles > Click on role > Resources > Role search time window limit].

Ciao.
Giuseppe
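If you prefer to manage this in configuration rather than the UI, the same limit can be set in authorize.conf (the role name here is an example; the value is in seconds):

# authorize.conf
[role_dashboard_users]
srchTimeWin = 7200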
Hi all, I'm having some issues excluding events from our Juniper SRX logs. These events are ingested directly on our Windows Splunk Heavy Forwarders, since these two firewalls are the only syslog inputs we have. My current config is as follows:

inputs.conf
[udp://firewallip:port]
connection_host = ip
disabled = false
index = juniper
sourcetype = juniper

props.conf
[udp://firewallip:port]
TRANSFORMS-null = TenantToTrust,TrustToTenant
force_local_processing = true

transforms.conf
[TenantToTrust]
REGEX = source-zone-name="tenant".*destination-zone-name="trust"
DEST_KEY = queue
FORMAT = nullQueue

[TrustToTenant]
REGEX = source-zone-name="trust".*destination-zone-name="tenant"
DEST_KEY = queue
FORMAT = nullQueue

All we'd like to do is exclude any events where the source and destination zones are tenant and trust, in either direction. Any idea where I might be going wrong? Thanks.
Hello everyone, I am using Splunk Studio to create a dashboard with two tabs (Enterprise version 9.4.1). Both tabs are visually identical, but in tab 1 I am querying summary indexes, whereas on the second tab I am running normal queries. 'Normal' queries in this tab can be very intensive if a long time range is selected; therefore, I am trying to limit the time selection to a maximum range of two hours. It could be on any day, but the duration between start and end time should not exceed 2 hours (not the latest 2 hours). I've tried editing the XML by following some AI suggestions. Most suggestions relied on changing the query itself, but this was breaking the query and returning no results in the end. Wondering if someone already has any insights on how to do this or could guide me in the right direction? Visually it would look like this:
This looks very promising. Thank you for your valued input!