All Topics


Hi Team, I have been getting a skipped search notification in my CMC overview under Health for quite some time. It is a scheduled report.

Search name: ESS - Notable Events
Cron: every 5 mins (1-59/5 * * * *)
Time range: earliest -48d@d ; latest +0s (now)
Message: The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached

Search query:
`notable` | search NOT `suppression` | eval timeDiff_type=case(_time>=relative_time(now(), "-24h@h"),"current", 1=1, "historical") | expandtoken rule_title | table _time,event_id,security_domain,urgency,rule_name,rule_title,src,dest,src_user,user,dvc,status,status_group,owner,timeDiff_type,governance,control | outputlookup es_notable_events | stats count

It writes its output to a lookup and takes around 8 minutes to run when checked under Job Management. Can someone help me understand where the issue lies and what is making this particular search skip? The skip percentage is around 50% and the status is critical.
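A minimal diagnostic sketch (the savedsearch_name below is taken from the post) that asks the scheduler itself why it skipped:

index=_internal sourcetype=scheduler savedsearch_name="ESS - Notable Events" status=skipped
| stats count by reason

A search scheduled every 5 minutes that takes ~8 minutes to run can never finish before its next scheduled run, so concurrent instances pile up until the per-search concurrency cap is hit and subsequent runs are skipped; the reason field in the scheduler log should confirm whether that is what is happening here.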
We are building an iOS app that uses URLSession for its network traffic. AppDynamics does not collect any of the traffic made with async/await; for the traffic that uses the traditional completion-handler APIs, AppDynamics still collects properly.
Hello Community, when we try to open a link to a Splunk URL without a language setting, e.g. via the "Show results" link in an email alert like https://our-splunk-address/app/some-app/alert?s=some-alert, the request gets redirected automatically to something like https://our-splunk-address/de/app/some-app/alert?s=some-alert, which does not work. The URL should be https://our-splunk-address/de-DE/app/some-app/alert?s=some-alert (see Configure user language and locale | Splunk Docs).

This incorrect redirect only happens in our production environment, and only if the language setting of the browser is set to German. English works fine (the redirect is .../en-GB/...). We tested different browsers (Edge, Firefox) with the same results. Our test environment uses the same browsers and redirects correctly, and we can't fathom any configuration differences between test and production that could explain this behaviour.

Have you experienced a similar phenomenon, or can you give me a hint where to look for further clues? Regards, Jens
I am trying to clean up a field that has a suffix of sophos_event_input after the username. Example:

Username_Field
Joe-Smith, Adams sophos_event_input
Jane-Doe, Smith sophos_event_input

I would like the Username field to contain only the user's name. Example:

Username_Field
Joe-Smith, Adams
Jane-Doe, Smith

Basically I want to get rid of the sophos_event_input suffix. How would I go about this?
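A minimal sketch of one search-time approach, assuming the field is literally named Username_Field as in the examples:

... base search ...
| rex mode=sed field=Username_Field "s/\s*sophos_event_input\s*$//"

rex mode=sed rewrites the field in place; if the suffix should never be indexed at all, the same sed expression could instead be applied to the raw event with SEDCMD in props.conf.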
I want to ask about my lab's clustered indexers. I hit maximum primary storage capacity on my indexers. Last time I simply reduced maxWarmDBCount based on the existing buckets, and after 1-2 days the warm buckets rolled to secondary storage. But this time I applied the same modification again and nothing has rolled for a week. I have cross-checked the buckets created for the index, and the count is still above the limit I set; I also checked that the config was distributed. Has anyone faced this scenario? #splunk
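A hedged way to see what the peers actually hold (the index name is a placeholder): count buckets per state on each indexer and compare against the configured cap.

| dbinspect index=my_index
| stats count by state, splunk_server

If warm counts stay above maxWarmDBCount, it is worth confirming which config file actually wins on each peer, e.g. with splunk btool indexes list my_index --debug on the indexer itself.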
I don't understand why the legacy 'run a script' alert action has been deprecated. The official guidelines for creating a 'Custom Alert Action' are too complicated to follow. I attempted to find a guide via Google, but there are too many conflicting methods, and I consistently failed to implement them. I just want a simple and straightforward guide to creating a 'Custom Alert Action' that runs a batch file (script.bat) or a PowerShell script file (script.ps1) when the alert is triggered, or a 'custom alert action' that does exactly the same thing as the deprecated 'run a script' alert action (just type the batch file name and that's it).

Environment: Splunk Enterprise 9.1 (Windows)
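Not an official guide, but a minimal sketch of the modern skeleton under stated assumptions (the app, stanza, and file paths below are all hypothetical names): a stanza in alert_actions.conf plus a same-named Python script in the app's bin directory, which Splunk invokes with --execute and the alert payload as JSON on stdin.

default/alert_actions.conf
[run_batch]
is_custom = 1
label = Run batch file
payload_format = json

bin/run_batch.py
import json
import subprocess
import sys

if __name__ == "__main__" and "--execute" in sys.argv:
    # Splunk hands the alert details (search name, results file, etc.) as JSON on stdin
    payload = json.loads(sys.stdin.read())
    # Hypothetical path: launch the batch file the old 'run a script' action used to run
    result = subprocess.run(["cmd.exe", "/c", "C:\\scripts\\script.bat"])
    sys.exit(result.returncode)

After a restart, the action appears in the trigger-action list under its label; the same pattern should work for PowerShell by swapping the subprocess command for powershell.exe -File script.ps1.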
Dears, hope you are doing well. I would like to request your assistance regarding an issue we've encountered after upgrading Splunk Enterprise from version 9.1.5 to 9.4.0. Since the upgrade, the Forwarder Management (Deployment Server) functionality is no longer working as expected. Despite multiple troubleshooting attempts, the issue persists. I have attached a screenshot showing the specific error encountered. I would greatly appreciate your guidance or recommendations to help resolve this matter. Please let me know if any additional logs or configuration details are needed. Thank you in advance for your support.
I am using the Cisco Security Cloud integration to try to import my Duo logs into Splunk Enterprise (on-prem). Following a plethora of directions, including the Duo Splunk Connector guide, I still cannot get it to work. No data comes through and it stays in a "Not Connected" status. So far, I have verified that:
- the Admin API token has the correct permissions
- the integration is configured with the correct Admin API info (secret key, integration key, API hostname, etc.)
- I am using the newest version of the app: Cisco Security Cloud

Does anyone have any tips for troubleshooting this issue? I cannot seem to find any logs or anything to get a more detailed error than "Not Connected" when I am fairly sure it should be working.
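One hedged starting point (the search terms are assumptions; the add-on may log under a different name): input and API errors from apps usually surface in splunkd's internal logs.

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) (cisco OR duo)

Checking $SPLUNK_HOME/var/log/splunk for an app-specific log file can also turn up a more useful error than the UI's "Not Connected".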
Hello, I start Splunk 9.4.3 as a Docker container from the image registry.hub.docker.com/splunk/splunk:latest. However, it terminates after approx. 60 seconds with the message:

TASK [splunk_standalone : Get existing HEC token] ******************************
fatal: [localhost]: FAILED! => { "changed": false }
MSG: GET /services/data/inputs/http/splunk_hec_token?output_mode=json (admin, ********, 8089, expected [200, 404]); exception: API call for https://127.0.0.1:8089/services/data/inputs/http/splunk_hec_token?output_mode=json with data None failed with status code 401: {"messages":[{"type": "ERROR", "text": "Unauthorised"}]}

PLAY RECAP *********************************************************************
localhost : ok=69 changed=3 unreachable=0 failed=1 skipped=69 rescued=0 ignored=0

If I start the container with "sleep infinity" and then exec into the container, I can start Splunk with "splunk start" and Splunk works perfectly. Can anyone tell me what the problem is?
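For context, the splunk/splunk image is provisioned by Ansible at startup and needs, at minimum, license acceptance and a valid admin password via environment variables; a minimal sketch (the password value is a placeholder):

docker run -d -p 8000:8000 \
  -e SPLUNK_START_ARGS='--accept-license' \
  -e SPLUNK_PASSWORD='ChangeMe123!' \
  registry.hub.docker.com/splunk/splunk:latest

A 401 on the 'Get existing HEC token' task usually means the provisioning playbook could not authenticate as admin, which points at a missing or policy-rejected SPLUNK_PASSWORD rather than at Splunk itself (which would explain why a manual "splunk start" inside the container works).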
I am having issues trying to outputlookup to a new, empty KV Store lookup table I made. When I try to run the following search, I get this error:

Error in 'outputlookup' command: Lookup failed because collection '<collection>' in app 'SplunkEnterpriseSecuritySuite' does not exist, or user '<username>' does not have read access.

| makeresults
| eval <field_1>="test"
| eval <field_2>="test"
| eval <field_3>="test"
| eval <field_4>="test"
| fields - _time
| outputlookup <collection>

I redacted the actual data I am using, but it is formatted the same way as above. My KV Store lookup has global sharing and everyone can read/write, for testing purposes. What is wrong here and what can I do to fix this?
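Worth noting: outputlookup addresses a lookup definition, not the raw KV Store collection. A hedged sketch of the two pieces that normally have to exist (all names are placeholders):

collections.conf
[my_collection]

transforms.conf
[my_lookup]
external_type = kvstore
collection = my_collection
fields_list = _key, field_1, field_2, field_3, field_4

With that in place, | outputlookup my_lookup (the definition name) should work; pointing outputlookup at the bare collection name produces exactly this 'collection does not exist' error when no matching lookup definition is visible in the search context.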
Hello Splunk Community, I'm reaching out for guidance on handling Knowledge Objects (KOs) that reside in the default directory of their respective apps and cannot be deleted from the Splunk UI. We observed that some KOs throw the message "This saved search failed to handle removal request", which, as documented, is likely because the KO is defined in both the local and default directories.

I have a couple of questions:
1. Can default-directory KOs be deleted manually via the filesystem or another method, if not possible through the UI?
2. Is there a safe alternative, such as disabling them, if deletion is not possible?
3. From a list of KOs I have, how can I programmatically identify which ones reside in the default directory?

Also, is there a recommended way to handle overlapping configurations between default and local directories, especially when clean-up or access revocation is needed? Any best practices, scripts, or documentation references would be greatly appreciated!
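For question 3, a hedged sketch using btool, which prints the contributing file for every setting (the app name is a placeholder):

splunk btool savedsearches list --debug --app=my_app

Each output line is prefixed with the absolute path of the .conf file that supplies it, so default/ versus local/ definitions can be separated with a simple grep on the path.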
Hi, I upgraded Splunk Enterprise from 9.2.3 to 9.4.3, and the KV Store status is failed. It migrated successfully to 7.0.14 on one server automatically, but on the second server the migration did not start upon upgrade. Is there a solution to restore the KV Store status and migrate to 7.0.14? It is a standalone server and not part of a clustered environment. Some servers also have KV Store status Failed on version 9.2.3, and I want to fix the status before starting to upgrade them to 9.4.3.

This member:
backupRestoreStatus : Ready
disabled : 0
featureCompatibilityVersion : An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: 127.0.0.1:8191]
guid : xzy
port : 8191
standalone : 1
status : failed
storageEngine : wiredTiger
versionUpgradeInProgress : 0
Hi Everyone, please help me with this ask: I need Splunk to show the respective events with the current date instead of the date the file was placed on the host. For instance, if a file was placed on the server on 17th July, the events show with the date 17th July; instead I want the current date. If the current date is 22nd July, then the events' date should be 22nd July, and so on. I have tried DATETIME_CONFIG = CURRENT and DATETIME_CONFIG = NONE in props.conf, but it doesn't work.
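For reference, a minimal sketch of the setting (the sourcetype name is a placeholder):

props.conf
[my_sourcetype]
DATETIME_CONFIG = CURRENT

Two common reasons this appears not to work: DATETIME_CONFIG is applied at parse time, so it must sit on the first full Splunk instance that parses the data (indexer or heavy forwarder, not a search head or universal forwarder), and it only affects events indexed after the change; events already indexed keep their original timestamps.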
I have a scheduled export report for daily 11 PM from my monitoring dashboard. We are in the EST time zone and my dashboard provides the data as expected, but the report PDF in the mail has data based on the GMT time zone and doesn't match my dashboard numbers.

For example:
Expected time frame: 00:00 and 23:00 on 7/22
Getting GMT time frame: 00:00 and 03:00 on 7/23

How do I fix this? Thanks in advance.
Hi all, I'm having some issues excluding events from our Juniper SRX logs. These events are ingested directly on our Windows Splunk heavy forwarders, since these two firewalls are the only syslog inputs we have. My current config is as follows:

inputs.conf
[udp://firewallip:port]
connection_host = ip
disabled = false
index = juniper
sourcetype = juniper

props.conf
[udp://firewallip:port]
TRANSFORMS-null = TenantToTrust,TrustToTenant
force_local_processing = true

transforms.conf
[TenantToTrust]
REGEX = source-zone-name="tenant".*destination-zone-name="trust"
DEST_KEY = queue
FORMAT = nullQueue

[TrustToTenant]
REGEX = source-zone-name="trust".*destination-zone-name="tenant"
DEST_KEY = queue
FORMAT = nullQueue

All we'd like to do is exclude any events going from the tenant zone to the trust zone or from trust to tenant. Any idea where I might be going wrong? Thanks.
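One thing worth checking first: props.conf stanza headers match a sourcetype, a source:: pattern, or a host:: pattern, not an inputs.conf stanza name, so [udp://firewallip:port] in props.conf most likely never matches anything. A hedged sketch keyed on the sourcetype assigned in inputs.conf instead:

props.conf
[juniper]
TRANSFORMS-null = TenantToTrust,TrustToTenant

The transforms.conf stanzas can stay exactly as they are.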
Hello everyone, I am using Dashboard Studio to create a dashboard with two tabs (Enterprise version 9.4.1). Both tabs are visually identical, but in tab 1 I am querying summary indexes, whereas in the second tab I am running normal queries. 'Normal' queries in this tab can be very intensive if a long time range is selected, so I am trying to limit the time selection to a maximum range of two hours. It could be on any day, but the duration between start and end time should not exceed 2 hours (not the latest 2 hours). I've tried editing the dashboard source by following some AI suggestions; most relied on changing the query itself, but this broke the query and returned no results. Does anyone have insights on how to do this, or could you guide me in the right direction? Visually it would look like this:
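One lightweight option that leaves the query logic untouched: insert a guard right after the base search that blanks the results when the picked span is too wide. A minimal sketch (7200 seconds = 2 hours):

| addinfo
| where info_max_time - info_min_time <= 7200

addinfo attaches the search's effective time bounds as info_min_time/info_max_time (epoch seconds). One caveat: this empties the output for oversized ranges but does not stop the underlying search from scanning the data; truly blocking execution would need token logic in the dashboard definition itself.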
Hello All, can anyone tell me how I can access Splunk SOAR Cloud?
Hello. I'm currently using parallelIngestionPipelines = 2 on my indexers, and it works. The servers (Linux) are well provisioned, with 24 CPUs and 48 GB RAM. I'm wondering, has anyone ever tried parallelIngestionPipelines = 4 on their indexers? Does it work? Does it crash? Thanks.
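For reference, the knob lives in server.conf on each indexer; a minimal sketch:

server.conf
[general]
parallelIngestionPipelines = 4

Each pipeline set runs its own parsing/indexing threads and queues, with a commonly cited cost of several cores per set, so on a 24-CPU box four pipelines can work but will compete with search load; whether it helps or hurts is mostly a question of sustained CPU headroom and whether ingestion is actually the bottleneck.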
Am I missing something? I have VS Code running the Splunk extension and created a simple _default.spl2nb. I'm able to test it and get results back, and uploading to the Search app or a custom app spl2-test also gives me a success message. But when I go to the Splunk deployment's <app>/default/data, I don't see an spl2 folder at all. What's going on? Thanks.
Hi Everyone, I'm running a query via the Splunk REST API (using Python) and need to filter events based on the following requirements:
- Always include events where TITLE is one of: A, B, C, D, E
- Only include events where TITLE=F and FROM=1, or TITLE=G and FROM=2

This works fine in Splunk Web, but when sent via the REST API the conditional clause for TITLEs F and G doesn't get applied correctly.

Works via Splunk Web and REST (without filtering based on FROM):
index=my_index System="MySystem*" Title=A OR Title=B OR Title=C OR Title=D OR Title=E OR Title=F OR Title=G

Works on Web, not via REST (filtering based on FROM):
index=my_index System="MySystem*" Title=A OR Title=B OR Title=C OR Title=D OR Title=E OR (Title=F and FROM=1) OR (Title=G AND FROM=2)

I've tried to apply the filtering downstream, but the issue persists. I'm unable to query a saved search because some fields are extracted at search time and aren't available when accessed via the REST API. As a result, I need to extract those fields directly within the query itself when using the REST API. (Note: the TITLE field is being extracted correctly.)

Many thanks.
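Two things worth ruling out, since both can produce a works-in-Web-but-not-REST pattern. First, SPL boolean operators are case-sensitive, and the failing query mixes a lowercase 'and' with 'AND'; a hedged rewrite (names as in the post):

search index=my_index System="MySystem*" Title=A OR Title=B OR Title=C OR Title=D OR Title=E OR (Title=F AND FROM=1) OR (Title=G AND FROM=2)

Second, when submitting to the REST API's search/jobs endpoint, the leading 'search' keyword must be part of the search string; Splunk Web adds it implicitly, which can mask such differences during interactive testing.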