All Posts


@vishalduttauk  In a regular search, RecipientAddress is extracted at search time, so you can use it directly in eval. In Ingest Actions, however, you're working with the raw event stream before field extractions happen. As a workaround, you can drop events that contain this email address with:

NOT match(_raw, "splunk\.test@test\.co\.uk")

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
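If you want to sanity-check that regex outside Splunk first, here is a minimal Python sketch of the same unanchored match that eval's match() performs; the sample event text below is made up for illustration:

import re

# The same pattern used in the Ingest Actions filter expression above.
pattern = re.compile(r"splunk\.test@test\.co\.uk")

# Hypothetical raw event containing the address to be dropped.
raw_event = '2024-05-01 10:00:00 RecipientAddress="splunk.test@test.co.uk" status=delivered'

# Splunk's match() returns TRUE when the regex matches anywhere in the subject,
# which corresponds to re.search() here; NOT match(...) keeps only non-matching events.
drop = pattern.search(raw_event) is not None
print("drop event" if drop else "keep event")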
@tomapatan  Can you try the below?

search_query = '''
search index=my_index System="MySystem*" (Title=A OR Title=B OR Title=C OR Title=D OR Title=E OR Title=F OR Title=G)
| eval include=if((Title="F" AND FROM="1") OR (Title="G" AND FROM="2") OR match(Title, "^[ABCDE]$"), 1, 0)
| where include=1
'''

Note: since you are using Python, make sure the query is URL-encoded when it is sent; without encoding, the API may misinterpret or strip special characters such as quotes and parentheses.

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
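As a rough sketch of how that query could be submitted so the encoding is handled for you (the host, credentials, and certificate handling below are placeholders, not taken from the original post), posting the query as form data with the requests library URL-encodes it automatically:

import requests

# Hypothetical connection details - replace with your own search head and credentials.
BASE_URL = "https://splunk.example.com:8089"
AUTH = ("admin", "changeme")

search_query = '''
search index=my_index System="MySystem*" (Title=A OR Title=B OR Title=C OR Title=D OR Title=E OR Title=F OR Title=G)
| eval include=if((Title="F" AND FROM="1") OR (Title="G" AND FROM="2") OR match(Title, "^[ABCDE]$"), 1, 0)
| where include=1
'''

# Sending the query as form data lets requests URL-encode it, so quotes,
# parentheses, and pipes reach the search endpoint intact.
response = requests.post(
    f"{BASE_URL}/services/search/jobs/export",
    data={"search": search_query, "output_mode": "json"},
    auth=AUTH,
    verify=False,  # only for lab setups with self-signed certificates
    stream=True,
)
for line in response.iter_lines():
    if line:
        print(line.decode("utf-8"))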
@AsmaF2025  Try using a custom token like passed_time in the redirect URL and in the dashboard input.

Drilldown URL:
https://asdfghjkl:8000/en-US/app/app_name/dashboard_name?form.passed_time.earliest=$global_time.earliest$&form.passed_time.latest=$global_time.latest$

On the redirecting dashboard:
{
  "type": "input.timerange",
  "options": {
    "token": "passed_time",
    "defaultValue": "-24h@h,now"
  },
  "title": "Global Time Range"
}

Then in your dashboard's defaults section:
"queryParameters": {
  "earliest": "$passed_time.earliest$",
  "latest": "$passed_time.latest$"
}

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
Perfect, that's what I wanted to know. Many thanks.
@moriteza  If it started after upgrading to 9.2+, add the configuration below to outputs.conf on the deployment server, then restart the Splunk service on the deployment server.

[indexAndForward]
index = true
selectiveIndexing = true

References:
https://community.splunk.com/t5/Deployment-Architecture/The-Client-forwarder-management-not-showing-the-clients/m-p/677225
https://help.splunk.com/en/splunk-enterprise/administer/manage-and-update-deployment-servers/9.2/configure-the-deployment-system/upgrade-pre-9.2-deployment-servers

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
@verbal_666  parallelIngestionPipelines = 2 is considered the optimal setting for most deployments. Increasing it beyond 2 is technically feasible but generally not advised unless you proceed with significant caution and have confirmed your infrastructure can support the additional load. I tested with 4 (not higher) but experienced instability, especially during bursty loads and when additional apps were introduced. For this reason, I'm keeping the setting at 2; that configuration has proven more stable in my environment. Setting it to 4 theoretically lets you ingest more data in parallel, but it carries a high risk of OOM errors and crashes. Splunk strongly recommends consulting Professional Services if you want to go beyond 2.

Reference:
https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.4/manage-indexes/manage-pipeline-sets-for-index-parallelization

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
Hello. I'm currently using parallelIngestionPipelines = 2 on my indexers, and it works. The servers (Linux) are professional-grade, with 24 CPUs and 48 GB RAM.

I'm wondering: has anyone ever tried parallelIngestionPipelines = 4 on their indexers? Does it work? Does it crash?

Thanks.
@nopera Install the add-on in the /opt/splunk/etc/apps directory on the HF. If you're using a deployment server and plan to deploy the add-on to a heavy forwarder (HF), place the add-on in the /opt/splunk/etc/deployment-apps directory on the deployment server. Then, create a server class, add the HF to that server class, associate the app with it, and deploy it to the HF.
I don't know why I have to run the following for the spl2 file to show up:

~/splunk/bin/splunk download-spl2-modules app spl2-test -dest default

But I still get an error when I try to run |@spl2 from search1:

Error in 'SearchParser': The SPL2 query is invalid: 'unknown error: Unable to fetch roles for the user'.
Am I missing something? I have VS Code running the Splunk extension and I created a simple _default.spl2nb. I'm able to test it and get results back, and uploading it to the search app or a custom app (spl2-test) also gives me a success message. But when I go to <app>/default/data in the Splunk deployment, I don't see an spl2 folder at all. What's going on? Thanks.
Hi @studero  The error is being caused by a misconfiguration in your /etc/otel/collector/agent_config.yaml file. Could you share this file (redacted if required)? Based on the logs, the service.telemetry.metrics section contains an invalid "address" key. As of Collector v0.123.0, the service::telemetry::metrics::address setting is ignored and should instead be configured as:

service:
  telemetry:
    metrics:
      readers:
        - pull:
            exporter:
              prometheus:
                host: '0.0.0.0'
                port: 8888

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @tomapatan  Is your first "and" lowercase in both examples? It should be uppercase; if it is made uppercase, does the search behave as expected or do you still get the issue? I'm just wondering if the UI does some correction before running the litsearch.

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi Everyone, I'm running a query via the Splunk REST API (using Python) and need to filter events based on the following requirements:
- Always include events where TITLE is one of: A, B, C, D, E
- Only include events where (TITLE=F and FROM=1) OR (TITLE=G and FROM=2)

This works fine in Splunk Web, but when sent via the REST API the conditional clause for TITLEs F and G doesn't get applied correctly.

Works via Splunk Web and REST (without filtering based on FROM):
index=my_index System="MySystem*" Title=A OR Title=B OR Title=C OR Title=D OR Title=E OR Title=F OR Title=G

Works on Web, not via REST (filtering based on FROM):
index=my_index System="MySystem*" Title=A OR Title=B OR Title=C OR Title=D OR Title=E OR (Title=F and FROM=1) OR (Title=G AND FROM=2)

I've tried to apply the filtering downstream, but the issue persists. I'm unable to query a saved search because some fields are extracted at search time and aren't available when accessed via the REST API. As a result, I need to extract those fields directly within the query itself when using the REST API. (Note: the TITLE field is being extracted correctly.)

Many thanks.
Hi @dpridemore  Are you using Splunk Cloud Victoria or Splunk Cloud Classic? If you are on a Classic stack, it could be that this requires manual installation by support because it is not self-serviceable. However, the app shows that it supports up to Splunk 9.2, and your cloud stack is probably on a 9.3 build by now as 9.2 is getting old. Please could you confirm your cloud stack version? (Top right, Support & Services -> About)

Either way, you may be able to get this installed by going via Splunk Support, so it's worth logging a support case to see if they can help you out with this one.

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
For creating fields dynamically you can use the {} syntax, like:

| makeresults
| eval field="field1", {field}="value"

But the important question, and a possible issue here, is where you got the multivalued fields from. Remember that two distinct multivalued fields are... well, distinct. There is no relationship between their values whatsoever. And if you are creating a multivalued field by means of list() or values() and the original data didn't have some values, you can't tell which ones were empty. You're just getting a "squashed" list as a result.
There can be several possible causes, but since you say that the host has been "additionally hardened", I'd hazard a guess that you have an AppLocker policy preventing unknown/non-whitelisted apps from running. Since the event logs are ingested by spawning an external .exe, it will fail if that executable is not whitelisted.
Hi @keen  It's odd that it would work once but then stop with that error. As far as I know, the settings page within the app only has a single encrypted value, which is proxy_password - are you using a proxy with the input? Are there any other error lines around the one you posted which might provide more information?

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
The Akamai Guardicore app shows that it is a cloud app, but I don't see it as an option to install in Splunk Cloud. The add-on is available, just not the app.
Hello. Your questions are answered in the original post. Thank you.
We are running the Elasticsearch Data Integrator - Modular Input app to ingest logs from Elasticsearch into Splunk. However, the app only works when Splunk is restarted; it stops working a few minutes later and does not work again until the next time Splunk is restarted. Error message:

ERROR PersistentScript [3778898 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/TA_elasticsearch_data_integrator___modular_input_rh_settings.py persistent}: f"Failed to get password of realm={self._realm}, user={user}."

Can you help fix the problem?