Thank you Prewin, that has worked.
@Showkat_CT Splunk SOAR Cloud is a managed SaaS offering, so you need a Splunk subscription or trial with SOAR enabled. You'll receive a dedicated SOAR Cloud instance and login credentials from Splunk. If you haven't yet, reach out to your Splunk account team or open a support case to get your Cloud environment provisioned. https://help.splunk.com/en/splunk-soar/soar-cloud/administer-soar-cloud/introduction-to-splunk-soar-cloud/administer-splunk-soar-cloud?utm_source=chatgpt.com You'll receive a URL (like https://<your-org>.soar.splunkcloud.com) plus a default admin email and temporary password. https://help.splunk.com/en/splunk-soar/soar-cloud/administer-soar-cloud/introduction-to-splunk-soar-cloud/take-a-tour-of-splunk-soar-cloud-and-perform-product-onboarding-when-you-log-in-for-the-first-time?utm_source=chatgpt.com
Hello all, can anyone tell me how I can access Splunk SOAR Cloud?
Thanks, I tried to filter downstream without success, unfortunately. I am using URL encoding.
CPU bottleneck.
Hi @verbal_666, I tried parallelIngestionPipelines=4 but came back to 2: indexing was better with 4, but searches became slower. Ciao. Giuseppe
@PrewinThomas @tej57 I have tried implementing the same, but on each redirection the dashboard loads only the default value set on the redirecting dashboard.

My redirecting dashboard link:
https://host:8000/en-US/app/app_name/dashboard_name?form.time.earliest=2025-03-01T00:00:00.000&form.time.latest=now

I have also tried removing the default value:

{
    "type": "input.timerange",
    "options": {
        "token": "passed_time"
    },
    "title": "Global Time Range"
}

In this scenario I observed that my token is not being passed and the panels show "waiting for input". I validated this by capturing the tokens in an "Add Text" element: the token passes its value there, but the panels remain stuck on "waiting for input". I have also tried a different default value for the input, still the same:

"inputs": {
    "input_global_trp": {
        "type": "input.timerange",
        "options": {
            "token": "passed_time",
            "defaultValue": "$passed_time.latest$,$passed_time.earliest$"
        },
        "title": "Global Time Range"
    }
},

I have also removed the whole input and only captured the token at:

"defaults": {
    "dataSources": {
        "ds.search": {
            "options": {
                "queryParameters": {
                    "latest": "$passed_time.latest$",
                    "earliest": "$passed_time.earliest$"
                }
            }
        }
    }
}

I'm just puzzled why my token values are not passed to the data source but show fine in the text box. Kindly advise.
Hi @livehybrid, thanks for providing the information. Using rsync, I tested with an All-in-One deployment and it is working. But in a situation where a cluster is involved for the indexer/search head and it sits on a separate network/location, will the same method work, or is there another method to follow?
Hi @isoutamo, thanks for providing the articles. For All-in-One, I tested using rsync and everything went quite smoothly. But in a situation where a cluster is involved for the indexer/search head and it sits on a separate network/location, will the same method work, or is there another method to follow?
Trying a different token name on both dashboards doesn't work either.
Hi, the Splunk OTel package has been updated. During the update, the existing configuration file was renamed to *.newrpm and a new default configuration file was created. I renamed the saved *.newrpm file back to *.yaml and restarted the service successfully. Thanks for your help. Olivier
First, I suspect that you meant the input looks like this: a single event with two multivalue fields, key and values, paired by position.

key: AdditionalInfo, DeviceID, DeviceType, OS
values: user has removed device with id "alpha_numeric_field" in area "alpha_numeric_field" for user "alpha_numeric_field"., alpha_numeric_field, mobile_device, Windows

Second, I have a question about the origin of "key" and "values". Could they come from a structure such as JSON? Maybe there is a better opportunity earlier in the pipeline than at the end of processing.

Third, I suspect that you meant "the output would be" four separate fields:

AdditionalInfo = user has removed device with id "alpha_numeric_field" in area "alpha_numeric_field" for user "alpha_numeric_field".
DeviceID = alpha_numeric_field
DeviceType = mobile_device
OS = Windows

Finally, if your Splunk is 8.1 or later, you can use JSON functions and the multivalue mode of foreach to do the job:

| eval idx = mvrange(0, mvcount(key))
| eval keyvalue = json_object()
| foreach idx mode=multivalue
    [eval keyvalue = json_set(keyvalue, mvindex(key, <<ITEM>>), mvindex(values, <<ITEM>>))]
| spath input=keyvalue
| fields - idx key values keyvalue

Here is an emulation for you to play with and compare with real data:

| makeresults format=csv data="key,values
AdditionalInfo,user has removed device with id \"alpha_numeric_field\" in area \"alpha_numeric_field\" for user \"alpha_numeric_field\".
DeviceID,alpha_numeric_field
DeviceType,mobile_device
OS,Windows"
| stats list(*) as *
``` data emulation above ```
@seetide Is this what you are trying to achieve?

Ignore events where "NONE" appears only in allowed fields (e.g., ALLOWED1, ALLOWED2, ALLOWED3).
Include events where "NONE" appears in any other field, even if it also appears in allowed fields.

It would be great if you could post some examples; see the quick illustration below of how I'm reading the requirement.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
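PS: if that reading is right, here is a small Python sketch of the intended include/exclude logic, just to confirm we mean the same thing. The field names come from your description; the sample events are made-up stand-ins, not from your data:

# Illustration only: include an event when "NONE" appears in any field
# outside the allowed set, even if it also appears in an allowed field.
ALLOWED = {"ALLOWED1", "ALLOWED2", "ALLOWED3"}

def include(event):
    return any(v == "NONE" for k, v in event.items() if k not in ALLOWED)

events = [
    {"ALLOWED1": "NONE", "OTHER": "x"},     # ignored: "NONE" only in an allowed field
    {"ALLOWED1": "NONE", "OTHER": "NONE"},  # included: "NONE" also in another field
    {"FIELDX": "NONE"},                     # included
]
print([include(e) for e in events])  # [False, True, True]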
@vishalduttauk In a regular search, RecipientAddress is extracted at search time, so you can use it directly in eval. In Ingest Actions, however, you're working with the raw event stream before field extractions happen. As a workaround, you can drop events that contain this email address with:

NOT match(_raw, "splunk\.test@test\.co\.uk")

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
@tomapatan Can you try the below?

search_query = '''
search index=my_index System="MySystem*" (Title=A OR Title=B OR Title=C OR Title=D OR Title=E OR Title=F OR Title=G)
| eval include=if((Title="F" AND FROM="1") OR (Title="G" AND FROM="2") OR match(Title, "^[ABCDE]$"), 1, 0)
| where include=1
'''

Note: since you are calling this from Python, make sure the query is URL-encoded. Without encoding, the API may misinterpret or strip special characters. See the sketch below.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
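For completeness, a minimal sketch of submitting that query to the Splunk REST API with proper encoding. It assumes token authentication, the default management port 8089, and a placeholder host; none of these come from the original post, so adjust for your environment:

import urllib.parse
import urllib.request

SPLUNK_HOST = "https://splunk.example.com:8089"  # placeholder host and port
AUTH_TOKEN = "your-token-here"                   # placeholder token

search_query = '''
search index=my_index System="MySystem*" (Title=A OR Title=B OR Title=C OR Title=D OR Title=E OR Title=F OR Title=G)
| eval include=if((Title="F" AND FROM="1") OR (Title="G" AND FROM="2") OR match(Title, "^[ABCDE]$"), 1, 0)
| where include=1
'''

# urlencode() percent-encodes the quotes, pipes, and equals signs so the
# SPL reaches the API intact.
body = urllib.parse.urlencode({"search": search_query, "output_mode": "json"}).encode()

request = urllib.request.Request(
    SPLUNK_HOST + "/services/search/jobs",
    data=body,
    headers={"Authorization": "Bearer " + AUTH_TOKEN},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode())  # returns the search job SID as JSON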
@AsmaF2025 Try using a custom token like passed_time in the redirect URL and the dashboard input.

Drilldown URL:
https://asdfghjkl:8000/en-US/app/app_name/dashboard_name?form.passed_time.earliest=$global_time.earliest$&form.passed_time.latest=$global_time.latest$

On the redirecting dashboard:

{
    "type": "input.timerange",
    "options": {
        "token": "passed_time",
        "defaultValue": "-24h@h,now"
    },
    "title": "Global Time Range"
}

Then in your dashboard's defaults section:

"queryParameters": {
    "earliest": "$passed_time.earliest$",
    "latest": "$passed_time.latest$"
}

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
Perfect. That's what I wanted to know. Many thanks.
@moriteza If this happened after upgrading to 9.2+, add the configuration below to outputs.conf on the deployment server, then restart the Splunk service on the deployment server.

[indexAndForward]
index = true
selectiveIndexing = true

#https://community.splunk.com/t5/Deployment-Architecture/The-Client-forwarder-management-not-showing-the-clients/m-p/677225
#https://help.splunk.com/en/splunk-enterprise/administer/manage-and-update-deployment-servers/9.2/configure-the-deployment-system/upgrade-pre-9.2-deployment-servers

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
@verbal_666 parallelIngestionPipelines = 2 is considered the optimal setting for most deployments. Increasing it beyond 2 is technically feasible but generally not advised unless you proceed with significant caution and have confirmed your infrastructure can support the additional load. I tested with 4 (not more than that) but experienced instability, especially during bursty loads and when additional apps were introduced. For this reason, I'm keeping the setting at 2; it has proven more stable in my environment. In theory, setting it to 4 lets you ingest more data in parallel, but there is a high risk of OOM and crashes. Splunk strongly recommends consulting Professional Services if you want to go beyond 2. See the stanza sketch below for where the setting lives.

#https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.4/manage-indexes/manage-pipeline-sets-for-index-parallelization

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
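For reference, the setting goes in server.conf on each indexer, under the [general] stanza (a minimal sketch; the value shown is the one discussed above, adjust per the linked docs):

[general]
parallelIngestionPipelines = 2

Each additional pipeline set duplicates the full ingestion pipeline (parsing, merging, typing, and indexing queues), which is why CPU and memory headroom matter before raising it.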
Hello. I'm currently using parallelIngestionPipelines = 2 on my indexers. It works. The servers (Linux) are professional-grade, with 24 CPUs and 48 GB RAM.

I'm wondering, has anyone ever tried parallelIngestionPipelines = 4 on their indexers? Does it work? Does it crash?

Thanks.