All Posts


This looks very promising. Thank you for your valued input!
As usual - "it depends". During normal indexing a single pipeline engages 4-6 CPUs. So if you have a host which does nothing but ingestion processing (an HF), you can relatively harmlessly raise your number of pipelines and the performance scales quite well (maybe not perfectly linearly, but not much worse). On an indexer, however, you have to remember two things:
1) You are still limited by the fact that everything has to be written to disk at the end of the pipeline, so the performance improvement will be significantly less than linear.
2) Indexers typically spend most of their time searching. Tying CPUs to ingest processing leaves far fewer resources for searching, which can lead to long-running, delayed, or skipped searches.
So on a modern, reasonably sized box with a typical use case, 1 or 2 parallel ingestion pipelines are indeed the optimal setting. With a slightly atypical architecture (for example a separate HF layer which does the heavy lifting while the indexers only receive parsed data and write it to disk), you could consider raising the parameter further.
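For reference, a minimal sketch of how this setting is typically configured in server.conf on the instance that does the ingestion (the value of 2 is only an illustration, not a recommendation for any particular environment):

[general]
parallelIngestionPipelines = 2

A restart of the Splunk instance is needed for the change to take effect, and on indexers the value should be weighed against the CPU you want to keep available for searching, as described above.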
Hi @nopera
The docs state "You ONLY need to install these add-ons on FORWARDERS." - the emphasis on ONLY is their wording, not mine! However, after investigating the contents of the app it's clear there are field extractions which need to be on your search head, and time/event parsing that needs to be on your indexers (since you are using a Universal Forwarder). Please install the app on your search heads and indexers using your usual app deployment approach, and this should provide the relevant field extraction / CIM compliance.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
@nopera
I recommend installing the add-on on both the indexers and the search heads. Indexers are responsible for index-time operations such as parsing, data transformation, and routing, so any add-on containing props.conf or transforms.conf should be deployed to the indexers. Search heads handle search-time functions, including dashboards, lookups, macros, and CIM mappings. It is safe to install the add-on on the search heads for search-time functionality; doing so won't interfere with index-time processes, provided those configurations are also present on the indexers. In general, it's best practice to install the add-on across all relevant tiers (indexers, search heads, and forwarders) and enable only the necessary components on each, depending on the role of the system. https://docs.splunk.com/Documentation/AddOns/released/Overview/Wheretoinstall
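As a rough illustration of the index-time/search-time split described above (the sourcetype name and settings here are hypothetical examples, not taken from any specific add-on), the parsing settings in props.conf must be present where parsing happens (indexers or heavy forwarders), while the extraction settings take effect on the search head:

[my:custom:sourcetype]
# index-time settings, applied on indexers / heavy forwarders
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
# search-time setting, applied on the search head
EXTRACT-user = user=(?<user>\S+)

Deploying the full add-on to both tiers is the simplest approach, since each tier only applies the settings relevant to its role.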
@nopera
If you are using indexers (or a standalone Splunk Enterprise instance), follow these steps:
1) Deploy the TA-Exchange-Mailbox add-on to the indexer at the following path: /opt/splunk/etc/apps/TA-Exchange-Mailbox
2) Restart the Splunk service on the indexer to apply the changes.
3) On the Universal Forwarder, verify that inputs.conf is correctly configured with the appropriate sourcetype for message tracking logs.
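For the last step, a minimal sketch of what the monitor stanza on the Universal Forwarder might look like (the log path is a typical Exchange default and the sourcetype shown is only a placeholder; check the TA-Exchange-Mailbox documentation for the exact sourcetype it expects):

[monitor://C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking\*.log]
# placeholder - use the sourcetype defined by TA-Exchange-Mailbox
sourcetype = <sourcetype-expected-by-TA-Exchange-Mailbox>
index = exchange
disabled = false

The key point is that the sourcetype set here must match the one the add-on's props.conf on the indexer is keyed on, otherwise the parsing rules never apply.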
@kiran_panchavat
I don't use a heavy forwarder; I installed a Universal Forwarder on the Exchange server and placed the add-on "TA-Exchange-Mailbox" (the server is in the mailbox role) in the path "C:\Program Files\SplunkUniversalForwarder\etc\apps". Now I am getting the logs, but the message tracking logs aren't parsed correctly. What should I do now? Example logs below from test env.
Did you get an answer to this? Can you help with the resolution you obtained?
Thanks, "AND" is uppercase in both examples, but the issue persists. I followed your suggestion and checked the search job properties; the eventSearch changes to:
index=my_index System="MySystem*" (Title=A OR Title=B OR Title=C OR Title=D OR Title=E OR (Title=F FROM=1) OR (Title=G FROM=2))
Still not working via REST, unfortunately.
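For reference, a search like this is typically submitted to the REST search endpoint along these lines (a minimal sketch; the hostname, credentials, and time range are placeholders, not details taken from this thread). Note the explicit leading search command and the use of --data-urlencode so the parentheses and quotes survive the transport:

curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
  --data-urlencode search='search index=my_index System="MySystem*" (Title=A OR Title=B OR (Title=F FROM=1) OR (Title=G FROM=2))' \
  -d earliest_time=-24h -d latest_time=now

If the eventSearch in the job properties already matches the UI search, comparing the rest of the job properties between the UI job and the REST job is usually the next step.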
Thank you Prewin, that has worked.
@Showkat_CT
Splunk SOAR Cloud is a managed SaaS offering: you need a Splunk subscription or trial with SOAR enabled. You'll receive a dedicated SOAR Cloud instance and login credentials from Splunk. If you haven't yet, reach out to your Splunk account team or open a support case to get your Cloud environment provisioned.
https://help.splunk.com/en/splunk-soar/soar-cloud/administer-soar-cloud/introduction-to-splunk-soar-cloud/administer-splunk-soar-cloud?utm_source=chatgpt.com
You'll receive a URL (like https://<your-org>.soar.splunkcloud.com) plus a default admin email and temporary password.
https://help.splunk.com/en/splunk-soar/soar-cloud/administer-soar-cloud/introduction-to-splunk-soar-cloud/take-a-tour-of-splunk-soar-cloud-and-perform-product-onboarding-when-you-log-in-for-the-first-time?utm_source=chatgpt.com
Hello All, can anyone tell me how I can access Splunk SOAR Cloud?
Thanks, tried to filter downstream without success, unfortunately. I am using URL encoding.
CPU bottleneck.
Hi @verbal_666,
I tried parallelPipelines=4 but I went back to 2: indexing was better than with 2 pipelines, but I had issues with searches that became slower.
Ciao.
Giuseppe
@PrewinThomas @tej57
I have tried implementing the same, but for each redirection the dashboard loads only the default value mentioned on the redirecting dashboard.
My redirecting dashboard link:
https://host:8000/en-US/app/app_name/dashboard_name?form.time.earliest=2025-03-01T00:00:00.000&form.time.latest=now
I have also tried it by removing the default value:
{
    "type": "input.timerange",
    "options": {
        "token": "passed_time"
    },
    "title": "Global Time Range"
}
In this scenario I observed that my token is not being passed and the panels show "waiting for input". I validated it by capturing the tokens in an "ADD TEXT" field: the token passes its value, but my panel stays the same, showing "waiting for input". I have also tried a different default value for the input, still the same.
"inputs": {
    "input_global_trp": {
        "type": "input.timerange",
        "options": {
            "token": "passed_time",
            "defaultValue": "$passed_time.latest$,$passed_time.earliest$"
        },
        "title": "Global Time Range"
    }
},
I have also removed the whole input, and only captured the token at:
"defaults": {
    "dataSources": {
        "ds.search": {
            "options": {
                "queryParameters": {
                    "latest": "$passed_time.latest$",
                    "earliest": "$passed_time.earliest$"
                }
            }
        }
    }
}
Just so puzzled why my token values are not passed to the source, but are fine in the text box. Kindly advise.
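One detail worth double-checking (an assumption based on how input tokens are usually set from the URL, not something confirmed in this thread): with the input token named passed_time, the redirecting link would need to address that token name rather than time, roughly along these lines:

https://host:8000/en-US/app/app_name/dashboard_name?form.passed_time.earliest=2025-03-01T00:00:00.000&form.passed_time.latest=now

If the URL keeps form.time.* while the input's token is passed_time, the input never receives the values, and the panels that depend on $passed_time.earliest$ / $passed_time.latest$ keep waiting for input.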
Hi @livehybrid,
Thanks for providing the information. Using rsync, I tested with an All-in-One instance and it is working. But in a situation where there's a cluster involved for the indexers/search heads and it is in a separate network/location, will the same method work, or is there another method to follow?
Hi @isoutamo,
Thanks for providing some of the articles. For All-in-One, I tested using rsync and everything went quite smoothly. But in a situation where there's a cluster involved for the indexers/search heads and it is in a separate network/location, will the same method work, or is there another method to follow?
Trying a different token name on both dashboards doesn't work either.
Hi,
The Splunk OTel package has been updated. During the update, the configuration file was renamed to *.newrpm and a new one was created, like a default configuration file. I renamed the saved *.newrpm file back to *.yaml and restarted the service successfully.
Thanks for your help
Olivier
First, I suspect that you meant the input looks like

key              values
AdditionalInfo   user has removed device with id "alpha_numeric_field" in area "alpha_numeric_field" for user "alpha_numeric_field".
DeviceID         alpha_numeric_field
DeviceType       mobile_device
OS               Windows

Second, I have a question about the origin of "key" and "values". Could they come from a structure such as JSON? Maybe there is a better opportunity than at the end of processing.
Third, I suspect that you meant "the output would be" one event with the four fields

AdditionalInfo = user has removed device with id "alpha_numeric_field" in area "alpha_numeric_field" for user "alpha_numeric_field".
DeviceID = alpha_numeric_field
DeviceType = mobile_device
OS = Windows

Finally, if your Splunk is 8.1 or later, you can use JSON functions and the multivalue mode of foreach to do the job:

| eval idx = mvrange(0, mvcount(key))
| eval keyvalue = json_object()
| foreach idx mode=multivalue
    [eval keyvalue = json_set(keyvalue, mvindex(key, <<ITEM>>), mvindex(values, <<ITEM>>))]
| spath input=keyvalue
| fields - idx key values keyvalue

Here is an emulation for you to play with and compare with real data:

| makeresults format=csv data="key,values
AdditionalInfo,user has removed device with id \"alpha_numeric_field\" in area \"alpha_numeric_field\" for user \"alpha_numeric_field\".
DeviceID,alpha_numeric_field
DeviceType,mobile_device
OS,Windows"
| stats list(*) as *
``` data emulation above ```