All Posts

Hi @Cheng2Ready , first check that the date format is the same both in the events (after the eval command) and in the lookup, then try inserting into the lookup a test date that you're sure has events. Lastly, don't put this condition in the alert definition: put the condition inside the alert search instead. In other words, in the alert definition use "number of results > 0" and use this search:

index=xxxxxx
| eval HDate=strftime(_time,"%Y-%m-%d")
| search NOT [ | inputlookup Date_Test.csv | fields HDate ]
| stats count
| append [ | makeresults | eval count=0 | fields count ]
| stats sum(count) AS total
| where total>1 OR total=0

In this way, with the trigger condition moved into the search, you can check your search results before alerting. Ciao. Giuseppe
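As a quick check on the date-format point above, a small sketch (using the Date_Test.csv and HDate names from this thread) shows whether the lookup values actually parse as %Y-%m-%d; a null parsed value means that row will never match the HDate computed in the events:

| inputlookup Date_Test.csv
| eval parsed=strptime(HDate, "%Y-%m-%d")
| table HDate parsed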
From what I have read and understood in many discussions, even when there are several pipelines they all share a single input part. My understanding is that there is one input, the pipelines start after it, and it's possible for that input to become blocked, which then blocks the other pipelines as well.
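For reference, the number of pipeline sets on a forwarder is configured in server.conf; a minimal sketch (the value 2 is purely illustrative, not a recommendation):

# server.conf on the forwarder
[general]
# each pipeline set is a full input -> parsing -> output chain
parallelIngestionPipelines = 2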
This is what I have set up:

index=xxxxxx
| eval HDate=strftime(_time,"%Y-%m-%d")
| search NOT [ | inputlookup Date_Test.csv | fields HDate ]

The search always returns 1 event. The alert condition is: if it sees more than 1 event OR 0 events, trigger an alert. The issue I'm facing now is with the lookup table dates. Let's say I have April 14th set up in my lookup table file "Date_Test.csv". On April 14th it still fired an alert, and I'm not sure if that's because it saw 0 events? It is supposed to be muted on that day. Any insight and help would be much appreciated.
@ssuluguri  If you enable Dynamic Data Self-Storage (DDSS) to export your aged ingested data, the oldest data is moved to an Amazon S3 account in the same region as your Splunk Cloud deployment before it is deleted from the index. You are responsible for AWS payments for the use of the Amazon S3 account. When data is deleted from the index, it is no longer searchable by Splunk Cloud. Customers are responsible for managing DDSS and a non-Splunk-Cloud stack for searching archived data. This is a manual process and customers will require a professional services engagement. https://docs.splunk.com/Documentation/SplunkCloud/latest/Admin/DataSelfStorage  NOTE: DDSS Data Egress - No limit - Export 1 TB/hr; Must be in the same region as the indexing tier https://www.splunk.com/en_us/blog/platform/dynamic-data-data-retention-options-in-splunk-cloud.html
Hi @ssuluguri  You mention that the customer wants their data to be readable after moving off Splunk Cloud; does this mean it would need to be in raw format? The easiest way to get data out of Splunk Cloud, in my experience, is to use Dynamic Data: Self-Storage (DDSS), storing frozen buckets in the customer's S3 bucket. Once it is there you can do a number of things with it: 1) Thaw it out into a Splunk instance with a minimal/free license (you won't be ingesting new data) 2) Extract the journal file from the DDSS buckets, leaving you with the raw data. Would the customer be willing to have a small Splunk instance with their archived data in it for easy searching? If it helps, I've got a repo at https://github.com/livehybrid/ddss-restore which is primarily for converting DDSS back into SmartStore buckets for use as a semi-offline (in-case-of-emergencies-style) data store. Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
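For option 2, a minimal sketch of pulling raw events back out of a downloaded DDSS bucket using exporttool on a local Splunk Enterprise install (the bucket path and output file are placeholders; check the exporttool documentation for your version):

# BUCKET is a DDSS/frozen bucket directory downloaded from the customer's S3 bucket
BUCKET=/data/ddss/db_1694000000_1693000000_42
# exporttool reads the rawdata journal directly and writes the events out as CSV
/opt/splunk/bin/splunk cmd exporttool "$BUCKET" /data/export/bucket_42.csv -csv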
I just needed some help from Splunk regarding a request from one of our clients. The client is migrating from Splunk to Sentinel but still has about 25 TB of data on Splunk Cloud which they want to keep for at least a year. The data should remain readable for investigations and compliance purposes. I know the client might need Splunk Professional Services for all the options mentioned above since it's Splunk Cloud, but what would be the best and most cost-effective solution for them? Can you please help and advise on the best way forward?
The AppDynamics Machine Agent supports remediation scripts, which allow you to define automated or manual actions in response to specific alerts or conditions. These scripts can be triggered when a predefined health rule violation occurs, enabling proactive responses to issues in your environment. Below is an overview of how remediation scripts work in the Machine Agent and how to configure and use them.

What Are Remediation Scripts?
Remediation scripts are custom scripts (written in languages like Shell, Python, Batch, or PowerShell) that are executed by the Machine Agent when triggered by health rule violations. These scripts can perform various actions, such as restarting services, freeing up memory, or notifying teams.

Use cases for remediation scripts include:
1. Restarting Services or Applications: automatically restart a failed service (e.g., web server or database).
2. Clearing Logs or Temporary Files: free up disk space by removing unnecessary files.
3. Scaling Infrastructure: trigger an API call to scale infrastructure up/down (e.g., AWS, Kubernetes).
4. Sending Custom Notifications: send notifications to external systems like Slack, PagerDuty, or email.
5. Custom Troubleshooting Steps: collect diagnostics like thread dumps, heap dumps, or system logs.

Step-by-step guide
The steps to configure a remediation script are documented here: https://docs.appdynamics.com/appd/24.x/25.4/en/splunk-appdynamics-essentials/alert-and-respond/actions/remediation-actions
Practical example use case: enable debug-level or trace-level logs on a health rule (HR) violation, for troubleshooting purposes.

1. Setting the health rule
Docs: https://docs.appdynamics.com/appd/24.x/25.4/en/splunk-appdynamics-essentials/alert-and-respond/configure-health-rules
   1. Select the HR type. Remediation actions are not available for servers. You can create and run a remediation action for a health rule with application, tier, or node as an affected entity. Ensure that you select the same entities when you define the Object Scope for the associated policy.
   2. Affects Nodes.
   3. Select specific Nodes.
   4. Select one or multiple nodes.
   5. Add conditions for the HR.
   6. Select a single metric or a metric expression (here we select the single metric value cpu|%Busy).

2. Setting the action
Docs: https://docs.appdynamics.com/appd/24.x/25.4/en/splunk-appdynamics-essentials/alert-and-respond/actions/remediation-actions#id-.RemediationActionsv24.1-RemediationExample
   1. Set the action name.
   2. The path to the trace.sh file.
   3. The path to the log files.
   4. Script timeout in minutes, set to 5.
   5. Set an email for approval (if required) and Save.

3. Setting the policies to trigger the action
Docs: https://docs.appdynamics.com/appd/24.x/25.4/en/splunk-appdynamics-essentials/alert-and-respond/policies/configure-policies
   1. Policy name.
   2. Enabled.
   3. Select HR violation event.
   4. Select specific Health Rules.
   5. Select the configured Health Rules.
   6. Select specific objects.
   7. From Tiers and Nodes.
   8. Select Nodes.
   9. Specific nodes.
   10. Select one or multiple nodes.
   11. Add the action to be executed.

On the agent's side.
Create the trace.sh script and place it in the /local-scripts/ directory:

#!/bin/bash

# Define the target file
TARGET_FILE="matest/conf/logging/log4j.xml"

# Backup the original file
cp "$TARGET_FILE" "${TARGET_FILE}.backup"

# Function to update the logging level
update_logging_level() {
    local level=$1
    echo "Updating logging level to '$level'..."

    # Use sed to change all loggers with level="info" to the desired level
    sed -i "s/level=\"info\"/level=\"$level\"/g" "$TARGET_FILE"

    if [ $? -eq 0 ]; then
        echo "Logging level successfully updated to '$level'."
    else
        echo "Failed to update logging level."
        exit 1
    fi
}

# Set the logging level to 'trace'
update_logging_level "trace"

# Wait for 10 minutes (600 seconds)
echo "Waiting for 10 minutes..."
sleep 600

# Revert the logging level back to 'info'
update_logging_level "info"

echo "Logging level reverted to 'info'."

When the action is triggered, the script changes the log level from info to trace and reverts the change after 10 minutes.

Prerequisites for Local Script Actions
The Machine Agent must be installed and running on the host on which the script executes. To see a list of installed Machine Agents for your application, click "View machines with machine-agent installed" in the bottom left corner of the remediation script configuration window.
To be able to run remediation scripts, the Machine Agent must be connected to a SaaS Controller via SSL. Remediation script execution is disabled if the Machine Agent connects to a SaaS Controller over an unsecured (non-SSL) HTTP connection.
The Machine Agent and the APM agent must be on the same host.
The Machine Agent OS user must have full permissions to the script file and the log files generated by the script and/or its associated child processes.
The script must be placed in <agent install directory>\local-scripts.
The script must be available on the host on which it executes.
Processes spawned from the scripts must be daemon processes.
Thank you for your response! Unfortunately, that did not work.
Hi Livehybrid, this pipeline configuration came from Splunk PS. I understand the risk of losing 1 GB of data if this forwarder goes down. Thank you! About the data: all of it is coming from upstream forwarders. It is raw data (firewall, DNS, ...) and structured data as JSON entries. The connectivity between the forwarder and the indexer is a VPN (350 Mbps throughput for each forwarder). One last point I forgot to write in my first post: we have a "Y" double output because we are moving from one platform to a new one, and for one month we have to send data to both platforms.
Hi isoutamo, in the Splunk docs there is this description of the input and output pipelines: "When you enable multiple pipeline sets on a forwarder, each pipeline handles both data input and output." Are you sure that only one pipeline is assigned to the input processor?
Hello @livehybrid , Thanks for your response. Below are the answers to your questions. Have you got your Assets and Identities lookups configured in ES? --> Yes, we have configured it and it is working as expected for single-value fields which contain assets and/or identities. It just doesn't work properly (or maybe this is the intended behavior) for fields which contain assets and/or identities as multivalue fields. Regarding how to actually implement Defender alerts, this really depends on your use-cases and what you are wanting to achieve. Do you want an incident for every alert in Defender, or based on thresholds etc.? --> I want to have the Defender incident in Splunk as a finding. And as you know, a Defender incident is a collection of alerts and hence it contains a collection of identities and assets in a single field. I just want to know how I can enrich these multivalue asset and identity fields (coming from Defender) using the Splunk ES identities lookup. Have you looked into the Splunk Enterprise Security Content Update app or Splunk Security Essentials? These contain a bunch of detections which you might be able to leverage. Defender Alerts are specifically listed as a datasource: https://research.splunk.com/sources/91738e9e-d112-41c9-b91b-e5868d8993d7/ --> I am not looking for a search as I already have it, and the one you mentioned is targeting advanced hunting data. I get the data from the standard Microsoft security add-on, which hooks into the Defender API to fetch Defender incidents. I am specifically looking for ideas and suggestions on how multivalue identity fields work in Splunk ES. Hope this answers the questions you were having. Thanks
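One approach to the multivalue question is to expand the multivalue field, run the identity lookup per value, and fold the results back into a single finding. This is only a sketch: the base search, the incident_id and identities field names, and the output field names are assumptions, and identity_lookup_expanded is the merged identity lookup that ES typically builds from the configured identity sources:

index=defender_placeholder sourcetype=defender_incident_placeholder
``` one row per identity value so the lookup can match each one ```
| mvexpand identities
| lookup identity_lookup_expanded identity AS identities OUTPUTNEW priority AS identity_priority, bunit AS identity_bunit
``` fold the rows back into one result per incident, keeping the enrichment as multivalue fields ```
| stats values(identities) AS identities values(identity_priority) AS identity_priority values(identity_bunit) AS identity_bunit by incident_id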
Hi @_olivier_  Increasing the queue size isn't a solution for this problem; all you are doing is introducing a risk that if that host fails you will lose 1GB of buffered/queued data (per pipeline!). This should only be used for smoothing out short bursts of data. As @isoutamo has mentioned, having so many pipelines might not help you here; I believe an input will only make use of a single pipeline (whereas a pipeline can be applied to multiple inputs). What kind of data are you sending to your UFs? I'm surprised that the parsing queue is filling, as I wouldn't expect the UF to do much parsing. There is a chance that it's struggling to get the data out to the indexers; what is the connectivity like between the UFs and the indexers? Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
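To see which queues are actually filling on the forwarder, a diagnostic search along these lines can help (a sketch; replace the host value and adjust the span and time range as needed):

index=_internal source=*metrics.log* group=queue host=<your_forwarder_host>
| eval fill_pct=round(current_size_kb/max_size_kb*100, 1)
| timechart span=5m max(fill_pct) BY name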
Looking back, it looks like I might have pasted in the wrong bit as I didn't add the "in <field>" part. How about this? EXTRACT-requestId = (?<field_requestId>[a-f0-9\-]{36}) in requestId  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
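To sanity-check the pattern interactively before relying on the props.conf extraction, a rex over the existing field should show whether the capture works (a sketch; the index and sourcetype are placeholders):

index=your_index sourcetype=your_sourcetype
| rex field=requestId "(?<field_requestId>[a-f0-9\-]{36})"
| table requestId field_requestId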
Hi @livehybrid  Sorry for the late response. Unfortunately, the answer you provided does not work: still no extraction and the same behaviour. Can you help me split the value and create the correct Splunk field? Thanks in advance, Matt
Hi @vikashumble  Have you got your Assets and Identities lookups configured in ES? Ensure you have enabled Assets and Identities automatic enrichment for the relevant sourcetypes (or all sourcetypes) - See https://docs.splunk.com/Documentation/ES/8.0.2/Admin/ManageAssetIdentityToEnrichNotables#:~:text=Select%20Save.-,Turn%20on%20asset%20and%20identity%20enrichment%20for%20all%20sourcetypes,-Turn%20on%20correlation  See https://docs.splunk.com/Documentation/ES/8.0.40/Admin/ManageIdentityLookupConfigPolicy for more info on how to add/manage identity lookups. Regarding how to actually implement Defender alerts, this really depends on your use-cases and what you are wanting to achieve. Do you want an incident for every alert in Defender, or based on thresholds etc.? Have you looked into the Splunk Enterprise Security Content Update app or Splunk Security Essentials? These contain a bunch of detections which you might be able to leverage. Defender Alerts are specifically listed as a datasource: https://research.splunk.com/sources/91738e9e-d112-41c9-b91b-e5868d8993d7/  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi Yes - this is correct. resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue is the path that the token can change on. The screenshot below shows me using it and getting back 34 events. If I run it with =*, I get 225. If I run it without the filter, I get 294. So the issue is that when I put in * I want to get 294, as there are other parts of the data that I need to look at.
Hi, I think we are close, and thanks for your efforts. A couple of points:
This is only a problem for | search resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue = "$Token_Mr_jobId$". This is the token that I am looking at.
Token_Mr_jobId can be a dynamic list, so more than the 2 I gave in the example. So the question is, how to pass the dynamic selection?

<input type="dropdown" token="Token_Mr_jobId">
  <label>JobId</label>
  <fieldForLabel>mr_jobId</fieldForLabel>
  <fieldForValue>mr_jobId</fieldForValue>
  <search>
    <query>host="$Host_Token$" index="murex_logs" sourcetype="Market_Risk_DT" "**mr_strategy**" "resourceSpans{}.resource.attributes{}.value.stringValue"="$TOKEN_Service_Namespace$"
| fields - resourceSpans{}.*
| spath path=resourceSpans{}
| mvexpand resourceSpans{}
| spath input=resourceSpans{} path=scopeSpans{}
| fields - resourceSpans{}
| mvexpand scopeSpans{}
| spath input=scopeSpans{} path=spans{}
| fields - scopeSpans{}
| mvexpand spans{}
| where match('spans{}', "mr_batchId")
| spath input=spans{} path=attributes{} output=attributes
| foreach mr_batchId mr_jobId
    [ eval &lt;&lt;FIELD&gt;&gt; = mvappend(&lt;&lt;FIELD&gt;&gt;, mvmap(attributes, if(spath(attributes, "key") != "&lt;&lt;FIELD&gt;&gt;", null(), spath(attributes, "value")))),
      &lt;&lt;FIELD&gt;&gt; = coalesce(spath(&lt;&lt;FIELD&gt;&gt;, "doubleValue"), spath(&lt;&lt;FIELD&gt;&gt;, "stringValue"))]
| dedup _time mr_batchId
``` the above is key logic. If there is any doubt, you can also use | dedup _time mr_batchId mr_batch_compute_cpu_time mr_batch_compute_time ```
| table _time mr_batchId mr_batch_compute_cpu_time mr_batch_compute_time mr_batch_load_cpu_time mr_batch_load_time mr_strategy mr_jobId
| table mr_jobId
| dedup mr_jobId</query>
    <earliest>$time_token.earliest$</earliest>
    <latest>$time_token.latest$</latest>
  </search>
  <change>
    <condition match="$Token_Mr_jobId$ != &quot;*&quot;">
      <set token="TOKEN_Strategy">ON</set>
      <set token="TOKEN_TRACEID">*</set>
    </condition>
    <condition match="$Token_Mr_jobId$ = &quot;*&quot;">
      <unset token="TOKEN_Strategy"></unset>
      <set token="TOKEN_TRACEID">*</set>
    </condition>
    <condition>
      <set token="TOKEN_TRACEID">*</set>
    </condition>
  </change>
  <choice value="*">*</choice>
  <default>*</default>
</input>

Also, to add: here are 2 data sets, one with resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue and one without.
With | search resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue = "CONSO_ABAQ | 31/03/2016 | 21" {"resourceSpans":[{"resource":{"attributes":[{"key":"telemetry.sdk.language","value":{"stringValue":"cpp"}},{"key":"service.name","value":{"stringValue":"MXMARKETRISK.ENGINE.MX"}},{"key":"service.namespace","value":{"stringValue":"MXMARKETRISK.SERVICE"}},{"key":"process.pid","value":{"intValue":"604252"}},{"key":"service.instance.id","value":{"stringValue":"003nhhkz"}},{"key":"telemetry.sdk.name","value":{"stringValue":"opentelemetry"}},{"key":"telemetry.sdk.version","value":{"stringValue":"1.12.0"}},{"key":"mx.env","value":{"stringValue":"dell945srv:13003"}}]},"scopeSpans":[{"scope":{"name":"murex::observability_otel_backend::tracing","version":"v1"},"spans":[{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"0392a58e2dfaaa4c","parentSpanId":"ebce3b37999c2ea1","name":"scenario_reaction","kind":1,"startTimeUnixNano":"1747148775503846985","endTimeUnixNano":"1747148782361058175","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"ebce3b37999c2ea1","parentSpanId":"825cbaedeb509365","name":"scenario_apply","kind":1,"startTimeUnixNano":"1747148775477950524","endTimeUnixNano":"1747148782362084106","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_scenario","value":{"stringValue":"10"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"04307bd9c64e20e8","parentSpanId":"825cbaedeb509365","name":"structured_position_evaluation","kind":1,"startTimeUnixNano":"1747148782362177082","endTimeUnixNano":"1747148782379867824","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_scenario","value":{"stringValue":"10"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"d2abbf63ac87acb4","parentSpanId":"825cbaedeb509365","name":"position_evaluation","kind":1,"startTimeUnixNano":"1747148782380422079","endTimeUnixNano":"1747148782509071609","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_scenario","value":{"stringValue":"10"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"cc80374715a2e755","parentSpanId":"431a2a6341ac4120","name":"scenario_reaction","kind":1,"startTimeUnixNano":"1747148782510724546","endTimeUnixNano":"1747148782599301641","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"431a2a6341ac4120","parentSpanId":"825cbaedeb509365","name":"scenario_restore","kind":1,"startTimeUnixNano":"1747148782509167483","endTimeUnixNano":"1747148782605479850","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_scenario","value":{"stringValue":"10"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"825cbaedeb509365","parentSpanId":"8e2a92d0a40f203b","name":"scenario_all_apply","kind":1,"startTimeUnixNano":"1747148711449995255","endTimeUnixNano":"1747148782623591981","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_scenario_nb","value":{"stringValue":"10"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"8e2a92d0a40f203b","parentSpanId":"8e5190bbe86bdaff","name":"fullreval_task","kind":1,"startTimeUnixNano":"1747148638253233246","endTimeUnixNano":"1747148782639890403","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_batchId","v
alue":{"stringValue":"40"}},{"key":"mr_batchType","value":{"stringValue":"Full Revaluation"}},{"key":"mr_bucketName","value":{"stringValue":""}},{"key":"mr_jobDomain","value":{"stringValue":"Market Risk"}},{"key":"mr_jobId","value":{"stringValue":"CONSO_ABAQ | 31/03/2016 | 21"}},{"key":"mr_strategy","value":{"stringValue":"typo_Callable Bond"}},{"key":"mr_uuid","value":{"stringValue":"7dcdf03d-9dd0-42f6-b0f4-e2508283ff44"}},{"key":"mrb_batch_affinity","value":{"stringValue":"CONSO_ABAQ_run_Batch|CONSO_ABAQ|2016/03/31|21_FullReval0_00040"}},{"key":"mr_batch_compute_cpu_time","value":{"doubleValue":71.000477}},{"key":"mr_batch_compute_time","value":{"doubleValue":71.351}},{"key":"mr_batch_load_cpu_time","value":{"doubleValue":61.569109000000005}},{"key":"mr_batch_load_time","value":{"doubleValue":71.597}},{"key":"mr_batch_status","value":{"stringValue":"WARNING"}},{"key":"mr_batch_total_cpu_time","value":{"doubleValue":133.924506}},{"key":"mr_batch_total_time","value":{"doubleValue":144.375}}],"status":{}}]}]}]} With | search resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue = * (But I understand your great idea, that we will just put in "|search *") {"resourceSpans":[{"resource":{"attributes":[{"key":"process.pid","value":{"intValue":"600146"}},{"key":"service.instance.id","value":{"stringValue":"003nhhk3"}},{"key":"service.name","value":{"stringValue":"LAUNCHERMXMARKETRISK_MPC"}},{"key":"service.namespace","value":{"stringValue":"LAUNCHER"}},{"key":"telemetry.sdk.language","value":{"stringValue":"java"}},{"key":"telemetry.sdk.name","value":{"stringValue":"opentelemetry"}},{"key":"telemetry.sdk.version","value":{"stringValue":"1.34.0"}},{"key":"mx.env","value":{"stringValue":"dell945srv:13003"}}]},"scopeSpans":[{"scope":{"name":"mx-traces-api","version":"1.0.0"},"spans":[{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"cbf88ed07b403b48","parentSpanId":"3cfc7d85786b676b","name":"createSubmission","kind":1,"startTimeUnixNano":"1747152946314481406","endTimeUnixNano":"1747152946314775297","status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"8ff7fabcab4b12d0","parentSpanId":"3cfc7d85786b676b","name":"createSubmission","kind":1,"startTimeUnixNano":"1747152946353054099","endTimeUnixNano":"1747152946353187644","status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"4b14e49df1e1ffd8","parentSpanId":"3cfc7d85786b676b","name":"createSubmission","kind":1,"startTimeUnixNano":"1747152946474942393","endTimeUnixNano":"1747152946475042609","status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"169b89bf118931d8","parentSpanId":"3cfc7d85786b676b","name":"createSubmission","kind":1,"startTimeUnixNano":"1747152946488875310","endTimeUnixNano":"1747152946488933120","status":{}}]}]}]}  
Hi @zapping575  Nothing in particular is jumping out at me. In your outputs.conf for the UF, sslRootCAPath is deprecated; you would instead use 'server.conf/[sslConfig]/sslRootCAPath', but you have set this too. I would try the following:

Check the certs can be read okay:
openssl x509 -in /path/to/certificate.crt -text -noout

Check which CAs the indexer will accept certs from (accepted CAs are displayed under the "Acceptable client certificate CA names" heading):
openssl s_client -connect <indexer_host>:9997 -showcerts -tls1_2

Try to connect to your indexer from the UF using the certs with openssl:
openssl s_client \
  -connect <indexer_host>:9997 \
  -CAfile /mnt/certs/cert.pem \
  -cert /mnt/certs/cert0.pem \
  -key /mnt/certs/cert0.pem \
  -tls1_2 \
  -state -showcerts

Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @braxton839  The config you have looks good, but I think the issue here is that the sourcetype you are referencing (juniper:junos:firewall:structured) is actually being set during the parsing process, by the transform "force_sourcetype_for_junos_firewall_structured" overwriting the original sourcetype, and therefore it wouldn't be picked up by your custom props/transforms. Instead you could try the following:

== props.conf ==
[source::....junos_fw]
TRANSFORMS-null = setnull

[juniper]
TRANSFORMS-null = setnull

== transforms.conf ==
# Filter juniper teardown logs to nullqueue
[setnull]
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue

Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
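Once the filter is in place, a quick way to confirm the teardown events are actually being dropped is to watch the count of RT_FLOW_SESSION_CLOSE events fall to zero after the change (a sketch; the index name is a placeholder):

index=your_juniper_index sourcetype=juniper:junos:firewall:structured "RT_FLOW_SESSION_CLOSE"
| timechart span=1h count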
You can have only one sslRootCAPath on a node. Just put all the different CA certs into that one file.
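For example, a minimal sketch of combining the CA certificates into a single PEM file (file names and paths are placeholders):

# concatenate all CA certificates (PEM format) into one file
cat issuing_ca.pem intermediate_ca.pem root_ca.pem > /opt/splunkforwarder/etc/auth/mycerts/combined_ca.pem
# then reference that single file in server.conf:
#   [sslConfig]
#   sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/combined_ca.pem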