All Posts


Hi Livehybrid, this pipeline configuration came from Splunk PS. I understand the risk of losing 1 GB of data if this forwarder goes down! Thank you! About the data: all of it comes from upstream forwarders. It is raw data (firewall, DNS, ...) plus structured data as JSON entries. The connectivity between the forwarders and the indexers is a VPN (350 Mbps of throughput for each forwarder). One last point I forgot to mention in my first post: we are in a "Y" double-output setup because we are moving from one platform to a new one, and for one month we have to send data to both platforms.
Hi isoutamo, in the Splunk docs this is the description of the input and output pipelines: "When you enable multiple pipeline sets on a forwarder, each pipeline handles both data input and output." Are you sure that only one pipeline is assigned to the input processor?
Hello @livehybrid, thanks for your response. Below are the answers to your questions.
Have you got your Assets and Identities lookups configured in ES? --> Yes, we have configured them and they work as expected for single-value fields that contain assets and/or identities. They just don't work properly (or maybe this is the intended behavior) for fields that contain assets and/or identities as multivalue fields.
Regarding how to actually implement Defender alerts, this really depends on your use-cases and what you are wanting to achieve. Do you want an incident for every alert in Defender, or based on thresholds etc? --> I want to have a Defender incident in Splunk as a finding. As you know, a Defender incident is a collection of alerts, and hence it contains a collection of identities and assets in a single field. I just want to know how I can enrich these multivalue asset and identity fields (coming from Defender) using the Splunk ES identities lookup.
Have you looked into the Splunk Enterprise Security Content Update app or Splunk Security Essentials? These contain a bunch of detections which you might be able to leverage. Defender Alerts are specifically listed as a datasource: https://research.splunk.com/sources/91738e9e-d112-41c9-b91b-e5868d8993d7/ --> I am not looking for a search, as I already have one, and the one you mentioned targets advanced hunting data. I get the data from the standard Microsoft security add-on, which you can hook into the Defender API to fetch Defender incidents. I am specifically looking for ideas and suggestions on how multivalue identity fields work in Splunk ES.
Hope this answers the questions you were having.
Thanks
Hi @_olivier_
Increasing the queue size isn't a solution to this problem; all you are doing is introducing a risk that, if that host fails, you will lose 1GB of buffered/queued data (per pipeline!). It should only be used for smoothing out short bursts of data. As @isoutamo has mentioned, having so many pipelines might not help you here: I believe an input will only make use of a single pipeline (whereas a pipeline can be applied to multiple inputs).
What kind of data are you sending to your UFs? I'm surprised that the parsing queue is filling, as I wouldn't expect the UF to do much parsing. There is a chance that it's struggling to get the data out to the indexers - what is the connectivity like between the UFs and the indexers?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
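Following up on the parsing-queue point above, a minimal sketch (assuming the standard metrics.log queue instrumentation; the host value is a placeholder for one of your intermediate forwarders) for checking how full the parsing queue actually gets over time:

index=_internal host=<your_intermediate_uf> source=*metrics.log* group=queue name=parsingqueue
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) AS parsing_queue_fill_pct

If the output queue (name=tcpout*) fills at the same time as the parsing queue, that points at downstream/indexer connectivity rather than parsing load.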
Looking back, it looks like I might have pasted in the wrong bit, as I didn't add the "in <field>". How about this?
EXTRACT-requestId = (?<field_requestId>[a-f0-9\-]{36}) in requestId
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @livehybrid, sorry for the late response. Unfortunately, the answer you provided does not work: still no extraction and the same behaviour. Can you help me split and create the correct Splunk field? Thanks in advance, Matt
Hi @vikashumble
Have you got your Assets and Identities lookups configured in ES? Ensure you have enabled Assets and Identities automatic enrichment for the relevant sourcetypes (or all sourcetypes) - see https://docs.splunk.com/Documentation/ES/8.0.2/Admin/ManageAssetIdentityToEnrichNotables#:~:text=Select%20Save.-,Turn%20on%20asset%20and%20identity%20enrichment%20for%20all%20sourcetypes,-Turn%20on%20correlation
See https://docs.splunk.com/Documentation/ES/8.0.40/Admin/ManageIdentityLookupConfigPolicy for more info on how to add/manage identity lookups.
Regarding how to actually implement Defender alerts, this really depends on your use-cases and what you are wanting to achieve. Do you want an incident for every alert in Defender, or based on thresholds etc?
Have you looked into the Splunk Enterprise Security Content Update app or Splunk Security Essentials? These contain a bunch of detections which you might be able to leverage. Defender Alerts are specifically listed as a datasource: https://research.splunk.com/sources/91738e9e-d112-41c9-b91b-e5868d8993d7/
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
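If the sticking point is specifically multivalue identity fields, one possible approach is to expand the field before the lookup and re-aggregate afterwards. This is only a sketch: it assumes the standard ES identity_lookup_expanded lookup and an illustrative field name "identities" coming from the Defender incident search, so check the exact lookup and field names in your ES environment before using it.

<your Defender incident search>
| mvexpand identities
| lookup identity_lookup_expanded identity AS identities OUTPUTNEW priority AS identity_priority bunit AS identity_bunit
| stats values(identities) AS identities values(identity_priority) AS identity_priority values(identity_bunit) AS identity_bunit BY incidentId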
Hi, yes - this is correct. resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue is the path that the token can change on. The screenshot below shows me using it and getting back 34 events. If I run it with =*, I get 225. If I run it without the filter, I get 294. So the issue is that when I put in * I want to get 294, as there are other parts of the data that I need to look at.
Hi, I think we are close, and thanks for your efforts. A couple of points:
- This is only a problem for | search resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue = "$Token_Mr_jobId$". This is the token that I am looking at.
- Token_Mr_jobId can be a dynamic list, so more than the 2 I gave in the example.
So the question is, how to pass the dynamic selection?

<input type="dropdown" token="Token_Mr_jobId">
  <label>JobId</label>
  <fieldForLabel>mr_jobId</fieldForLabel>
  <fieldForValue>mr_jobId</fieldForValue>
  <search>
    <query>host="$Host_Token$" index="murex_logs" sourcetype="Market_Risk_DT" "**mr_strategy**" "resourceSpans{}.resource.attributes{}.value.stringValue"="$TOKEN_Service_Namespace$"
| fields - resourceSpans{}.*
| spath path=resourceSpans{}
| mvexpand resourceSpans{}
| spath input=resourceSpans{} path=scopeSpans{}
| fields - resourceSpans{}
| mvexpand scopeSpans{}
| spath input=scopeSpans{} path=spans{}
| fields - scopeSpans{}
| mvexpand spans{}
| where match('spans{}', "mr_batchId")
| spath input=spans{} path=attributes{} output=attributes
| foreach mr_batchId mr_jobId
    [ eval &lt;&lt;FIELD&gt;&gt; = mvappend(&lt;&lt;FIELD&gt;&gt;, mvmap(attributes, if(spath(attributes, "key") != "&lt;&lt;FIELD&gt;&gt;", null(), spath(attributes, "value")))),
      &lt;&lt;FIELD&gt;&gt; = coalesce(spath(&lt;&lt;FIELD&gt;&gt;, "doubleValue"), spath(&lt;&lt;FIELD&gt;&gt;, "stringValue"))]
| dedup _time mr_batchId
``` the above is key logic. If there is any doubt, you can also use | dedup _time mr_batchId mr_batch_compute_cpu_time mr_batch_compute_time ```
| table _time mr_batchId mr_batch_compute_cpu_time mr_batch_compute_time mr_batch_load_cpu_time mr_batch_load_time mr_strategy mr_jobId
| table mr_jobId
| dedup mr_jobId</query>
    <earliest>$time_token.earliest$</earliest>
    <latest>$time_token.latest$</latest>
  </search>
  <change>
    <condition match="$Token_Mr_jobId$ != &quot;*&quot;">
      <set token="TOKEN_Strategy">ON</set>
      <set token="TOKEN_TRACEID">*</set>
    </condition>
    <condition match="$Token_Mr_jobId$ = &quot;*&quot;">
      <unset token="TOKEN_Strategy"></unset>
      <set token="TOKEN_TRACEID">*</set>
    </condition>
    <condition>
      <set token="TOKEN_TRACEID">*</set>
    </condition>
  </change>
  <choice value="*">*</choice>
  <default>*</default>
</input>

Also, to add - here are 2 data sets: one with resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue and one without.
With | search resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue = "CONSO_ABAQ | 31/03/2016 | 21" {"resourceSpans":[{"resource":{"attributes":[{"key":"telemetry.sdk.language","value":{"stringValue":"cpp"}},{"key":"service.name","value":{"stringValue":"MXMARKETRISK.ENGINE.MX"}},{"key":"service.namespace","value":{"stringValue":"MXMARKETRISK.SERVICE"}},{"key":"process.pid","value":{"intValue":"604252"}},{"key":"service.instance.id","value":{"stringValue":"003nhhkz"}},{"key":"telemetry.sdk.name","value":{"stringValue":"opentelemetry"}},{"key":"telemetry.sdk.version","value":{"stringValue":"1.12.0"}},{"key":"mx.env","value":{"stringValue":"dell945srv:13003"}}]},"scopeSpans":[{"scope":{"name":"murex::observability_otel_backend::tracing","version":"v1"},"spans":[{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"0392a58e2dfaaa4c","parentSpanId":"ebce3b37999c2ea1","name":"scenario_reaction","kind":1,"startTimeUnixNano":"1747148775503846985","endTimeUnixNano":"1747148782361058175","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"ebce3b37999c2ea1","parentSpanId":"825cbaedeb509365","name":"scenario_apply","kind":1,"startTimeUnixNano":"1747148775477950524","endTimeUnixNano":"1747148782362084106","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_scenario","value":{"stringValue":"10"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"04307bd9c64e20e8","parentSpanId":"825cbaedeb509365","name":"structured_position_evaluation","kind":1,"startTimeUnixNano":"1747148782362177082","endTimeUnixNano":"1747148782379867824","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_scenario","value":{"stringValue":"10"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"d2abbf63ac87acb4","parentSpanId":"825cbaedeb509365","name":"position_evaluation","kind":1,"startTimeUnixNano":"1747148782380422079","endTimeUnixNano":"1747148782509071609","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_scenario","value":{"stringValue":"10"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"cc80374715a2e755","parentSpanId":"431a2a6341ac4120","name":"scenario_reaction","kind":1,"startTimeUnixNano":"1747148782510724546","endTimeUnixNano":"1747148782599301641","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"431a2a6341ac4120","parentSpanId":"825cbaedeb509365","name":"scenario_restore","kind":1,"startTimeUnixNano":"1747148782509167483","endTimeUnixNano":"1747148782605479850","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_scenario","value":{"stringValue":"10"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"825cbaedeb509365","parentSpanId":"8e2a92d0a40f203b","name":"scenario_all_apply","kind":1,"startTimeUnixNano":"1747148711449995255","endTimeUnixNano":"1747148782623591981","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_scenario_nb","value":{"stringValue":"10"}}],"status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"8e2a92d0a40f203b","parentSpanId":"8e5190bbe86bdaff","name":"fullreval_task","kind":1,"startTimeUnixNano":"1747148638253233246","endTimeUnixNano":"1747148782639890403","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_batchId","v
alue":{"stringValue":"40"}},{"key":"mr_batchType","value":{"stringValue":"Full Revaluation"}},{"key":"mr_bucketName","value":{"stringValue":""}},{"key":"mr_jobDomain","value":{"stringValue":"Market Risk"}},{"key":"mr_jobId","value":{"stringValue":"CONSO_ABAQ | 31/03/2016 | 21"}},{"key":"mr_strategy","value":{"stringValue":"typo_Callable Bond"}},{"key":"mr_uuid","value":{"stringValue":"7dcdf03d-9dd0-42f6-b0f4-e2508283ff44"}},{"key":"mrb_batch_affinity","value":{"stringValue":"CONSO_ABAQ_run_Batch|CONSO_ABAQ|2016/03/31|21_FullReval0_00040"}},{"key":"mr_batch_compute_cpu_time","value":{"doubleValue":71.000477}},{"key":"mr_batch_compute_time","value":{"doubleValue":71.351}},{"key":"mr_batch_load_cpu_time","value":{"doubleValue":61.569109000000005}},{"key":"mr_batch_load_time","value":{"doubleValue":71.597}},{"key":"mr_batch_status","value":{"stringValue":"WARNING"}},{"key":"mr_batch_total_cpu_time","value":{"doubleValue":133.924506}},{"key":"mr_batch_total_time","value":{"doubleValue":144.375}}],"status":{}}]}]}]} With | search resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue = * (But I understand your great idea, that we will just put in "|search *") {"resourceSpans":[{"resource":{"attributes":[{"key":"process.pid","value":{"intValue":"600146"}},{"key":"service.instance.id","value":{"stringValue":"003nhhk3"}},{"key":"service.name","value":{"stringValue":"LAUNCHERMXMARKETRISK_MPC"}},{"key":"service.namespace","value":{"stringValue":"LAUNCHER"}},{"key":"telemetry.sdk.language","value":{"stringValue":"java"}},{"key":"telemetry.sdk.name","value":{"stringValue":"opentelemetry"}},{"key":"telemetry.sdk.version","value":{"stringValue":"1.34.0"}},{"key":"mx.env","value":{"stringValue":"dell945srv:13003"}}]},"scopeSpans":[{"scope":{"name":"mx-traces-api","version":"1.0.0"},"spans":[{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"cbf88ed07b403b48","parentSpanId":"3cfc7d85786b676b","name":"createSubmission","kind":1,"startTimeUnixNano":"1747152946314481406","endTimeUnixNano":"1747152946314775297","status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"8ff7fabcab4b12d0","parentSpanId":"3cfc7d85786b676b","name":"createSubmission","kind":1,"startTimeUnixNano":"1747152946353054099","endTimeUnixNano":"1747152946353187644","status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"4b14e49df1e1ffd8","parentSpanId":"3cfc7d85786b676b","name":"createSubmission","kind":1,"startTimeUnixNano":"1747152946474942393","endTimeUnixNano":"1747152946475042609","status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"169b89bf118931d8","parentSpanId":"3cfc7d85786b676b","name":"createSubmission","kind":1,"startTimeUnixNano":"1747152946488875310","endTimeUnixNano":"1747152946488933120","status":{}}]}]}]}  
Hi @zapping575
Nothing in particular is jumping out at me. In your outputs.conf on the UF, sslRootCAPath is deprecated; you would instead use 'server.conf/[sslConfig]/sslRootCAPath', but you have set this too. I would try the following:
Check the certs can be read okay:
openssl x509 -in /path/to/certificate.crt -text -noout
Check which CAs the indexer will accept certs from (accepted CAs are displayed under the "Acceptable client certificate CA names" heading):
openssl s_client -connect <indexer_host>:9997 -showcerts -tls1_2
Try to connect to your indexer from the UF using the certs with openssl:
openssl s_client \
  -connect <indexer_host>:9997 \
  -CAfile /mnt/certs/cert.pem \
  -cert /mnt/certs/cert0.pem \
  -key /mnt/certs/cert0.pem \
  -tls1_2 \
  -state -showcerts
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @braxton839
The config you have looks good, but I think the issue here is that the sourcetype you are referencing (juniper:junos:firewall:structured) is actually being set during the parsing process and therefore wouldn't be picked up by your custom props/transforms. The existing sourcetype is being overwritten by the transform "force_sourcetype_for_junos_firewall_structured". Instead you could try the following:
== props.conf ==
[source::....junos_fw]
TRANSFORMS-null = setnull

[juniper]
TRANSFORMS-null = setnull

== transforms.conf ==
# Filter juniper teardown logs to nullqueue
[setnull]
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
You can have only one sslRootCAPath on your node. Just put all the different CA certs into this one file.
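For example, a minimal sketch of combining an issuing CA and a root CA into a single bundle and pointing server.conf at it (the file names here are illustrative only; use your own certificate files):

# concatenate the issuing CA and the root CA into one PEM bundle
cat issuing_ca.pem root_ca.pem > /opt/splunk/etc/auth/mycerts/ca_bundle.pem

# server.conf
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/ca_bundle.pem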
Even if you have defined several pipelines, there is only one input pipeline for the input processors. See https://docs.splunk.com/Documentation/Splunk/9.4.2/Indexer/Pipelinesets - that could lead to the situation you described.
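For reference, a minimal sketch of where the pipeline set count is configured (the standard parallelIngestionPipelines setting in server.conf; the value 4 simply matches the count mentioned in this thread):

# server.conf on the forwarder
[general]
parallelIngestionPipelines = 4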
Hi @Ramachandran
Are you running ES and SOAR on-premises? Please could you confirm the versions you are using.
You mentioned that connectivity is in place between the installations - please could you verify that the correct "Allowed IP" was used when setting up your service user (https://docs.splunk.com/Documentation/SOARonprem/latest/Admin/Users#:~:text=Create%20an%20automation%20user%20in%20Splunk%20SOAR%20(On%2Dpremises))
Failing this, you might want to look at temporarily disabling SSL validation to rule out SSL issues (https://docs.splunk.com/Documentation/PhantomApp/4.0.10/Install/ConfigureCerts#:~:text=Admin%20Manual.-,Manage%20HTTPS%20certificate%20validation%20using%20the%20REST%20API,-In%20Splunk%20Enterprise) - Note: even though you see a 500 error, this isn't necessarily a 500 code from SOAR (which would imply SSL is fine); it comes from the API endpoint in Splunk which reaches out to SOAR.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
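As a quick sanity check on connectivity and the automation user's token, a sketch of calling the SOAR REST API directly from the ES search head (the host name and token are placeholders, and this assumes the standard /rest/version endpoint on SOAR):

# run from the ES search head; -k skips certificate validation for testing only
curl -k -H "ph-auth-token: <automation_user_token>" https://<soar_host>/rest/version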
Hi splunkers.
I would like to understand a tricky point. I'm using a distributed environment with 2 intermediate universal forwarders. They have to deal with 1.2 TB of data per day.
1 - Strangely, these UFs have their parsing queues in use (top 1 of the queue usage!) - and these forwarders are UFs!
2 - These UFs have 4 pipelines. If one of these pipelines' parsing queue is full, the entire UF refuses connections from upstream forwarders.
The queue sizes were increased to 1 GB (input / parsing / output ...), but sometimes the situation comes back.
Have you got any idea what could be happening?
I recently started using the HEC with TLS on my standalone testing instance and now I am seeing some behavior that I cannot make sense of. I assume that it is related to the fact that I configured both the TCP input and the HEC input to use different certificates. The HEC input is working fine, but when a UF tries to connect to the TCP input, I get this error:

05-22-2025 07:39:18.469 +0000 ERROR TcpInputProc [2339416 FwdDataReceiverThread] - Error encountered for connection from src=REDACTED:31261. error:14089086:SSL routines:ssl3_get_client_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.
05-22-2025 07:39:18.555 +0000 ERROR X509Verify [2339416 FwdDataReceiverThread] - Client X509 certificate (CN=REDACTED,CN=A,OU=B,DC=C,DC=D,DC=E) failed validation; error=19, reason="self signed certificate in certificate chain"
05-22-2025 07:39:18.555 +0000 WARN SSLCommon [2339416 FwdDataReceiverThread] - Received fatal SSL3 alert. ssl_state='error', alert_description='unknown CA'.
05-22-2025 07:39:18.555 +0000 ERROR TcpInputProc [2339416 FwdDataReceiverThread] - Error encountered for connection from src=10.253.192.20:32991. error:14089086:SSL routines:ssl3_get_client_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.

On the UF, I can see the following error message:

05-22-2025 07:39:17.953 +0000 WARN SSLCommon [1074 TcpOutEloop] - Received fatal SSL3 alert. ssl_state='SSLv3 read server session ticket A', alert_description='unknown CA'.
05-22-2025 07:39:17.953 +0000 ERROR TcpOutputFd [1074 TcpOutEloop] - Connection to host=REDACTED:9997 failed

Below are my config files. I appreciate any pointers as to what I did wrong.
Note: All files which store certificates are in the "usual" order:
For clientCert and serverCert: first the certificate, then the private key
For sslRootCAPath: first the issuing CA, then the root CA

Standalone/Indexer:

server.conf
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/cert.pem

inputs.conf
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/cert0.pem
sslPassword = REDACTED
requireClientCert = true
sslVersions = tls1.2

[http]
disabled = 0
enableSSL = 1
serverCert = /opt/splunk/etc/auth/mycerts/cert1.pem
sslPassword = REDACTED

[http://whatthehec]
disabled = 0
token = REDACTED

UF:

server.conf
[sslConfig]
serverCert = /mnt/certs/cert0.pem
sslPassword = REDACTED
sslRootCAPath = /mnt/certs/cert.pem
sslVersions = tls1.2

outputs.conf
[tcpout]
defaultGroup = def
forwardedindex.2.whitelist = (_audit|_introspection|_internal)

[tcpout:def]
useACK = true
server = server:9997
autoLBFrequency = 180
forceTimebasedAutoLB = false
autoLBVolume = 5000000
maxQueueSize = 100MB
connectionTTL = 300
heartbeatFrequency = 350
writeTimeout = 300
sslVersions = tls1.2
clientCert = /mnt/certs/cert0.pem
sslRootCAPath = /mnt/certs/cert.pem
sslPassword = REDACTED
sslVerifyServerCert = true
Hi @jagan_jijo
Please could you provide a little more information on your use-cases here and what kind of data you are looking to extract from Splunk?
You can download data using the search REST API - check out the following page on how to execute searches using the REST API: https://docs.splunk.com/Documentation/Splunk/9.4.2/RESTTUT/RESTsearches
Regarding pulling data on specific incidents, are you using IT Service Intelligence (ITSI) or Enterprise Security (ES), which has your incidents collated? There are specific endpoints for these premium apps to provide things like incidents/notable events etc., depending on your use-case.
Regarding webhooks, the native webhook sending is quite limited (see https://docs.splunk.com/Documentation/Splunk/9.4.0/Alert/Webhooks) - I'd usually recommend looking at Better Webhooks on SplunkBase. Is there a particular problem you're having with that app?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
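As a quick illustration of the REST approach, a minimal sketch using curl against the search export endpoint (the host, credentials and search string are placeholders - adjust to your environment):

# stream search results back as JSON via the export endpoint
curl -k -u <user>:<password> https://<splunk_host>:8089/services/search/jobs/export \
  -d search="search index=_internal | head 5" \
  -d output_mode=json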
Greetings,
I have been reading through documentation and responses on here about filtering out specific events at the heavy forwarder (trying to reduce our daily ingest). In the local folder of our Splunk_TA_juniper app I have created a props.conf and a transforms.conf and set the owner/permissions to match the other .conf files.

props.conf:
# Filter teardown events from Juniper syslogs into the nullqueue
[juniper:junos:firewall:structured]
TRANSFORMS-null = setnull

transforms.conf:
# Filter juniper teardown logs to nullqueue
[setnull]
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue

I restarted the Splunk service... but I'm still getting these events. Not sure what I did wrong. I pulled some raw event text and tested the regex in PowerShell (it worked with -match). Any help would be greatly appreciated!
Hi @Harikiranjammul
You could use the following SPL to achieve this:
| makeresults
| eval ip="0.0.0.11,0.0.0.12"
| makemv ip delim=","
| mvexpand ip
``` End of sample data ```
| lookup your_lookup_file ipaddress as ip OUTPUTNEW ipaddress as found_ip
| eval ip=if(isnull(found_ip), "0.0.0.0", ip)
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hello All,
I have a question which I am not able to find an answer for, hence I'm looking for ideas, suggestions etc. from fellow community members. We use Splunk Enterprise Security in our organization, and I am trying to build a correlation search for generating a finding (or intermediate finding) in Mission Control based on Microsoft Defender incidents. As you probably know, a Microsoft Defender incident is a combination of different alerts and it can include multiple entities. I have a search which gives me all the details, but I am struggling to auto-populate the identity data from the Splunk identities lookup. Sample data below. My questions are:
How can I enrich the identity data in the incident with Splunk ES identities data? Is this not the right way to create this search? My objective is to have a finding in Splunk ES if Defender generates any incident.
Assuming this works somehow, how can I create the drilldown searches so that the SOC has the ability to see supporting data (such as sign-in logs for a user, say user1), given this is a multivalue field?
Should I use Defender alerts (as opposed to incidents) to create an intermediate finding and then let Splunk run the risk-based rules to trigger a finding based on this? Alerts can have multiple entities (users, IPs, devices etc.) as well, so I might end up with similar issues again.
Any other suggestions which others might have implemented?

incidentId: 123456
incidentUri: https://security.microsoft.com/incidents/123456?tid=XXXXXXX
incidentName: Email reported by user as malware or phish involving multiple users
alertId(s): 1a2b3c4d
alerts_count: 1
category: InitialAccess
createdTime: 2025-05-08T09:43:20.95Z
identities: ip1 user1 user2 user3 mailbox1
identities_count: 6
serviceSource(s): MicrosoftDefenderForOffice

Thanks