All Posts
Hi @jagan_jijo

Both ES and ITSI have their own use cases and strengths, and they can coexist in the same Splunk deployment, but ultimately ITSI is used for IT Operations monitoring (e.g. alerting based on availability of services, Key Performance Indicators, etc.), whereas ES is all about security monitoring. If you're looking at pulling ES incidents, there is an additional set of APIs you can make use of (see https://docs.splunk.com/Documentation/ES/8.0.40/API/AboutSplunkESAPI). What is the system you are looking to integrate with here?

Better Webhooks is just a free app which can be installed within your Splunk environment, just like a custom webhook app would be; however, there isn't anything stopping you from building your own Splunk custom alert action app to do the same thing if you don't want to use the community-built app. https://dev.splunk.com/enterprise/docs/devtools/customalertactions/ is a good starting point for building a custom alert action, and it includes a Slack alert example that you might be able to modify. Alternatively, you could download the Better Webhooks app to see how it is coded and build as required.

Just for clarity, the Better Webhooks app would be as "native" within Splunk as a custom webhook app: both tie in to the alert action framework, and neither is something you have to host separately.

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
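To make the custom alert action suggestion above concrete, here is a minimal sketch of the script side of such an app. It assumes the documented behaviour that Splunk invokes the script with --execute and passes a JSON payload on stdin; the "url" parameter (and therefore target_url) is hypothetical and would come from your own alert_actions.conf, so treat this as a sketch rather than a definitive implementation:

```python
import json
import sys

def build_webhook_body(payload):
    """Pick the interesting fields out of the alert action payload."""
    config = payload.get("configuration", {})  # user-supplied params from alert_actions.conf
    return {
        "search_name": payload.get("search_name"),    # name of the saved search that fired
        "results_link": payload.get("results_link"),  # link back to the results in Splunk
        "result": payload.get("result", {}),          # the triggering result row
        "target_url": config.get("url"),              # hypothetical webhook URL parameter
    }

if __name__ == "__main__":
    # Splunk runs the script as: script.py --execute, with the payload on stdin
    if len(sys.argv) > 1 and sys.argv[1] == "--execute":
        body = build_webhook_body(json.load(sys.stdin))
        # POST `body` to the webhook endpoint here, e.g. with urllib.request
        print(json.dumps(body))
```

Packaged inside an app with the matching alert_actions.conf stanza, this is the shape of script the dev.splunk.com walkthrough above builds out in full.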
Hi @zksvc

Further to my other reply, have you been through this process of configuring a service account between UBA and ES? https://docs.splunk.com/Documentation/UBA/5.4.2/Integration/SendIRdatatoES
Hi @zksvc

It might be worth reviewing the _internal logs in Splunk to see which endpoint is throwing the Unauthorized error. I would have thought it would be HEC, but you said you have already checked that?

It might be worth double-checking with a curl command such as:

curl https://<splunkServer>:8088/services/collector/health?token=<tokenFrom_uba-site.properties>

If you run that from your UBA host, it will validate that the host can reach HEC with the token. You should get:

{"text":"HEC is healthy","code":17}

Does anything appear in _internal?

index=_internal status=401 OR "Unauthorized"
@livehybrid Thanks for the response! We're fairly new to Splunk and currently exploring Enterprise Security (ES) as our primary platform. What do you recommend as the industry standard: ES or ITSI?

We've already reviewed the REST API documentation for retrieving fired alerts, search jobs, and events, and just wanted to double-check that this is the recommended approach for pulling incident data during specific timeframes. Our main goal is to retrieve all incidents that occurred within a defined window and then collect the associated raw events for those incidents. We're also exploring the use of HTTP notifications to reduce the number of API queries, ideally by triggering event collection based on incoming alerts.

Regarding Better Webhooks, we've looked into it and it seems like a great solution. However, we're hoping to build something similar natively within Splunk. Do you have any recommendations on how to approach building a custom webhook app or alert action? Also, is there a way to test such an app effectively within Splunk?
Hi everyone,

I encountered an error in UBA, specifically related to the 'caspida-outputconnector'. While the issue can be resolved by restarting UBA, I would like to understand the root cause. I have already reviewed the configuration file at '/etc/caspida/local/conf/uba-site.properties' and confirmed that everything appears to be correct. I have also tested the HEC token, and it is functioning properly. Does anyone have experience or guidance on how to troubleshoot and identify the root cause of this issue?
Let me simplify your problem statement by eliminating the JSON path from the equation. The requirements are simply these:

- In a dashboard, there is a dropdown input token, say SomeToken.
- SomeToken has a fixed, predefined entry with label "All". The rest of the choices for SomeToken are populated by a search; I will call this search <tokenSearch>.
- Events in the dashboard panel may or may not contain a field of interest named SomeField.
- If the user selects "All" (the predefined, fixed value), all events should be returned regardless of SomeField.
- If the user selects any other value populated by <tokenSearch>, only events with SomeField = SomeToken should be returned.

(In your case, SomeField is resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue, and you call SomeToken Token_Mr_jobId.)

@livehybrid already gives the solution: do not return only SomeFieldValue in <tokenSearch> and use that value to populate both input label and input value. Use a different strategy in <tokenSearch>, i.e., return SomeFieldValue as the input label, and "SomeField=SomeFieldValue" as the input value:

<fieldForLabel>SomeFieldValue</fieldForLabel>
<fieldForValue>SomeField=SomeFieldValue</fieldForValue>

Then, in your panel search, do not use "SomeField = $SomeToken$". Instead, simply insert $SomeToken$ as a search term.

One more suggestion: do not use a pipe between your index search and the tokenized filter if SomeField is already extracted at search time. That unnecessarily burdens Splunk.

In the following demo dashboard, SomeField is substituted with thread_name from index _internal; thread_name_tok is SomeToken. The key here is <tokenSearch>:

index=_internal component=*
| stats values(thread_name) as token_label
| mvexpand token_label
| eval token_value = "thread_name=" . token_label

This search differs from yours in one critical step: the last eval sets token_value to a search term involving the field name thread_name, not a simple value of this field.
Then, token_label and token_value are used to populate the input label and value, respectively. In this example, I set the "All" label to a zero-length string as its value, which is equivalent to * in the search command but more economical. The full demo dashboard follows. Play with it and fit it to your dataset.

<form version="1.1" theme="light">
  <label>Search for a path the might not exist</label>
  <description>https://community.splunk.com/t5/Splunk-Search/Search-for-a-path-the-might-not-exist/m-p/746683#M241692</description>
  <fieldset submitButton="false">
    <input type="dropdown" token="thread_name_tok" searchWhenChanged="true">
      <label>Select thread_name</label>
      <choice value="">All events</choice>
      <default></default>
      <fieldForLabel>token_label</fieldForLabel>
      <fieldForValue>token_value</fieldForValue>
      <search>
        <query>index=_internal component=*
| stats values(thread_name) as token_label
| mvexpand token_label
| eval token_value = "thread_name=" . token_label</query>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Token value of your selection: &gt;$thread_name_tok$&lt;</title>
      <event>
        <search>
          <query>index=_internal component=* $thread_name_tok$</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
      </event>
    </panel>
  </row>
</form>

Hope this helps.
Thank you for the replies, both of which have been very helpful in resolving this issue. Cleaning up the sslRootCAPath settings on the UF is already a good thing by itself. Investigating the TLS negotiation ultimately led me to realize that, on the indexer, etc/system/local/server.conf did not exist. In the Splunk 9.2.5 Docker image, the default.yml file apparently did not get processed by Ansible. All the other config files (web.conf, authorize.conf) were also nonexistent. The fact that there was no root CA certificate stored on the indexer explains why the log message states "unknown CA".
It was because of the VirusTotal version; all is good after I changed to VirusTotalV3.
Hi, I want to create a simple playbook that takes data from Incident Response and sends it to SOAR for automated analysis, like VirusTotal. I just want VirusTotal to analyze it and write the result in a comment, with status "In Progress" or "Pending". I took a screenshot of the flow, and I think it is very possible, but I got a confusing error: "The supplied status is invalid". Here is my Python source code:

"""
"""
import phantom.rules as phantom
import json
from datetime import datetime, timedelta


@phantom.playbook_block()
def on_start(container):
    phantom.debug('on_start() called')

    # call 'update_event_1' block
    update_event_1(container=container)

    return


@phantom.playbook_block()
def update_event_1(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("update_event_1() called")

    container_artifact_data = phantom.collect2(container=container, datapath=["artifact:*.cef.event_id","artifact:*.id"])

    parameters = []

    # build parameters list for 'update_event_1' call
    for container_artifact_item in container_artifact_data:
        if container_artifact_item[0] is not None:
            parameters.append({
                "status": "in progress",
                "comment": "Tahap analisa via SOAR",
                "event_ids": container_artifact_item[0],
                "context": {'artifact_id': container_artifact_item[1]},
            })

    ## Custom Code Start
    # Write your custom code here...
    ## Custom Code End

    phantom.act("update event", parameters=parameters, name="update_event_1", assets=["soar_es"], callback=ip_reputation_1)

    return


@phantom.playbook_block()
def ip_reputation_1(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("ip_reputation_1() called")

    container_artifact_data = phantom.collect2(container=container, datapath=["artifact:*.cef.src","artifact:*.id"])

    parameters = []

    # build parameters list for 'ip_reputation_1' call
    for container_artifact_item in container_artifact_data:
        if container_artifact_item[0] is not None:
            parameters.append({
                "ip": container_artifact_item[0],
                "context": {'artifact_id': container_artifact_item[1]},
            })

    ## Custom Code Start
    # Write your custom code here...
    ## Custom Code End

    phantom.act("ip reputation", parameters=parameters, name="ip_reputation_1", assets=["virtotv3-trialzake"], callback=decision_1)

    return


@phantom.playbook_block()
def decision_1(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("decision_1() called")

    # check for 'if' condition 1
    found_match_1 = phantom.decision(
        container=container,
        conditions=[
            ["ip_reputation_1:action_result.summary.malicious", ">", 0]
        ],
        delimiter=None)

    # call connected blocks if condition 1 matched
    if found_match_1:
        update_event_2(action=action, success=success, container=container, results=results, handle=handle)
        return

    # check for 'else' condition 2
    update_event_3(action=action, success=success, container=container, results=results, handle=handle)

    return


@phantom.playbook_block()
def update_event_2(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("update_event_2() called")

    comment_formatted_string = phantom.format(
        container=container,
        template="""Information from SOAR : \nSource : {0}\nHarmles : {1} \nMalicious : {2}""",
        parameters=[
            "ip_reputation_1:action_result.data.*.attributes.crowdsourced_context.*.source",
            "ip_reputation_1:action_result.data.*.attributes.last_analysis_stats.harmless",
            "ip_reputation_1:action_result.data.*.attributes.last_analysis_stats.malicious"
        ])

    container_artifact_data = phantom.collect2(container=container, datapath=["artifact:*.cef.event_id","artifact:*.id"])
    ip_reputation_1_result_data = phantom.collect2(container=container, datapath=["ip_reputation_1:action_result.data.*.attributes.crowdsourced_context.*.source","ip_reputation_1:action_result.data.*.attributes.last_analysis_stats.harmless","ip_reputation_1:action_result.data.*.attributes.last_analysis_stats.malicious","ip_reputation_1:action_result.parameter.context.artifact_id"], action_results=results)

    parameters = []

    # build parameters list for 'update_event_2' call
    for container_artifact_item in container_artifact_data:
        for ip_reputation_1_result_item in ip_reputation_1_result_data:
            if container_artifact_item[0] is not None:
                parameters.append({
                    "event_ids": container_artifact_item[0],
                    "status": "Pending",
                    "comment": comment_formatted_string,
                    "context": {'artifact_id': ip_reputation_1_result_item[3]},
                })

    ## Custom Code Start
    # Write your custom code here...
    ## Custom Code End

    phantom.act("update event", parameters=parameters, name="update_event_2", assets=["soar_es"])

    return


@phantom.playbook_block()
def lookup_ip_1(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("lookup_ip_1() called")

    container_artifact_data = phantom.collect2(container=container, datapath=["artifact:*.cef.src","artifact:*.id"])

    parameters = []

    # build parameters list for 'lookup_ip_1' call
    for container_artifact_item in container_artifact_data:
        if container_artifact_item[0] is not None:
            parameters.append({
                "days": 10,
                "ip": container_artifact_item[0],
                "context": {'artifact_id': container_artifact_item[1]},
            })

    ## Custom Code Start
    # Write your custom code here...
    ## Custom Code End

    phantom.act("lookup ip", parameters=parameters, name="lookup_ip_1", assets=["abuseipdb"])

    return


@phantom.playbook_block()
def format_1(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("format_1() called")

    template = """Detail : {0}\nSeverity : {1}\nSource : {2}\nHarmles : {3}\nMalicious : {4}\n"""

    # parameter list for template variable replacement
    parameters = [
        "ip_reputation_1:action_result.data.*.attributes.crowdsourced_context.*.detail",
        "ip_reputation_1:action_result.data.*.attributes.crowdsourced_context.*.severity",
        "ip_reputation_1:action_result.data.*.attributes.crowdsourced_context.*.source",
        "ip_reputation_1:action_result.data.*.attributes.last_analysis_stats.harmless",
        "ip_reputation_1:action_result.data.*.attributes.last_analysis_stats.malicious"
    ]

    ## Custom Code Start
    # Write your custom code here...
    ## Custom Code End

    phantom.format(container=container, template=template, parameters=parameters, name="format_1")

    return


@phantom.playbook_block()
def update_event_3(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("update_event_3() called")

    container_artifact_data = phantom.collect2(container=container, datapath=["artifact:*.cef.event_id","artifact:*.id"])

    parameters = []

    # build parameters list for 'update_event_3' call
    for container_artifact_item in container_artifact_data:
        if container_artifact_item[0] is not None:
            parameters.append({
                "event_ids": container_artifact_item[0],
                "status": "Pending",
                "comment": "Safe from Virus Total",
                "context": {'artifact_id': container_artifact_item[1]},
            })

    ## Custom Code Start
    # Write your custom code here...
    ## Custom Code End

    phantom.act("update event", parameters=parameters, name="update_event_3", assets=["soar_es"])

    return


@phantom.playbook_block()
def on_finish(container, summary):
    phantom.debug("on_finish() called")

    return
Hi @cherrypick

This sounds like a job for INGEST_EVAL. There are great examples at https://github.com/silkyrich/ingest_eval_examples/blob/master/default/transforms.conf from Rich Morgan @ Splunk. For your specific example, the following config should hopefully work; it checks the time format first before setting _time as required:

== props.conf ==
[yourSourceType]
TRANSFORMS-setCustomTime = setJSONTime

== transforms.conf ==
[setJSONTime]
INGEST_EVAL = _time=if(match(date, "\d{4}-\d{2}-\d{2} \d{1,2}:\d{2}:\d{2} [AP]M"), strptime(date, "%Y-%m-%d %I:%M:%S %p"), strptime(date, "%Y-%m-%d"))
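As a side note, the fallback logic in that INGEST_EVAL can be sanity-checked outside Splunk. This is a rough Python equivalent of the same two-format parse, reusing the same regex and format strings (the field name `date` comes from the question; this is an illustration, not how Splunk executes the eval internally):

```python
import re
from datetime import datetime

def parse_date(value):
    # Mirror the INGEST_EVAL: if the value carries a 12-hour timestamp,
    # parse it fully; otherwise fall back to the date-only format.
    if re.match(r"\d{4}-\d{2}-\d{2} \d{1,2}:\d{2}:\d{2} [AP]M", value):
        return datetime.strptime(value, "%Y-%m-%d %I:%M:%S %p")
    return datetime.strptime(value, "%Y-%m-%d")
```

Feeding both variants through a helper like this is a quick way to confirm the regex and strptime formats agree before deploying the transform.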
What do you mean by inputs.conf? What should I configure in that file? Can you please elaborate?
Hi @Cheng2Ready,

please try this:

index=xxxxxx
| eval HDate=strftime(_time,"%Y-%m-%d")
| stats count BY HDate
| eval type="events"
| append [ | inputlookup Date_Test.csv | eval count=0, type="lookup" | fields HDate count type ]
| stats sum(count) AS total values(type) AS type dc(type) AS type_count BY HDate
| where total=0 OR (total>1 AND type_count=1 AND type="events")

In this way, with the first condition (total=0) you check if there's some date without events, and with the second one (total>1 AND type_count=1 AND type="events") you check that there are events with dates not present in the lookup.

The solution has only one issue, and it's inside the requirement: you need to continuously update the lookup, otherwise you'll have false positives created by the old dates in the lookup.

Just for discussion: what do you want to check? Maybe there's another, easier solution.

Ciao.
Giuseppe
This is not a good use of inputlookup. The better command to use is lookup. You then count how many events do not match:

index=xxxxxx
| eval HDate=strftime(_time,"%Y-%m-%d")
| lookup Date_Test.csv HDate output HDate as match
| where isnull(match)
| stats count values(HDate)
| where count != 1

I added values(HDate) in speculation. Don't include it in your alert if the values are not useful.
Hi everyone,

I'm working on integrating Splunk Enterprise with Splunk SOAR using the Splunk App for SOAR Export, and I'm running into an issue where alerts sent from Splunk aren't appearing in SOAR.

Setup details:
- Using the app-to-app connection (not direct API/port 443)
- SOAR server is configured and marked active in the Splunk App for SOAR Export
- SOAR user has the observer and automation roles
- SSL verification is disabled (self-signed cert)
- Splunk and SOAR are on the same VPC/subnet with proper connectivity

Test alert sent from Search & Reporting:

| makeresults
| eval foo="helloo"
| eval src_ip="1.1.1.1"
| table _time, foo, src_ip

The issue:
- No events are appearing in SOAR
- Nothing listed in Event Ingest Status or as an ad hoc search result
- No errors in the Splunk Job Inspector

What I need help with:
- Are there any extra steps required in the new SOAR UI to allow data from Splunk's App for SOAR Export?
- Any known limitations or misconfigurations I might be missing?

Any guidance would be greatly appreciated! Thanks in advance.
Thanks for the links. I am going to read them and check the logs for output errors.
OK, this explains the connections refused when one pipeline queue gets blocked. Thanks. Now I have to understand why I've got blocked pipeline queues.
Hi @kakawun!

While this issue is from a while ago, this may help other users: just wanted to let you know this issue is resolved in 9.3.0 and later releases! If any reference to this fix is needed with support, you can quote SPL-251796. Thanks!
See this example dashboard, which uses a <change> block on the input to change the token:

<form version="1.1" theme="light">
  <label>Backslash escaped input</label>
  <fieldset submitButton="false">
    <input type="text" token="Get_Process_Path" searchWhenChanged="true">
      <label>Enter Path</label>
      <prefix>process_path="*</prefix>
      <suffix>*"</suffix>
      <change>
        <eval token="escaped_path">replace($Get_Process_Path$, "\\\\", "\\\\")</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>Token created from the user's input is <b style="color:blue">[$Get_Process_Path$]</b> and the updated search token applied is <b style="color:red">[$escaped_path$]</b></html>
      <table>
        <search>
          <query>index=_audit $escaped_path$</query>
          <earliest>-60m@m</earliest>
          <latest>now</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
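If it helps to see the escaping in isolation: assuming the intent of the <change> eval above is to double each backslash in the user's input so the literal character survives search-time parsing, the transformation amounts to this small sketch (an illustration of the idea, not Splunk's own implementation):

```python
def escape_backslashes(path):
    # Double every backslash so the literal character survives
    # another round of string parsing downstream.
    return path.replace("\\", "\\\\")
```

Pasting a sample path through a helper like this makes it easy to see exactly what the escaped token should look like before wiring it into the dashboard.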
@livehybrid

Thank you so much for the feedback. To answer your question "Although I'm confused as to why you couldn't do this?":

index=xxxxxx
| eval HDate=strftime(_time,"%Y-%m-%d")
| search NOT [ | inputlookup Date_Test.csv | fields HDate ]
| stats count
| where count>0

Would this also help capture if there were 0 events? The goal is to have the alert trigger on anything except exactly 1 event, so !=1. It needs to alert if there are 0 events found OR more than 1 event.

Either way, I have a scenario where there are 0 events, but it is a mute date in my lookup table and it still fired an alert. It's either that, or because it was a mute date there might have been 1 event, but since it's a mute date it changed it to 0 events, still causing the alert to fire.

Let me know if you need more clarification and I can post what I have set up.
The OS in your first result is "Microsoft Windows 11 Enterprise", whereas the OperatingSystems field in your OS_Outdated.csv lookup does not appear to have "Microsoft" in the name, so naturally it will not match. You will either have to make your OperatingSystems field a wildcarded lookup or massage your data so the two fields contain similar values.

You also have a small issue with your use of fillnull: you specify a field name "outdated", which is lower case, whereas your field from the lookup is Outdated (capital O).

You can try this search:

index=endpoint_defender source="AdvancedHunting-DeviceInfo"
| rex field=DeviceName "(?<DeviceName>\w{3}-\w{1,})."
| eval DeviceName=upper(DeviceName)
| lookup snow_os.csv DeviceName output OS BuildNumber Version
``` Remove the word Microsoft and any following spaces ```
| eval OperatingSystems=replace(OS, "Microsoft\s*", "")
``` Now use this modified field as the lookup field ```
| lookup OS_Outdated.csv OperatingSystems BuildNumber Version OUTPUT Outdated
| fillnull value=false Outdated
| table DeviceName OS BuildNumber Version Outdated
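For intuition, the normalization step in the middle of that search (stripping the vendor prefix before the second lookup) behaves like this small Python sketch of the same regex replace:

```python
import re

def normalize_os(os_name):
    # Same idea as the SPL replace(OS, "Microsoft\s*", ""):
    # drop the word "Microsoft" and any spaces that follow it.
    return re.sub(r"Microsoft\s*", "", os_name)
```

Running your lookup's OperatingSystems values and Defender's OS values through the same normalization is a quick way to confirm the two fields will actually line up before relying on the lookup match.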