Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi, I want to create a simple playbook that takes data from Incident Response and sends it to SOAR for automated analysis, for example with VirusTotal. I just want VirusTotal to analyze the indicator, write the result into a comment, and set the status to "In Progress" or "Pending". I took a screenshot of the flow and I think it is very possible, but I get a confusing error: "The supplied status is invalid". Here is my Python source code:

"""
"""
import phantom.rules as phantom
import json
from datetime import datetime, timedelta


@phantom.playbook_block()
def on_start(container):
    phantom.debug('on_start() called')

    # call 'update_event_1' block
    update_event_1(container=container)

    return


@phantom.playbook_block()
def update_event_1(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("update_event_1() called")

    # phantom.debug('Action: {0} {1}'.format(action['name'], ('SUCCEEDED' if success else 'FAILED')))

    container_artifact_data = phantom.collect2(container=container, datapath=["artifact:*.cef.event_id", "artifact:*.id"])

    parameters = []

    # build parameters list for 'update_event_1' call
    for container_artifact_item in container_artifact_data:
        if container_artifact_item[0] is not None:
            parameters.append({
                "status": "in progress",
                "comment": "Tahap analisa via SOAR",
                "event_ids": container_artifact_item[0],
                "context": {'artifact_id': container_artifact_item[1]},
            })

    ## Custom Code Start
    # Write your custom code here...
    ## Custom Code End

    phantom.act("update event", parameters=parameters, name="update_event_1", assets=["soar_es"], callback=ip_reputation_1)

    return


@phantom.playbook_block()
def ip_reputation_1(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("ip_reputation_1() called")

    container_artifact_data = phantom.collect2(container=container, datapath=["artifact:*.cef.src", "artifact:*.id"])

    parameters = []

    # build parameters list for 'ip_reputation_1' call
    for container_artifact_item in container_artifact_data:
        if container_artifact_item[0] is not None:
            parameters.append({
                "ip": container_artifact_item[0],
                "context": {'artifact_id': container_artifact_item[1]},
            })

    ## Custom Code Start
    # Write your custom code here...
    ## Custom Code End

    phantom.act("ip reputation", parameters=parameters, name="ip_reputation_1", assets=["virtotv3-trialzake"], callback=decision_1)

    return


@phantom.playbook_block()
def decision_1(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("decision_1() called")

    # check for 'if' condition 1
    found_match_1 = phantom.decision(
        container=container,
        conditions=[
            ["ip_reputation_1:action_result.summary.malicious", ">", 0]
        ],
        delimiter=None)

    # call connected blocks if condition 1 matched
    if found_match_1:
        update_event_2(action=action, success=success, container=container, results=results, handle=handle)
        return

    # check for 'else' condition 2
    update_event_3(action=action, success=success, container=container, results=results, handle=handle)

    return


@phantom.playbook_block()
def update_event_2(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("update_event_2() called")

    comment_formatted_string = phantom.format(
        container=container,
        template="""Information from SOAR : \nSource : {0}\nHarmles : {1} \nMalicious : {2}""",
        parameters=[
            "ip_reputation_1:action_result.data.*.attributes.crowdsourced_context.*.source",
            "ip_reputation_1:action_result.data.*.attributes.last_analysis_stats.harmless",
            "ip_reputation_1:action_result.data.*.attributes.last_analysis_stats.malicious"
        ])

    container_artifact_data = phantom.collect2(container=container, datapath=["artifact:*.cef.event_id", "artifact:*.id"])
    ip_reputation_1_result_data = phantom.collect2(container=container, datapath=["ip_reputation_1:action_result.data.*.attributes.crowdsourced_context.*.source", "ip_reputation_1:action_result.data.*.attributes.last_analysis_stats.harmless", "ip_reputation_1:action_result.data.*.attributes.last_analysis_stats.malicious", "ip_reputation_1:action_result.parameter.context.artifact_id"], action_results=results)

    parameters = []

    # build parameters list for 'update_event_2' call
    for container_artifact_item in container_artifact_data:
        for ip_reputation_1_result_item in ip_reputation_1_result_data:
            if container_artifact_item[0] is not None:
                parameters.append({
                    "event_ids": container_artifact_item[0],
                    "status": "Pending",
                    "comment": comment_formatted_string,
                    "context": {'artifact_id': ip_reputation_1_result_item[3]},
                })

    ## Custom Code Start
    # Write your custom code here...
    ## Custom Code End

    phantom.act("update event", parameters=parameters, name="update_event_2", assets=["soar_es"])

    return


@phantom.playbook_block()
def lookup_ip_1(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("lookup_ip_1() called")

    container_artifact_data = phantom.collect2(container=container, datapath=["artifact:*.cef.src", "artifact:*.id"])

    parameters = []

    # build parameters list for 'lookup_ip_1' call
    for container_artifact_item in container_artifact_data:
        if container_artifact_item[0] is not None:
            parameters.append({
                "days": 10,
                "ip": container_artifact_item[0],
                "context": {'artifact_id': container_artifact_item[1]},
            })

    ## Custom Code Start
    # Write your custom code here...
    ## Custom Code End

    phantom.act("lookup ip", parameters=parameters, name="lookup_ip_1", assets=["abuseipdb"])

    return


@phantom.playbook_block()
def format_1(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("format_1() called")

    template = """Detail : {0}\nSeverity : {1}\nSource : {2}\nHarmles : {3}\nMalicious : {4}\n"""

    # parameter list for template variable replacement
    parameters = [
        "ip_reputation_1:action_result.data.*.attributes.crowdsourced_context.*.detail",
        "ip_reputation_1:action_result.data.*.attributes.crowdsourced_context.*.severity",
        "ip_reputation_1:action_result.data.*.attributes.crowdsourced_context.*.source",
        "ip_reputation_1:action_result.data.*.attributes.last_analysis_stats.harmless",
        "ip_reputation_1:action_result.data.*.attributes.last_analysis_stats.malicious"
    ]

    ## Custom Code Start
    # Write your custom code here...
    ## Custom Code End

    phantom.format(container=container, template=template, parameters=parameters, name="format_1")

    return


@phantom.playbook_block()
def update_event_3(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("update_event_3() called")

    container_artifact_data = phantom.collect2(container=container, datapath=["artifact:*.cef.event_id", "artifact:*.id"])

    parameters = []

    # build parameters list for 'update_event_3' call
    for container_artifact_item in container_artifact_data:
        if container_artifact_item[0] is not None:
            parameters.append({
                "event_ids": container_artifact_item[0],
                "status": "Pending",
                "comment": "Safe from Virus Total",
                "context": {'artifact_id': container_artifact_item[1]},
            })

    ## Custom Code Start
    # Write your custom code here...
    ## Custom Code End

    phantom.act("update event", parameters=parameters, name="update_event_3", assets=["soar_es"])

    return


@phantom.playbook_block()
def on_finish(container, summary):
    phantom.debug("on_finish() called")

    ## Custom Code Start
    # Write your custom code here...
    ## Custom Code End

    return
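One thing worth checking (an assumption, not a confirmed root cause): the "update event" action generally rejects a status string that does not exactly match one of the notable-event status labels configured on the Splunk ES side, so guarding the value before building the parameters can make such mismatches explicit. The label set below is illustrative only and must be confirmed against your ES configuration.

# Hypothetical guard -- the valid label set must be confirmed in Splunk ES.
VALID_STATUSES = {"new", "in progress", "pending", "resolved", "closed"}


def safe_status(requested, fallback="new"):
    """Return the requested status only if it matches a known label."""
    return requested if requested.lower() in VALID_STATUSES else fallback


if __name__ == "__main__":
    print(safe_status("In Progress"))    # kept as-is
    print(safe_status("Tahap analisa"))  # falls back to "new"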
Hi everyone, I'm working on integrating Splunk Enterprise with Splunk SOAR using the Splunk App for SOAR Export, and I'm running into an issue where alerts sent from Splunk aren't appearing in SOAR.

Setup details:
- Using an App-to-App connection (not direct API/port 443)
- SOAR server is configured and marked active in the Splunk App for SOAR Export
- SOAR user has the observer and automation roles
- SSL verification is disabled (self-signed cert)
- Splunk and SOAR are on the same VPC/subnet with proper connectivity

Test alert sent from Search & Reporting:

| makeresults
| eval foo="helloo"
| eval src_ip="1.1.1.1"
| table _time, foo, src_ip

The issue:
- No events are appearing in SOAR
- Nothing listed in Event Ingest Status or as an ad hoc search result
- No errors in the Splunk Job Inspector

What I need help with:
- Are there any extra steps required in the new SOAR UI to allow data from Splunk's App for SOAR Export?
- Any known limitations or misconfigurations I might be missing?

Any guidance would be greatly appreciated! Thanks in advance.
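As a sanity check that bypasses the export app entirely (a sketch; host, token, and label are placeholders), a container created straight through the SOAR REST API confirms whether connectivity and authorization are fine, which would narrow the problem down to the export-app configuration:

curl -k -H "ph-auth-token: <automation-user-token>" \
     -d '{"name": "connectivity test", "label": "events"}' \
     https://<soar-host>/rest/container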
I have a JSON file which contains a "date" field. The date field in my data can be either of the format %Y-%m-%d %H:%M:%S (e.g. 2025-05-23 9:35:35 PM) or %Y-%m-%d (e.g. 2025-05-23). The only way to ingest this JSON is via manual ingestion. When trying to set the _time field on ingest, setting the timestamp format to %Y-%m-%d %H:%M:%S fails and defaults to the wrong _time value for date fields in the %Y-%m-%d format. However, setting the timestamp format to %Y-%m-%d won't capture the HMS part. Is there a way to coalesce these so that it checks whether HMS is present and, if so, applies the %Y-%m-%d %H:%M:%S format? Or is there a workaround so that at least the _time value is accurate on ingestion?
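Since the file is uploaded manually anyway, one workaround (a sketch, assuming the field is literally named "date" and the file is a JSON array; note the sample value above is actually 12-hour with AM/PM) is to normalize the date field to a single format before upload, so a single TIME_FORMAT covers every event:

# normalize_dates.py -- pre-process the file before manual ingestion
import json
from datetime import datetime

def normalize(value):
    # Try the long forms first, then the date-only form; leave anything else untouched.
    for fmt in ("%Y-%m-%d %I:%M:%S %p", "%Y-%m-%d %H:%M:%S", "%Y-%m-%d"):
        try:
            return datetime.strptime(value, fmt).strftime("%Y-%m-%d %H:%M:%S")
        except ValueError:
            continue
    return value

with open("input.json") as f:
    records = json.load(f)

for rec in records:
    if "date" in rec:
        rec["date"] = normalize(rec["date"])

with open("normalized.json", "w") as f:
    json.dump(records, f, indent=2)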
Situation: I have 2 data sets.

Dataset 1 is a set of logs which includes IP addresses. When aggregated, there are 200,000+ IP addresses.
Dataset 2 is a dataset we pull in once a day which includes identifying information for those IP addresses, such as hostname. This dataset is even larger.

I want to map the hostname from Dataset 2 to the IP address in Dataset 1. I feel like I've tried everything (join, append + eventstats, subsearching) and unfortunately all of them have a limit which prevents me from getting the full set mapped:
- join limit: 50,000
- append limit: 10,000
- subsearch limit: 10,000

I've come across this same sort of issue before and have dropped projects because there doesn't seem to be an obvious way to get around these limits without increasing settings like subsearch_maxout for our whole environment by at least 10x. I've started looking into the map command, but the documentation seems extremely vague on the limits ("Zero ( 0 ) does not equate to unlimited searches.").

The only thing I've gotten to work is to manually break the second data source up into groups of 10,000 or fewer rows and append + eventstats each group one by one, which is a complete nightmare of a query, and additional appends need to be created any time the second data set changes or grows. I'm growing tired of not having a good way of tackling this issue, so I'm seeking advice from any fellow Splunkers who have successfully "joined" larger datasets.

Some example searches to help with the situation:

Dataset 1 search:
index=my_logs | stats count by ip

Dataset 2 search:
index=my_hosts | stats values(hostname) as hostname by ip
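One pattern that avoids the join/append/subsearch limits entirely is to read both indexes in a single search and let stats do the correlation. This is a sketch reusing the index and field names from the examples above (an outputlookup/lookup pairing for Dataset 2 is the other common route):

(index=my_logs) OR (index=my_hosts)
| fields index, ip, hostname
| stats count(eval(index="my_logs")) AS log_events, values(hostname) AS hostname BY ip
| where log_events > 0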
This is what I have set up:

index=xxxxxx
| eval HDate=strftime(_time,"%Y-%m-%d")
| search NOT [ | inputlookup Date_Test.csv | fields HDate ]

The search always returns 1 event. The alert condition is: if it sees more than 1 event OR 0 events, trigger an alert.

The issue I'm facing now is with the lookup table dates. Let's say I have April 14th set up in my lookup file "Date_Test.csv". On April 14th the alert still fired; I'm not sure if that's because it saw 0 events. It is supposed to be muted on that day. Any insight and help would be much appreciated.
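That may be exactly what is happening: on a muted day the NOT [...] filter removes the one expected event, the search returns 0 results, and the "0 events" branch of the trigger condition fires. One alternative (a sketch; the field and file names follow the post, and the alert condition would become "number of results > 0") keeps the mute check out of the event filter:

index=xxxxxx
| stats count AS event_count
| appendcols
    [| inputlookup Date_Test.csv
     | where HDate == strftime(now(), "%Y-%m-%d")
     | stats count AS muted_today ]
| fillnull value=0 muted_today
| where muted_today == 0 AND (event_count == 0 OR event_count > 1)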
I just needed some help from Splunk regarding a request from our clients. So, a client is migrating from Splunk to Sentinel but has about 25 TBs of data still on Splunk cloud which they want to keep for at least a year. The data should be readable for investigations and compliance purposes. I know the client might need Splunk professional services for all options mentioned above since it's Splunk Cloud but what would be the best and most cost-effective solution for them? Can you please help and advise what could be the best way forward.
The AppDynamics Machine Agent supports remediation scripts, which allow you to define automated or manual actions in response to specific alerts or conditions. These scripts can be triggered when a predefined health rule violation occurs, enabling proactive responses to issues in your environment. Below is an overview of how remediation scripts work in the Machine Agent and how to configure and use them.

What Are Remediation Scripts?
Remediation scripts are custom scripts (written in languages like Shell, Python, Batch, or PowerShell) that are executed by the Machine Agent when triggered by health rule violations. These scripts can perform various actions, such as restarting services, freeing up memory, or notifying teams.

Use cases for remediation scripts include:
1. Restarting Services or Applications: automatically restart a failed service (e.g., web server or database).
2. Clearing Logs or Temporary Files: free up disk space by removing unnecessary files.
3. Scaling Infrastructure: trigger an API call to scale up/down infrastructure (e.g., AWS, Kubernetes).
4. Sending Custom Notifications: send notifications to external systems like Slack, PagerDuty, or email.
5. Custom Troubleshooting Steps: collect diagnostics like thread dumps, heap dumps, or system logs.

Step-by-step guide
The steps to configure a remediation script are documented here → https://docs.appdynamics.com/appd/24.x/25.4/en/splunk-appdynamics-essentials/alert-and-respond/actions/remediation-actions

Practical example. Use case: enable debug-level or trace-level logs on a health rule violation, for troubleshooting purposes.

1. Setting the health rule
Docs: https://docs.appdynamics.com/appd/24.x/25.4/en/splunk-appdynamics-essentials/alert-and-respond/configure-health-rules
  1. Select the health rule type. Remediation actions are not available for servers. You can create and run a remediation action for a health rule with application, tier, or node as an affected entity. Ensure that you select the same entities when you define the Object Scope for the associated policy.
  2. Affects Nodes.
  3. Select specific Nodes.
  4. Select one or multiple nodes.
  5. Add conditions for the health rule.
  6. Select a single metric or a metric expression (here we select the single metric value cpu|%Busy).

2. Setting the action
Docs: https://docs.appdynamics.com/appd/24.x/25.4/en/splunk-appdynamics-essentials/alert-and-respond/actions/remediation-actions#id-.RemediationActionsv24.1-RemediationExample
  1. Set the action name.
  2. The path to the trace.sh file.
  3. The path to log files.
  4. Script timeout in minutes, set to 5.
  5. Set email for approval (if required) and Save.

3. Setting the policies to trigger the action
Docs: https://docs.appdynamics.com/appd/24.x/25.4/en/splunk-appdynamics-essentials/alert-and-respond/policies/configure-policies
  1. Policy name.
  2. Enabled.
  3. Select the health rule violation event.
  4. Select specific Health Rules.
  5. Select the configured Health Rules.
  6. Select specific objects.
  7. From Tiers and Nodes.
  8. Select Nodes.
  9. Specific nodes.
  10. Select one or multiple nodes.
  11. Add the action to be executed.

On the agent's side:
Create the trace.sh script and place it in the /local-scripts/ directory:

#!/bin/bash

# Define the target file
TARGET_FILE="matest/conf/logging/log4j.xml"

# Backup the original file
cp "$TARGET_FILE" "${TARGET_FILE}.backup"

# Function to switch the logging level from one value to another
update_logging_level() {
    local from=$1
    local to=$2
    echo "Updating logging level from '$from' to '$to'..."

    # Use sed to change all loggers at the old level to the new level
    sed -i "s/level=\"$from\"/level=\"$to\"/g" "$TARGET_FILE"

    if [ $? -eq 0 ]; then
        echo "Logging level successfully updated to '$to'."
    else
        echo "Failed to update logging level."
        exit 1
    fi
}

# Set the logging level to 'trace'
update_logging_level "info" "trace"

# Wait for 10 minutes (600 seconds)
echo "Waiting for 10 minutes..."
sleep 600

# Revert the logging level back to 'info'
update_logging_level "trace" "info"

echo "Logging level reverted to 'info'."

When the action is triggered, the script changes the log level from info to trace and reverts the change after 10 minutes.

Prerequisites for Local Script Actions
- The Machine Agent must be installed and running on the host on which the script executes. To see a list of installed Machine Agents for your application, click "View machines with machine-agent installed" in the bottom left corner of the remediation script configuration window.
- To be able to run remediation scripts, the Machine Agent must be connected to a SaaS Controller via SSL. Remediation script execution is disabled if the Machine Agent connects to a SaaS Controller on an unsecured (non-SSL) HTTP connection.
- The Machine Agent and the APM agent must be on the same host.
- The Machine Agent OS user must have full permissions to the script file and the log files generated by the script and/or its associated child processes.
- The script must be placed in <agent install directory>\local-scripts.
- The script must be available on the host on which it executes.
- Processes spawned from the scripts must be daemon processes.
Hi Splunkers,

I would like to understand a tricky point. I'm using a distributed environment with 2 intermediate universal forwarders. They have to deal with 1.2 TB of data per day.

1 - Strangely, these UFs have their parsing queues in use (top 1 of the queue usage!), even though these forwarders are UFs.

2 - These UFs have 4 pipelines. If the parsing queue of one of these pipelines fills up, the entire UF refuses connections from upstream forwarders.

Their queue sizes were increased to 1 GB (input / parsing / output ...), but sometimes the situation comes back.

Have you got any idea what could be happening?
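For reference, these are the knobs usually involved on an intermediate forwarder (a sketch only; the values shown are illustrative and simply mirror the sizes mentioned above, not recommendations):

# server.conf on the intermediate universal forwarder
[general]
parallelIngestionPipelines = 4

[queue=parsingQueue]
maxSize = 1GB

# outputs.conf -- the output queue is sized separately
[tcpout]
maxQueueSize = 1GB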
I recently started using the HEC with TLS on my standalone testing instance and now I am seeing some behavior that I cannot make sense of. I assume it is related to the fact that I configured both the TCP input and the HEC input to use different certificates. The HEC input is working fine, but when a UF tries to connect to the TCP input, I get this error:

05-22-2025 07:39:18.469 +0000 ERROR TcpInputProc [2339416 FwdDataReceiverThread] - Error encountered for connection from src=REDACTED:31261. error:14089086:SSL routines:ssl3_get_client_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.
05-22-2025 07:39:18.555 +0000 ERROR X509Verify [2339416 FwdDataReceiverThread] - Client X509 certificate (CN=REDACTED,CN=A,OU=B,DC=C,DC=D,DC=E) failed validation; error=19, reason="self signed certificate in certificate chain"
05-22-2025 07:39:18.555 +0000 WARN SSLCommon [2339416 FwdDataReceiverThread] - Received fatal SSL3 alert. ssl_state='error', alert_description='unknown CA'.
05-22-2025 07:39:18.555 +0000 ERROR TcpInputProc [2339416 FwdDataReceiverThread] - Error encountered for connection from src=10.253.192.20:32991. error:14089086:SSL routines:ssl3_get_client_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.

On the UF, I can see the following error message:

05-22-2025 07:39:17.953 +0000 WARN SSLCommon [1074 TcpOutEloop] - Received fatal SSL3 alert. ssl_state='SSLv3 read server session ticket A', alert_description='unknown CA'.
05-22-2025 07:39:17.953 +0000 ERROR TcpOutputFd [1074 TcpOutEloop] - Connection to host=REDACTED:9997 failed

Below are my config files. I appreciate any pointers as to what I did wrong.

Note: All files which store certificates are in the "usual" order:
- For clientCert and serverCert: first the certificate, then the private key
- For sslRootCAPath: first the issuing CA, then the root CA

Standalone/Indexer:

server.conf
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/cert.pem

inputs.conf
[splunktcp-ssl:9997]
disabled=0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/cert0.pem
sslPassword = REDACTED
requireClientCert = true
sslVersions = tls1.2

[http]
disabled = 0
enableSSL = 1
serverCert = /opt/splunk/etc/auth/mycerts/cert1.pem
sslPassword = REDACTED

[http://whatthehec]
disabled = 0
token = REDACTED

UF:

server.conf
[sslConfig]
serverCert = /mnt/certs/cert0.pem
sslPassword = REDACTED
sslRootCAPath = /mnt/certs/cert.pem
sslVersions = tls1.2

outputs.conf
[tcpout]
defaultGroup = def
forwardedindex.2.whitelist = (_audit|_introspection|_internal)

[tcpout:def]
useACK = true
server = server:9997
autoLBFrequency = 180
forceTimebasedAutoLB = false
autoLBVolume = 5000000
maxQueueSize = 100MB
connectionTTL = 300
heartbeatFrequency = 350
writeTimeout = 300
sslVersions = tls1.2
clientCert = /mnt/certs/cert0.pem
sslRootCAPath = /mnt/certs/cert.pem
sslPassword = REDACTED
sslVerifyServerCert = true
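The first error message already points at openssl verify; a quick way to see which side's chain is broken (a sketch using the paths from the configs above, run wherever you have copies of both files) is:

# Does the CA bundle on the indexer validate the client certificate the UF presents?
openssl verify -CAfile /opt/splunk/etc/auth/mycerts/cert.pem /mnt/certs/cert0.pem

# Does the CA bundle on the UF validate the indexer's server certificate?
openssl verify -CAfile /mnt/certs/cert.pem /opt/splunk/etc/auth/mycerts/cert0.pem

# Show issuer and subject of the client certificate, to compare against the CA bundle contents
openssl x509 -in /mnt/certs/cert0.pem -noout -issuer -subject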
Greetings,

I have been reading through documentation and responses on here about filtering out specific events at the heavy forwarder (trying to reduce our daily ingest). In the local folder of our Splunk_TA_juniper app I have created a props.conf and a transforms.conf and set owner/permissions to match the other .conf files.

props.conf:
# Filter teardown events from Juniper syslogs into the nullqueue
[juniper:junos:firewall:structured]
TRANSFORMS-null = setnull

transforms.conf:
# Filter juniper teardown logs to nullqueue
[setnull]
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue

I restarted the Splunk service... but I'm still getting these events. Not sure what I did wrong. I pulled some raw event text and tested the regex in PowerShell (it worked with -match). Any help would be greatly appreciated!
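One way to rule out a config-not-loaded or wrong-sourcetype problem (a sketch; run it on the heavy forwarder, and compare the stanza name against the sourcetype the events actually carry before index-time processing):

$SPLUNK_HOME/bin/splunk btool props list juniper:junos:firewall:structured --debug
$SPLUNK_HOME/bin/splunk btool transforms list setnull --debug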
Hello All,

I have a question which I am not able to find an answer for, hence I'm looking for ideas and suggestions from fellow community members. We use Splunk Enterprise Security in our organization and I am trying to build a correlation search that generates a finding (or intermediate finding) in Mission Control based on Microsoft Defender incidents. As you probably know, a Microsoft Defender incident is a combination of different alerts and can include multiple entities. I have a search which gives me all the details, but I am struggling to auto-populate the identities data from the Splunk identities lookup. Sample data is below.

My questions are:
1. How can I enrich the identities in the incident with Splunk ES identities data? Is this not the right way to create this search? My objective is to have a finding in Splunk ES if Defender generates any incident.
2. Assuming this works somehow, how can I create the drill-down searches so that the SOC can see supporting data (such as sign-in logs for a user, say user1), given that identities is a multivalue field?
3. Should I use Defender alerts (as opposed to incidents) to create an intermediate finding and then let Splunk run the risk-based rules to trigger a finding? Alerts can have multiple entities (users, IPs, devices etc.) as well, so I might end up with similar issues again.
4. Any other suggestions that others may have implemented?

Sample incident:
incidentId = 123456
incidentUri = https://security.microsoft.com/incidents/123456?tid=XXXXXXX
incidentName = Email reported by user as malware or phish involving multiple users
alertId(s) = 1a2b3c4d
alerts_count = 1
category = InitialAccess
createdTime = 2025-05-08T09:43:20.95Z
identities = ip1, user1, user2, user3, mailbox1
identities_count = 6
serviceSource(s) = MicrosoftDefenderForOffice

Thanks
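For the enrichment part, one possible sketch (the lookup name identity_lookup_expanded and its identity field are what the ES Asset & Identity framework typically exposes, but both should be confirmed in your environment) is to expand the multivalue identities field, enrich per identity, and fold the result back per incident:

... your Defender incident search ...
| mvexpand identities
| lookup identity_lookup_expanded identity AS identities OUTPUTNEW bunit priority category AS identity_category
| stats values(*) AS * BY incidentId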
I have data that returns an ip field with these values:

Ip = 0.0.0.11
Ip = 0.0.0.12

There is a lookup that contains a field ipaddress with these values:

0.0.0.11
0.0.0.13

As you can see, 0.0.0.12 is missing from the lookup. I need a search that returns 0.0.0.11 (which exists in both the query result and the lookup) as well as 0.0.0.12, with entries that are not in the lookup rewritten as ip=0.0.0.0 in the result.
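A minimal sketch (the lookup name my_ip_lookup is a placeholder for your actual lookup definition):

... base search returning Ip ...
| lookup my_ip_lookup ipaddress AS Ip OUTPUTNEW ipaddress AS matched_ip
| eval Ip=if(isnull(matched_ip), "0.0.0.0", Ip)
| fields - matched_ip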
Hi everyone,

I'm working on improving our incident response and monitoring setup using Splunk, and I have a few questions I hope someone can help with:

1. Bulk incident data retrieval during downtime: What's the best way to retrieve a large volume of incident data from Splunk (via REST API) for a specific timeframe, especially during known downtime periods? Are there recommended search queries or techniques to ensure we capture everything that occurred during those windows?
2. Querying individual event data via endpoints: How can we query Splunk endpoints (e.g., via REST API) to retrieve detailed data for individual events or incidents? Any examples or best practices would be greatly appreciated.
3. Customizing webhook notifications: Is it possible to modify the structure or content of webhook notifications sent from Splunk without using third-party apps like Better Webhooks or Alert Managers? If so, how can this be done natively within Splunk?

Thanks in advance for any guidance or examples you can share!

References: Splunk Enterprise 6.2 Overview, REST Endpoint Examples
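For the bulk-retrieval question, one pattern (a sketch; host, credentials, index and the time window are placeholders) is the export endpoint, which streams all results for a fixed earliest/latest window in a single call:

curl -k -u admin:changeme https://splunk.example.com:8089/services/search/jobs/export \
     --data-urlencode search='search index=main earliest="05/20/2025:00:00:00" latest="05/20/2025:06:00:00"' \
     -d output_mode=json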
Hey everyone,

I'm trying to configure a new server in the SOAR UI, but I'm running into this error:

Error message: There was an error adding the server configuration. On SOAR: Verify server's 'Allowed IPs' and authorization configuration.
Status: 500
Text: JSON reply had no "payload" value

I've already double-checked the basic config, but still no luck. From what I understand, this might be related to:
- Missing or misconfigured Allowed IPs on the SOAR server
- Improper authorization settings
- Possibly an issue with the server not returning the expected JSON format

Has anyone faced this before or have any ideas on how to troubleshoot it? Any guidance or checklist would be super helpful. Thanks in advance!
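One quick way to separate an authorization problem from an app problem (a sketch; host and token are placeholders) is to call the SOAR REST API directly from the Splunk search head with the same automation user's token and check that a clean JSON reply comes back:

curl -k -H "ph-auth-token: <automation-user-token>" https://<soar-host>/rest/version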
I've obtained this information from VirusTotal, and I want to create a playbook to check IP reputation and retrieve the results. I want to add a decision so that, if the result is greater than 0, the playbook writes a note stating "It's malicious according to VirusTotal." You can see this in the example: the Community Score, or information like "4/94 security vendors flagged this." I want to compare against that VirusTotal value from within the playbook. However, when I run it, it only shows "detected urls: 2". Can someone explain this?
Hello,

I'm looking to set up a log retention policy for a specific index, for example index=test. Here's what I'd like to configure:
- Total retention time = 24 hours
- First 12 hours in hot+warm, then
- Next 12 hours in cold
- After that, the data should be archived (not deleted)

How exactly should I configure this, please? Also, does the number of buckets need to be adjusted to support this setup properly on such a short timeframe?

Thanks in advance for your help.
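A rough starting point, as a sketch with illustrative values rather than a verified design: hot-to-warm rolling can be driven by time via maxHotSpanSecs, but warm-to-cold rolling is driven by bucket counts/size rather than age, so the 12-hour split has to be approximated (here via hourly hot buckets and a warm bucket count of 12).

# indexes.conf
[test]
homePath   = $SPLUNK_DB/test/db
coldPath   = $SPLUNK_DB/test/colddb
thawedPath = $SPLUNK_DB/test/thaweddb
maxHotSpanSecs = 3600                   # roll hot buckets hourly for fine-grained aging
maxWarmDBCount = 12                     # oldest warm buckets roll to cold after roughly 12 hours
frozenTimePeriodInSecs = 86400          # freeze after 24 hours total retention
coldToFrozenDir = /archive/splunk/test  # copy frozen buckets here instead of deleting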
Hi,

My dashboard has a few text boxes and I'm trying to make the inputs as user friendly as possible. I came across multiple issues which I have solved with previous posts; however, there is a conflict between the solutions that prevents me from implementing both at the same time.

#1 - If a text input is empty, then that field should be ignored in the search. This can be fixed by adding a prefix and suffix. Ideally we can also input partial paths, so there is also an implicit * character.

<input type="text" token="Get_Process_Path">
  <label>Process Name or Path</label>
  <prefix>process_path="*</prefix>
  <suffix>*"</suffix>
</input>

https://community.splunk.com/t5/Dashboards-Visualizations/Evaluating-form-field-if-not-null/td-p/18164

#2 - Interpret backslash characters as text so we don't need to manually add \\ to every path. The |s filter for tokens fixed this.

process_path=$Get_Process_Path|s$

https://community.splunk.com/t5/Dashboards-Visualizations/How-do-you-escape-backslashes-in-user-input-and-then-use-that/m-p/142096

I can get both of these working on their own but not at the same time. Is there a way to do this or do I need a different approach?

Thanks.
Hi,

I have the following data (below). I have a situation where I want to search for "*" and have it return all the data:

resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue = "*"

However, this works for 99.9% of my data but not for the event below: that path is not present there, so when I run the wildcard search, that event is excluded and I get no results for it. I am looking for all data when searching with the *, including events where the field does not exist. Is there any way I can still get that data back?

{"resourceSpans":[{"resource":{"attributes":[{"key":"process.pid","value":{"intValue":"600146"}},{"key":"service.instance.id","value":{"stringValue":"003nhhk3"}},{"key":"service.name","value":{"stringValue":"LAUNCHERMXMARKETRISK_MPC"}},{"key":"service.namespace","value":{"stringValue":"LAUNCHER"}},{"key":"telemetry.sdk.language","value":{"stringValue":"java"}},{"key":"telemetry.sdk.name","value":{"stringValue":"opentelemetry"}},{"key":"telemetry.sdk.version","value":{"stringValue":"1.34.0"}},{"key":"mx.env","value":{"stringValue":"dell945srv:13003"}}]},"scopeSpans":[{"scope":{"name":"mx-traces-api","version":"1.0.0"},"spans":[{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"cbf88ed07b403b48","parentSpanId":"3cfc7d85786b676b","name":"createSubmission","kind":1,"startTimeUnixNano":"1747152946314481406","endTimeUnixNano":"1747152946314775297","status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"8ff7fabcab4b12d0","parentSpanId":"3cfc7d85786b676b","name":"createSubmission","kind":1,"startTimeUnixNano":"1747152946353054099","endTimeUnixNano":"1747152946353187644","status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"4b14e49df1e1ffd8","parentSpanId":"3cfc7d85786b676b","name":"createSubmission","kind":1,"startTimeUnixNano":"1747152946474942393","endTimeUnixNano":"1747152946475042609","status":{}},{"traceId":"10731f4b1d19380ceb33ae33672dbd5f","spanId":"169b89bf118931d8","parentSpanId":"3cfc7d85786b676b","name":"createSubmission","kind":1,"startTimeUnixNano":"1747152946488875310","endTimeUnixNano":"1747152946488933120","status":{}}]}]}]}
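One possible workaround (a sketch; the index name is a placeholder) is to copy the value into a field that always exists, defaulting to "*" when the path is missing, and filter on that instead. With a dashboard token in place of the literal "*", events without the path would then only match when the token is the wildcard:

index=otel_traces
| eval sv=coalesce('resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue', "*")
| search sv="*"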
I am developing a custom streaming command. During tests and debugging I noticed the command works fine in this search:

index="_internal" | head 1 | table host | customcommand

and produces the following result:

<class 'generator'>

But when I use the command in the following search it produces no results:

index="_internal" | head 1 | customcommand

This is the code:

@Configuration()
class CustomCommand(StreamingCommand):
    def stream(self, events):
        yield {"event": str(type(events))}

and this is commands.conf:

[customcommand]
chunked = true
filename = customcommand.py
python.version = python3
requires_srinfo = true
streaming = true

How can I fix that?
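One possible cause (an assumption, not a confirmed diagnosis): when the pipeline delivers full events, emitting a brand-new single-field record instead of the incoming records can leave the output rows without the fields the rest of the search expects. A self-contained sketch that passes the incoming records through and annotates them, using the standard splunklib dispatch pattern:

#!/usr/bin/env python
import sys
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration


@Configuration()
class CustomCommand(StreamingCommand):
    def stream(self, records):
        for record in records:
            # annotate each incoming record instead of replacing it
            record["event"] = str(type(records))
            yield record


if __name__ == "__main__":
    dispatch(CustomCommand, sys.argv, sys.stdin, sys.stdout, __name__)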
Hello,

I want to build a search. I am using:

index=endpoint_defender source="AdvancedHunting-DeviceInfo"
| rex field=DeviceName "(?<DeviceName>\w{3}-\w{1,})."
| eval DeviceName=upper(DeviceName)

This gives me the device names. Now:

| lookup snow_os.csv DeviceName output OS BuildNumber Version

From this lookup I am matching the device names and getting OS, BuildNumber and Version as output. From these fields, I now want to compare against another lookup to determine whether each operating system is outdated or not. How can I do this?
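A possible sketch, assuming the second lookup is a CSV you maintain (here called os_lifecycle.csv with columns OS, Version and SupportStatus, all placeholder names to be replaced with your own):

index=endpoint_defender source="AdvancedHunting-DeviceInfo"
| rex field=DeviceName "(?<DeviceName>\w{3}-\w{1,})."
| eval DeviceName=upper(DeviceName)
| lookup snow_os.csv DeviceName OUTPUT OS BuildNumber Version
| lookup os_lifecycle.csv OS Version OUTPUTNEW SupportStatus
| eval outdated=if(isnull(SupportStatus) OR SupportStatus!="Supported", "yes", "no")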