I am working on a playbook and facing a challenge in synchronizing and comparing the outputs of two different actions, specifically domain reputation checks via the VirusTotal and Cisco Umbrella apps, executed on multiple artifacts within a container (the apps mentioned are just an example).
Below are the two challenges I'm facing:
- Synchronizing and comparing action outputs: my main issue is producing an output that lets me verify and compare which IOCs have been flagged as malicious by both the VirusTotal and Cisco Umbrella apps. The current setup runs both actions on every artifact in the container, but I'm struggling with how to gather and compare those results to determine which IOCs are high-risk (flagged by both apps) versus low-risk (flagged by only one app); a rough sketch of the logic I have in mind follows this list.
- Filtering logic limitation in Splunk SOAR: the filter logic is applied at the container level, not at the individual artifact level. This is problematic when a container holds multiple IOCs, because benign IOCs can still end up in the final analysis even after the filter is applied. I need an effective way to ensure that only the artifacts identified as potentially malicious are carried through to the final output.
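To make the first point more concrete, here is a rough sketch of the comparison logic I have in mind, written as a custom code block / custom function callback. The action block names (`domain_reputation_vt`, `domain_reputation_umbrella`), the summary datapaths, and the thresholds are placeholders, since the exact datapaths depend on the apps and versions in use:

```python
import phantom.rules as phantom


def compare_domain_verdicts(action=None, success=None, container=None, results=None,
                            handle=None, filtered_artifacts=None, filtered_results=None, **kwargs):
    """Compare the per-artifact verdicts returned by the two reputation actions."""

    # Collect (domain, artifact_id, verdict) triples from each action.
    # Block names and summary datapaths below are placeholders.
    vt_data = phantom.collect2(
        container=container,
        datapath=[
            "domain_reputation_vt:action_result.parameter.domain",
            "domain_reputation_vt:action_result.parameter.context.artifact_id",
            "domain_reputation_vt:action_result.summary.malicious",        # placeholder datapath
        ],
        action_results=results,
    )
    umb_data = phantom.collect2(
        container=container,
        datapath=[
            "domain_reputation_umbrella:action_result.parameter.domain",
            "domain_reputation_umbrella:action_result.parameter.context.artifact_id",
            "domain_reputation_umbrella:action_result.summary.risk_score",  # placeholder datapath
        ],
        action_results=results,
    )

    # Per-app sets of domains considered malicious (thresholds are illustrative)
    vt_bad = {domain for domain, artifact_id, malicious in vt_data if malicious}
    umb_bad = {domain for domain, artifact_id, score in umb_data if score and score > 50}

    high_risk = sorted(vt_bad & umb_bad)   # flagged by both apps
    low_risk = sorted(vt_bad ^ umb_bad)    # flagged by exactly one app

    # Keep the artifact ids of flagged domains so downstream blocks can act
    # only on those artifacts instead of on everything in the container
    flagged_artifact_ids = {artifact_id for domain, artifact_id, _ in vt_data + umb_data
                            if domain in vt_bad | umb_bad}

    phantom.debug("High-risk domains (both apps): {}".format(high_risk))
    phantom.debug("Low-risk domains (one app only): {}".format(low_risk))
    phantom.debug("Artifact ids to carry forward: {}".format(sorted(flagged_artifact_ids)))

    return
```

Is this roughly the right approach, or is there a more idiomatic way to join the results of two actions per artifact?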
Below is an example of the scenario and the desired output:
- A container contains multiple artifacts.
- Actions executed: VirusTotal and Cisco Umbrella reputation checks on all artifacts.
- Expected output: a list or summary indicating which artifacts are flagged as malicious by both apps (classified as high-risk) and which are flagged by only one app (classified as low-risk); an illustrative example of this summary follows.
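To make the expected output concrete, this is the shape of summary I'd like the playbook to end up with, whether printed via phantom.debug or attached to the container as a note (the domain names below are purely illustrative):

```python
# Illustrative shape of the desired summary; domain names are made up
expected_summary = {
    "high_risk": ["malicious-example.com"],    # flagged as malicious by both apps
    "low_risk": ["suspicious-example.net"],    # flagged as malicious by only one app
}
```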
I am looking for advice on how to structure the playbook to efficiently filter and analyze these artifacts, ensuring an accurate severity assessment based on the app action results. Do you have any insights, examples, or best practices for defining the filtering logic and analysis process in Splunk SOAR?
Thank you for your help