Splunk SOAR (f.k.a. Phantom)

Are there any locking or concurrency guarantees when playbooks are operating on a container?

buzz_gt
New Member

Question: Are there any locking or concurrency guarantees when playbooks are operating on a container?

Issue I am trying to solve: I have a playbook (call it sub-playbook) that is called by two different parent playbooks (parent1 and parent2). parent1 sets param1 to True using save_object, keyed to containerN (phantom.save_object(container_id=container['id'], key='param1', value={'param1': True})), and then calls sub-playbook. parent2 sets param1 to False (also keyed to containerN) and calls sub-playbook. sub-playbook reads the value of param1 and acts differently depending on whether it's True or False. A simplified sketch of the pattern is below.
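
For reference, roughly what the three playbooks do (simplified; the repo name and the phantom.get_object read-back are my shorthand, and I'm assuming the saved value comes back under a 'value' key):

# parent1 (simplified): save the flag keyed to the current container, then call sub-playbook
phantom.save_object(key='param1', value={'param1': True}, container_id=container['id'])
phantom.playbook('local/sub-playbook', container=container)

# parent2 does the same, but with False
phantom.save_object(key='param1', value={'param1': False}, container_id=container['id'])
phantom.playbook('local/sub-playbook', container=container)

# sub-playbook: read the flag back and branch on it
# (assuming get_object returns a list of saved objects whose 'value' field holds what was saved)
saved = phantom.get_object(key='param1', container_id=container['id'])
param1 = saved[0]['value']['param1'] if saved else False
if param1:
    pass  # parent1 path
else:
    pass  # parent2 path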

sub-playbook is called synchronously by both parent1 and parent2.

As far as I know there are no concurrency guarantees (e.g. parent1 -> sub-playbook could get interrupted after setting param1 but before sub-playbook reads it, so parent2 could overwrite param1 with False in the meantime and parent1's sub-playbook would then see the wrong value). That would be a race condition. Is that accurate, or does Phantom lock all access to a container to the playbook (and sub-playbooks) that are operating on it?

Assuming that's correct and there are no concurrency guarantees, are there any built-in patterns that people use to work around this? I can envision using some external capability (Redis, maybe a custom app that uses SQLite, etc.) to add my own locking, along the lines of the sketch below, but I don't want to reinvent the wheel if it's not necessary.
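
For example, something like this hypothetical per-container lock using redis-py (the host, key naming, and timeouts are placeholders, and this is entirely outside anything SOAR provides):

import time
import redis

# Hypothetical external lock store; host/port and key prefix are placeholders.
r = redis.Redis(host='redis.example.internal', port=6379)

def acquire_container_lock(container_id, timeout=30):
    # One lock per container, so a parent's save_object and the sub-playbook's
    # read of param1 can't interleave with the other parent.
    lock_key = 'soar:container_lock:{}'.format(container_id)
    deadline = time.time() + timeout
    while time.time() < deadline:
        # SET NX with an expiry so a crashed playbook can't hold the lock forever
        if r.set(lock_key, 'locked', nx=True, ex=timeout):
            return lock_key
        time.sleep(0.5)
    raise RuntimeError('could not lock container {}'.format(container_id))

def release_container_lock(lock_key):
    r.delete(lock_key)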

Thanks!
