Hello. I have a playbook that must be the only running instance of that playbook. I can't seem to find any "lock" functionality to facilitate this. Does anyone know if any sort of lock functionality exists out of the box? Thanks in advance!
I know this is old, but I solved this with a custom app to short-circuit a playbook. The app checks a list to see if the identifiers for the current event have already been seen. If the identifier is already in the list, it outputs True and the playbook can be halted and the event handled as a duplicate. If it hasn't been seen before, the app outputs False and the playbook can proceed as the first instance of this event ingested.
This all works because apps do have locks. If the app is set to lock and allow only one concurrent run, then adding this check as the first step in the playbook will break the race condition you are dealing with.
The only significant downside is that this effectively turns your ingest pipeline into a serial stream, which can be problematic at especially high event volumes.
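The app's check-and-record step can be sketched in plain Python. This is not a real Phantom app, just an illustration of why the lock matters: the set, the lock, and the function name are all made up for the example.

```python
import threading

# Shared state standing in for the app's custom list of seen identifiers.
_seen_ids = set()
_seen_lock = threading.Lock()  # mirrors the app's single-concurrent-run lock

def is_duplicate(event_id):
    """Return True if event_id was seen before; otherwise record it as seen.

    Because the membership check and the insert happen under one lock,
    two playbook runs for the same identifier can never both be told
    they are the first instance.
    """
    with _seen_lock:
        if event_id in _seen_ids:
            return True
        _seen_ids.add(event_id)
        return False
```

The first call for a given identifier returns False (proceed as the original), and every later call returns True (halt as a duplicate).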
Howdy! Is it just a single action that needs to be locked or genuinely the entire playbook? Could you provide a few more details?
To cut to the chase: we don't have a "lock playbook" kind of action OOB to limit it to 1 concurrent run, but we might be able to figure out how to accommodate your use case.
The whole playbook does need to be locked. I envisioned a "lock" action at the beginning and an "unlock" action at the end of the playbook. The playbook is deduplicating events. Basically, the playbook checks if the event is a duplicate, and if it is, it updates the event to reflect it is a duplicate. We are using on poll to pull events from Splunk, so it's common for multiple events to be created at the same time. If two events come in at the same time that are duplicates, the "check if duplicate event" action may be run at the same time for both events, prior to either of them updating the event with the result of the check, which causes inaccurate results.
Just in case you ask about deduplicating the data prior to pulling the data into Phantom, I'll note that's not always possible for us.
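The race described above can be reproduced step by step: if both runs perform the "check if duplicate" action before either run updates the event, each concludes it is the first instance. A deterministic illustration (no Phantom APIs, just the check and the update interleaved the bad way):

```python
seen = set()

def check(event_id):
    # First step of the playbook: is this event a duplicate?
    return event_id in seen

def mark(event_id):
    # Later step: update the event with the result of the check.
    seen.add(event_id)

# Two identical events polled from Splunk at the same time.
a_is_dup = check("evt-42")   # run A checks first...
b_is_dup = check("evt-42")   # ...run B checks before A has marked
mark("evt-42")
mark("evt-42")

# Both runs believe they are the first instance -- the inaccurate result.
print(a_is_dup, b_is_dup)
```

With a lock around the check-and-mark pair (as in the locked custom app described earlier in the thread), run B would see run A's mark and correctly report a duplicate.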
We are facing the same issue here.
We decided to deal with the issue of different playbooks running at the same time by adding a small random sleep as a quick fix, but this is ugly.
Therefore, I would like to know if you or anybody has ever managed to implement a lock/semaphore to handle many playbooks running on the same container at the same time?
@djacquens if you are using the Python sleep then I would highly recommend you undo that ASAP, as it will cause platform instability and halt automation for the duration of the sleep. If you are using something else then please ignore the above 😄
First, I would need to understand why playbooks are running on the same container more than once.
An immediate fix is to add a tag to the container when the playbook first runs, then have a decision block at the front that checks whether the tag exists and ends the playbook if it does.
If you are adding artifacts to the container during automation and DON'T want the playbook to run again, make sure you are setting the 'run_automation' flag to false when creating the artifacts.
Hope this helps.
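A minimal sketch of the tag guard, assuming a container represented as a plain dict with a `tags` list. In Phantom you would read and write the container's tags through the platform API; the tag name `dedup-started` is made up for the example.

```python
GUARD_TAG = "dedup-started"  # illustrative tag name

def should_run(container):
    """Decision block at the front of the playbook.

    Returns False if the guard tag is already on the container (end the
    playbook), otherwise adds the tag and returns True (proceed).
    """
    if GUARD_TAG in container["tags"]:
        return False
    container["tags"].append(GUARD_TAG)
    return True

container = {"id": 1, "tags": []}
print(should_run(container))  # True  -> first run proceeds
print(should_run(container))  # False -> second run ends immediately
```

Note that this read-then-write is itself only atomic if the two steps cannot interleave across runs, so on its own it narrows the race window rather than eliminating it; the locked custom app described earlier in the thread is the more robust fix.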