Splunk Phantom

For-loops within playbooks, and variables in a single playbook and across playbooks

drew19
Path Finder

Hi,

what is the best way to:

  • keep a variable within a single playbook (e.g. a counter needed only in one run of a playbook, which I want to increment following particular logic)?
  • keep a variable across playbooks (e.g. a counter that I need to update across several runs of a playbook)? Currently I store these variables in the rows of Custom Lists, but this forces me to manually reset them to 0 when needed;
  • create for-loops to iterate over some actions/blocks within a playbook?

Thank you in advance

1 Solution

phanTom
SplunkTrust

@drew19 without knowing your use case, I would say there may be a better way to meet your requirement.

However, in the meantime, I would definitely look at the phantom.save_object() and phantom.get_object() API calls. You can make the value relative only to a specific container and/or playbook, so you wouldn't need to reset it every time: a new context (container/playbook) starts it at Null again.

https://docs.splunk.com/Documentation/Phantom/4.10/PlaybookAPI/DataManagementAPI#save_object 
https://docs.splunk.com/Documentation/Phantom/4.10/PlaybookAPI/DataManagementAPI#get_object 
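To make the counter idea concrete, here is a minimal sketch of the per-container pattern. The stub class below only simulates the save_object/get_object semantics from the linked docs for illustration; in a real playbook you would call the phantom module itself, and the exact return shape may differ.

```python
# Illustrative stub that mimics container-scoped object storage. In a real
# playbook you would use the phantom module, not this class.
class _PhantomStub:
    def __init__(self):
        self._store = {}

    def save_object(self, key, value, container_id=None):
        self._store[(key, container_id)] = value

    def get_object(self, key, container_id=None):
        # The real API returns a list of matching objects carrying a value.
        if (key, container_id) in self._store:
            return [{"value": self._store[(key, container_id)]}]
        return []

phantom = _PhantomStub()

def increment_counter(container_id):
    """Read the counter scoped to this container, add one, persist it back."""
    found = phantom.get_object(key="my_counter", container_id=container_id)
    count = found[0]["value"] if found else 0  # a new container starts fresh
    phantom.save_object(key="my_counter", value=count + 1,
                        container_id=container_id)
    return count + 1
```

Because the counter is keyed by container, there is nothing to reset manually: each new container simply starts from zero.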


phanTom
SplunkTrust

@drew19 

1. There are two custom function types (Legacy and New). Legacy custom functions are bound to a playbook but have ALL APIs available. The new, Git-capable ones are limited in which APIs they can use.
2. As with all things Phantom, looping in this way CAN be done, but it certainly requires advanced knowledge of Phantom. Could you delete the folder and re-create it? I am not saying it can't be done; it's a question of whether it should be done. Of course, if you could remove the limit you wouldn't need to loop at all, but how long would that take to sort out? Likely a while, so you may have to add a tactical solution, as you are doing now. You don't have to loop, of course: you could just unroll the chain as query --> decision (count >1) --> delete --> query --> decision (count >1) --> delete, but that could get messy too.

I would first look into whether you can remove the 1000 limit from the app, and only then look at doing something else. But that is just my opinion. Happy Phantoming!!!

drew19
Path Finder

@phanTom 

Thank you, I understand everything, but I think that if someone put a limit of 1000 there, there should be a reason, so I didn't even consider removing that limit... If I did, it is very likely that I would shorten and simplify my use case but break all the other actions of the app.

I understand that for-loops cannot be done easily, generally speaking, but I prefer to go the longer way rather than remove "limits" from software not written by me.

Thank you for your help!


phanTom
SplunkTrust

@drew19 

1. Use the Legacy Custom Function (no longer going to be deprecated, AFAIK), as the new Custom Functions are limited to certain APIs only. I have used a Legacy CF many times for this capability.

2. I would need more information about what you are trying to achieve, and why you would need to run things through the same action multiple times in the same playbook. You may find you are missing something intrinsic about how Phantom works that could make your life a lot easier 😄


drew19
Path Finder

@phanTom 

  1. Sorry, but are you saying that the documentation states it cannot be used within a custom function, yet you use it there and it works?
  2. I am trying to delete all the emails contained in a specific folder of a specific mailbox (far more than 1000; 1000 is the undocumented maximum number of emails that the EWS on-premise app can retrieve from a mailbox folder, and indeed I asked another question on this community about that topic) in one single container. Given that upper limit of 1000, I am forced to cycle through the needed actions (run query, delete email) until the run query returns 0 emails. That is how these questions came up, but now I am generally interested in how to implement "for loops" in a single playbook, as they would be very useful for other use cases.
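For what it's worth, the cycle described in point 2 can be sketched in plain Python. run_query and delete_emails are hypothetical stand-ins for the EWS app actions, and the iteration cap is a guard against the infinite-loop risk that comes with looping blocks in a playbook:

```python
MAX_ITERATIONS = 50  # safety cap so a stuck query can never spin forever

def purge_folder(run_query, delete_emails, max_iterations=MAX_ITERATIONS):
    """Repeat query -> delete until the query returns no emails."""
    for _ in range(max_iterations):
        emails = run_query()        # EWS app returns at most 1000 per call
        if not emails:
            return True             # folder is empty: loop condition met
        delete_emails(emails)
    return False                    # cap reached: alert instead of spinning

# Simulated mailbox of 2500 emails to exercise the loop logic.
mailbox = list(range(2500))

def run_query():
    # Stand-in for the EWS "run query" action, capped at 1000 results.
    return mailbox[:1000]

def delete_emails(batch):
    # Stand-in for the "delete email" action.
    del mailbox[:len(batch)]

emptied = purge_folder(run_query, delete_emails)
```

In a playbook the same shape would be expressed as decision --> action --> back to decision, with a persisted iteration counter playing the role of the cap.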

Thank you

 


phanTom
SplunkTrust

Also, technically the item can be retrieved from playbooks run on other containers if you stipulate the relevant container_id value in the context of the get_object() call. But since a playbook runs against a single container each time, it's much easier to just pass in the container_id being processed, using container['id'] for both saving and retrieving the value(s) with save_object/get_object.

drew19
Path Finder

@phanTom thank you, now I get the point.

 

Going back to the last residual questions (and then I will stop annoying you :D):

  • You said that I need a custom function to save an object, but in the documentation you sent me (Data management automation API - Splunk Documentation) it is written: "The save_object API is not supported from within a custom function". So how am I supposed to use it? Where can I save an object in the code of a playbook?
  • Going back to the for-loop: in your experience, what is the best way to implement it?

Thank you

phanTom
SplunkTrust

@drew19 as per the docs link, the object stays available forever unless you use the clear_object call or auto_delete is set to True in the API call. If you set auto_delete to True then you MUST provide a container_id, as this will be used to remove the object once that container is closed:

Defaults to False. If set to True, the data is deleted when the container is closed. You can use the clear_object parameter to delete the data. If the parameter is set True, you must provide the container ID.

There is no scope issue with saved objects: unless you set auto_delete to True, the object persists forever, or until you run clear_object using the context it was saved under.
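As an illustration of that lifecycle, here is a small in-memory simulation. It is a stub, not the real API; in Phantom the deletion actually happens when the platform closes the container:

```python
# In-memory stand-in for the object store, mimicking the auto_delete
# behaviour quoted from the docs above.
_store = {}

def save_object(key, value, container_id=None, auto_delete=False):
    if auto_delete and container_id is None:
        # Per the docs: auto_delete=True requires a container ID.
        raise ValueError("auto_delete=True requires container_id")
    _store[(key, container_id)] = {"value": value, "auto_delete": auto_delete}

def get_object(key, container_id=None):
    hit = _store.get((key, container_id))
    return [{"value": hit["value"]}] if hit else []

def close_container(container_id):
    """Simulate container closure: auto_delete objects are removed."""
    stale = [k for k, v in _store.items()
             if k[1] == container_id and v["auto_delete"]]
    for k in stale:
        del _store[k]

save_object("tmp_counter", 7, container_id=5, auto_delete=True)
before_close = get_object("tmp_counter", container_id=5)
close_container(5)
after_close = get_object("tmp_counter", container_id=5)
```

Without auto_delete=True, nothing in this lifecycle removes the object; it stays until clear_object is called with the same context.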

phanTom
SplunkTrust

@drew19 both container_id and playbook_name are optional and are only context, so as long as you use the correct value in get_object when you retrieve the data, it doesn't matter where you retrieve it from. Using the container ID is easier, as it's available as a variable in the playbook to pass to save_object (container['id']). You can still save and retrieve by playbook name, but you would need another API call to get the name and pass it into the save_object call.


drew19
Path Finder

@phanTom I still don't precisely understand.

The behaviour is clear to me if only the container ID is specified as context (the object's scope will be within that container ID and its "life" will be more or less the same as the container's), but what happens to the object if I specify only a playbook name? What will the scope of the object and its "life" be? Will it live across all runs of that playbook?


phanTom
SplunkTrust
SplunkTrust

@drew19 a lot to unpack there, so I will try:
1. Yes, this would require a custom function to both set and get.

2. The context is purely for retrieval of the value using get_object. If you set both a container ID and a playbook name, then both need to be defined when you use get_object in order to retrieve the value. In your case I would leave it at just a container, as all the playbooks will run against the same container, allowing you to retrieve the value in any playbook run against this container.

3. The value persists until the container it is attached to is closed.

4. I would highly recommend NOT looping in a playbook the way you are describing, as you could cause infinite loops and/or race conditions. Without knowing the full use case and seeing what you have done already, I can't properly comment on another way to do the same thing. But if you can find a way to not loop in a playbook (as in decision_1 -> action -> decision_1), that would be better long-term. I am sure you still get a warning message in the VPE about this when you try?
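Point 2 can be illustrated with a quick simulation of the context matching. This stub only mimics the behaviour described; the authoritative semantics are in the Data Management API docs:

```python
# Simulated store: the exact save context (container_id, playbook_name)
# must be matched by get_object, as point 2 describes.
_store = {}

def save_object(key, value, container_id=None, playbook_name=None):
    _store[(key, container_id, playbook_name)] = value

def get_object(key, container_id=None, playbook_name=None):
    ctx = (key, container_id, playbook_name)
    return [{"value": _store[ctx]}] if ctx in _store else []

# Saved with both a container ID and a playbook name ...
save_object("counter", 3, container_id=7, playbook_name="cleanup")

# ... so both must be supplied to retrieve it:
full_match = get_object("counter", container_id=7, playbook_name="cleanup")
partial_match = get_object("counter", container_id=7)  # playbook_name omitted
```

This is why saving against only the container ID is the simpler choice here: every playbook run on that container can retrieve the value with a single piece of context.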


drew19
Path Finder

@phanTom ,

what if I specify only a playbook as context? Do I achieve the "cross-container" variable? If yes, how can this object be reset to its original value?

As for for-loops, I am just designing some use cases and have not developed them yet. I would need a for loop (i.e. some actions repeated on the same container until a condition is met) and I was guessing how to achieve it (if possible).

 

Thank you



drew19
Path Finder

Dear Tom,

thank you. So, just to recap and check whether I correctly understood:

  • I can save a variable with the save_object() API, and this can be done only through a playbook's custom functions, right?
  • I can save a variable with save_object() and set its context to be a playbook or a container, right?
    • If the context is a container, that is fine, since every playbook run on a new container will deal with a new object (optionally deleted by setting auto_delete=True). But what happens when saving the object in a playbook context? Is it maintained across playbook runs until explicitly deleted or reset?
  • As for the for-loop, I can trivially use a Decision block followed by some actions and link the last action back to the Decision block, so that the desired actions run until the condition in the decision block is met, right?

Thank you again

 
