I am using a custom Splunk search command and I have discovered that it behaves unpredictably when there are more than about 4000 events.
Basically, my code is:
import splunk.Intersplunk as si

# read the results into a variable
(results, dummyresults, settings) = si.getOrganizedResults()
for i in range(len(results)):
    ...  # sliding-window calculation over results[i]
# return the results back to Splunk
si.outputResults(results)
My issue is that len(results) is wrong:
When I call my custom command:
xxxx | head 4000 | custom_command
I get 4000 events displayed, as expected.
But len(results) is 3779, and I see my index i run over those first 3779 events.
Then, for the remaining 221 events, len(results) is 221 and my index i is reset to 0.
As a result, all my calculations are wrong, because I am computing statistics over a sliding window, and that calculation depends on the index and len(results).
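For context, here is a minimal sketch of the kind of sliding-window statistic I mean (plain Python; the window size and values are illustrative, and the Splunk field extraction is omitted):

```python
# Toy sliding-window mean; assumes `values` holds the FULL result set.
# If the script only receives a chunk, len(values) is wrong and every
# window position is computed against the wrong total.
def sliding_mean(values, window):
    out = []
    for i in range(len(values) - window + 1):
        out.append(sum(values[i:i + window]) / window)
    return out

print(sliding_mean([1, 2, 3, 4, 5], 3))  # -> [2.0, 3.0, 4.0]
```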
Do you know how to retrieve the correct length of results so that I can walk through all of them?
Thanks for your reply.
Actually, I have only one indexer.
I found out what is happening.
In commands.conf there is a parameter, streaming, that was set to true.
In that case, Splunk splits the results array into smaller arrays.
I have set streaming to false and it solved my problem: I now get the correct calculation.
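For reference, the setting lives in the command's stanza in commands.conf (the stanza and script names below are placeholders for your own command):

```
[custom_command]
filename = custom_command.py
streaming = false
```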
This is still quite strange, though: if I log len(results) to a file, I see that my custom command is called several times, and the logged len(results) is never the full total. Yet I get the expected results...
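A toy illustration (not Splunk code) of why the logged lengths looked wrong: when the command runs in streaming mode, the script is invoked once per chunk, so each invocation sees only a partial results list, and len(results) reports the chunk size rather than the total. The chunk sizes below are the ones reported in this thread:

```python
# Simulate chunked invocations of a custom command script.
def run_command(results):
    # each invocation only sees its own chunk
    return len(results)

events = list(range(4000))
chunks = [events[:3779], events[3779:]]    # sizes Splunk happened to use here
print([run_command(c) for c in chunks])    # -> [3779, 221]
```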
I'm guessing you may be running into some limitation, be it RAM or Splunk limits.
If you run your command and then view the Job Inspector, it will show you how many events were returned by each command in your pipeline. There is also search.log, etc., which may give you more clues.