When using smartstreamingcommand from the package in your updated answer, we avoid the error at the subsearch limit, but we hit a new problem:
Sometimes only about 50% of the records are processed. UPDATE: this appears to always happen with the v2 (chunked) generating command.
addinfo seems to record the number of records we get out; is this a lead?
At other times all records are processed. UPDATE: this appears to happen with the v1 generating command.
Have you seen something like this?
|mongoreadbeta testdata |table * |echo |table *
It can easily process 2 million records; however, we sometimes see records dropped, and it appears to always break on a 100K boundary.
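To reason about whether the drop lines up with chunk boundaries, here is a minimal stdlib-only sketch of a pass-through command that counts records per chunk, the way a v2 command's stream() method sees them. The 50,000-record chunk size and the helper names are assumptions for illustration, not part of the SDK:

```python
# Minimal sketch: chunk a record stream and count what flows through a
# pass-through ("echo"-style) stage. In a real v2 command this counting
# would live inside StreamingCommand.stream(); here it is plain Python
# so the arithmetic can be checked stand-alone.

CHUNK_SIZE = 50_000  # assumed chunk size; Splunk's actual chunking may differ

def chunked(records, size=CHUNK_SIZE):
    """Yield lists of at most `size` records, mimicking chunked dispatch."""
    chunk = []
    for record in records:
        chunk.append(record)
        if len(chunk) == size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk

def count_through(records):
    """Pass every record through; return (invocations, in_count, out_count)."""
    invocations = in_count = out_count = 0
    for chunk in chunked(records):
        invocations += 1
        in_count += len(chunk)
        out_count += len(chunk)  # echo behavior: emit everything received
    return invocations, in_count, out_count

if __name__ == "__main__":
    # A lossless pass over 1.1M records: 22 chunks of 50K, equal in/out.
    print(count_through(iter(range(1_100_000))))
```

If the real command were lossless, the job inspector's input and output counts for command.echo would match the same way.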
Ex. 1: 1,100,000 records, chunked (v2) generating command

Duration (s)  Component        Invocations  Input count  Output count
0.00          command.addinfo  13           600,000      600,000
47.69         command.echo     13           1,100,000    600,000
Ex. 2: 1,041,865 records after the search filter

Duration (s)  Component              Invocations  Input count  Output count
0.00          command.addinfo        12           641,865      641,865
56.21         command.echo           12           1,041,865    641,865
7.62          command.mongoreadbeta  12           -            1,100,000
0.28          command.search         12           1,100,000    1,041,865
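The gap at command.echo in both examples is an exact multiple of 100,000 (1,100,000 − 600,000 = 500,000; 1,041,865 − 641,865 = 400,000), which is consistent with the 100K-boundary observation. A quick check of that arithmetic (counts taken from the job profiles above):

```python
# Verify that the records lost inside command.echo are an exact multiple
# of 100,000 in both profiled examples.

def dropped(echo_in, echo_out):
    """Records lost inside command.echo: input count minus output count."""
    return echo_in - echo_out

examples = {
    "Ex. 1": (1_100_000, 600_000),
    "Ex. 2": (1_041_865, 641_865),
}

for name, (n_in, n_out) in examples.items():
    lost = dropped(n_in, n_out)
    print(name, lost, "multiple of 100K:", lost % 100_000 == 0)
```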
When I run it with a non-chunked generating command, it works:
Ex. 3: 1,100,000 rows, v1 generating command

|mongoread testdata |echo |table *

Duration (s)  Component          Invocations  Input count  Output count
114.51        command.echo       23           1,100,000    1,100,000
26.14         command.mongoread  1            -            1,100,000
0.91          command.table      1            1,100,000    2,200,000
Version: 7.2.7
Build: f817a93effc2
We are using the new echo custom command implemented in this change:
https://github.com/TiVo/splunk-sdk-python/commit/5188f7d709cadd80e786692b371a64c4ae0991d2
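For reference, the core of such an echo command reduces to a pass-through generator. A hedged sketch of just that logic (the splunklib scaffolding — the Configuration decorator and dispatch() boilerplate — is omitted here so the snippet runs stand-alone; `echo_stream` is an illustrative name, not the command's actual implementation):

```python
def echo_stream(records):
    """Core of an 'echo' streaming command: yield each record unchanged.

    In a real splunklib v2 command this body would live in
    StreamingCommand.stream(self, records); nothing is filtered or
    transformed, so output count should always equal input count.
    """
    for record in records:
        yield record
```

If the protocol delivered every record, the job inspector would show matching input and output counts for this stage, which is what Ex. 3 (v1) shows and Ex. 1/2 (v2) do not.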