To support large datasets (1M+ rows) with custom search commands and chunked=true,
I implemented SmartStreamingCommand per: How come custom search commands (CSC) SCPv2 cannot handle large event sets
To resolve an issue where an SCPv2 generating command feeding an SCPv2 streaming command was dropping records:
I removed the partial=True flush in internals.py so that SmartStreamingCommand would work consistently.
I deleted this from internals.py for use in chunked commands:
if self._record_count >= self._maxresultrows:
    self.flush(partial=True)
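For context on what that check does, here is a minimal, self-contained sketch of the flush logic (class and attribute names are modeled loosely on splunklib's internals.py, not copied from it):

```python
class RecordWriterSketch:
    """Toy model of a chunked-protocol record writer (illustration only,
    not the actual splunklib class)."""

    def __init__(self, maxresultrows=50000):
        self._maxresultrows = maxresultrows
        self._record_count = 0
        self.flushes = []  # log of (kind, record_count), for illustration

    def flush(self, partial=False):
        # In real code this would serialize buffered records back to splunkd.
        self.flushes.append(("partial" if partial else "final", self._record_count))
        self._record_count = 0

    def write_record(self, record):
        self._record_count += 1
        # The check removed by the patch: without it, records accumulate
        # until one final flush instead of being emitted in partial chunks.
        if self._record_count >= self._maxresultrows:
            self.flush(partial=True)
```

With maxresultrows=2 and five records, this model emits two partial flushes of 2 records each plus a final flush of 1; deleting the check would instead produce a single final flush of all 5.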
What are the ramifications of this change?
I have attempted to handle this issue in a more recent change to the original patch.
Let me know if that resolves the issues you were seeing. Good luck! 🙂
When I remove the partial=True flush, I no longer need SmartStreamingCommand to process records. All records sent by Splunk are available to the custom command and are returned to Splunk for the next SPL stage. When you have a moment, could you try this approach and see if I am missing something?
SmartStreamingCommand only worked if the SPL pipeline feeding it included no other SCPv2 custom commands.
Sorry, I didn't originally see your follow-up here.
I think that the underlying bug I am working around in the patch referenced above produces different behaviors depending on many factors, e.g.:
- the custom command's input-to-output event ratio (does it turn each input event into multiple output events?),
- the Splunk architecture (are many indexers serving the query, or just one?),
- the SPL structure (does the pipeline start from a generating command or from a real search?).
Depending on these factors, the bug may or may not be triggered.
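To illustrate the input-to-output ratio factor, here is a hypothetical 1-to-N streaming command body (not the command from this thread): each input event becomes several output events, so the output side fills far faster than the input side and can hit flush thresholds at different points.

```python
def expanding_stream(records, fan_out=3):
    """Hypothetical stream() body with an input:output event ratio of 1:fan_out.
    Illustration only; names are assumptions, not from the thread's command."""
    for record in records:
        for i in range(fan_out):
            out = dict(record)          # copy the input event
            out["fan_out_index"] = i    # mark which expansion this copy is
            yield out
```

A command like this produces fan_out times as many output records as input records, which is one of the conditions under which buffering behavior diverges.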
I didn't completely follow your request here, but I do not believe that simply adding or removing a flush() call at the right time will work around the current Splunk daemon bug. Instead, the custom command must carefully manage the timing of how it collects and returns information to its Splunk parent process to avoid the bug...
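One shape that timing management could take (purely my assumption for illustration, not the actual patch) is to fully consume the input chunk before emitting anything, so reads from and writes to the parent process never interleave:

```python
def buffered_stream(records):
    """Sketch of a collect-then-emit stream body: drain all input first,
    then yield output. (An assumption for illustration, not the patch.)"""
    buffered = [dict(r) for r in records]  # consume the whole input chunk
    for record in buffered:
        record["processed"] = True         # placeholder per-record work
        yield record
```

The trade-off is memory: buffering the whole chunk is only viable when the chunk fits comfortably in the command process's memory.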