Looking for confirmation that I've found the right setting.
When I run:
| stats count
I see 400,000 events.
When I run the same search through the scrub command, it only returns 50,000.
Looking through the documentation and other posts, it appears the bottleneck is the maxresultrows setting in limits.conf, but I haven't found anything that confirms this. Am I in the right place, or is there another setting I should adjust?
Unfortunately, I do not believe this is a setting you can change. To test, I changed every 50000 value in limits.conf to 50100, and scrub still came back with only 50,000 results.
Additionally, I believe this is a constraint of the command itself: it calls a Python script on the backend that uses the 1.x SDK, which caps transforming searches at 50k results. I believe the 50k limit comes from the SDK and is not configurable anywhere.
Sorry, and good luck! -David
This "Best of Splunk" .conf 2017 talk on the Python SDK v2 lists the 50k limit as a drawback of v1.
Sharing the answer I found after working with the Splunk team to dig this out.
There's no call to the Python SDK, so that doesn't appear to impact anything.
It turns out the answer is the maxresultrows setting in limits.conf, which caps the search at 50,000.
However, there's a second limit in commands.conf that must be raised as well:
maxinputs = integer
* Maximum number of events that can be passed to the command for each invocation.
* This limit cannot exceed the value of maxresultrows in limits.conf.
* 0 for no limit.
* Defaults to 50000.
The smaller of maxresultrows and maxinputs is the limit that actually applies.
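As a sketch of the fix, assuming the command's stanza in commands.conf is named `scrub` (check $SPLUNK_HOME/etc/system/default/commands.conf for the actual stanza name), raising both limits high enough to cover the 400,000 events might look like this in the local config directories:

```
# $SPLUNK_HOME/etc/system/local/limits.conf
[searchresults]
# Default is 50000; raise to cover the full event count.
maxresultrows = 400000
```

```
# $SPLUNK_HOME/etc/system/local/commands.conf
[scrub]
# Must not exceed maxresultrows in limits.conf; 0 means no limit.
maxinputs = 400000
```

A Splunk restart is typically needed for the change to take effect, and since maxinputs cannot exceed maxresultrows, the limits.conf change has to accompany the commands.conf one.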
Hopefully this saves someone a few minutes of clicking.