Python SDK dbxquery results limited to 100k rows using jobs.export: do I need to paginate streaming results?


When I run a dbxquery through jobs.export, my results are limited to 100k rows. Do I need to paginate streaming results?

Here's my code:



import splunklib.client as client
import splunklib.results as results

# service is an authenticated splunklib.client.Service connection
data = {
    'adhoc_search_level': 'fast',
    'search_mode': 'normal',
    'preview': False,
    'max_count': 500000,
    'output_mode': 'json',
    'auto_cancel': 300,
    'count': 0,
}

job = service.jobs.export(<dbxquery>, **data)  # <dbxquery> = the dbxquery search string
reader = results.JSONResultsReader(job)
lst = [result for result in reader if isinstance(result, dict)]




This runs correctly except that the results always stop at 100k rows; there should be over 200k.
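If paging does turn out to be necessary, one common workaround is to run a blocking (non-export) job and read it back in slices via offset/count. Below is a minimal sketch of that paging loop; fetch_page is a hypothetical wrapper you would write around job.results(offset=..., count=...) plus results.JSONResultsReader, so the loop itself can be shown (and tested) without a live Splunk connection:

```python
def fetch_all(fetch_page, page_size=50000):
    """Collect every result by paging through a finished search job.

    fetch_page(offset, count) is assumed to return a list of result
    dicts for that slice -- e.g. a wrapper that calls
    job.results(output_mode='json', offset=offset, count=count)
    and filters JSONResultsReader output to dicts.
    """
    rows = []
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        rows.extend(page)
        if len(page) < page_size:
            # A short (or empty) page means the job has no more rows.
            return rows
        offset += page_size
```

The loop stops on the first short page, so it never depends on knowing the total row count up front.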
