Python SDK dbxquery results limited to 100k rows using jobs.export. Do I need to paginate streaming results?

joecav
Engager

When running a dbxquery through jobs.export, my results are limited to 100k rows. Do I need to paginate streaming results?

Here's my code:

import splunklib.client as client
import splunklib.results as results

# service is assumed to be an already-authenticated
# splunklib.client.Service instance (e.g. from client.connect(...)).

data = {
    'adhoc_search_level': 'fast',
    'search_mode': 'normal',
    'preview': False,
    'max_count': 500000,
    'output_mode': 'json',
    'auto_cancel': 300,
    'count': 0,
}

# "<dbxquery>" is a placeholder for the actual dbxquery search string.
job = service.jobs.export("<dbxquery>", **data)
reader = results.JSONResultsReader(job)
lst = [result for result in reader if isinstance(result, dict)]

This runs correctly, except that the results always stop at 100k rows; the full result set should be over 200k.
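
For reference, if pagination does turn out to be necessary, here is a minimal sketch of one way to page through results using a regular (non-export) job and the count/offset parameters of the results endpoint. This assumes the same authenticated service as above; "<dbxquery>" and the page size are placeholders, and this is an illustration of the paging pattern rather than a confirmed fix for the 100k cap.

import splunklib.results as results

# Create a regular search job and block until it completes.
job = service.jobs.create("<dbxquery>", exec_mode='blocking', max_count=500000)

rows = []
offset = 0
page_size = 50000  # arbitrary page size for illustration

while True:
    # Fetch one page of finished results using count/offset paging.
    stream = job.results(output_mode='json', count=page_size, offset=offset)
    page = [r for r in results.JSONResultsReader(stream) if isinstance(r, dict)]
    if not page:
        break
    rows.extend(page)
    offset += len(page)

job.cancel()  # clean up the finished job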
