Splunk Dev

Python SDK dbxquery results limited to 100k rows using jobs.export- Do I need to paginate streaming results?

joecav
Engager

Running a dbxquery through jobs.export, my results are limited to 100k rows. Do I need to paginate streaming results?

Here's my code:

import splunklib.client as client
import splunklib.results as results

# service = client.connect(...)  # already authenticated

data = {
    'adhoc_search_level': 'fast',
    'search_mode': 'normal',
    'preview': False,
    'max_count': 500000,
    'output_mode': 'json',
    'auto_cancel': 300,
    'count': 0
}

job = service.jobs.export(<dbxquery>, **data)
reader = results.JSONResultsReader(job)
lst = [result for result in reader if isinstance(result, dict)]

This runs correctly except that the results always stop at exactly 100k rows; the full result set should be over 200k.
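The export endpoint streams results and does not take offset-based pagination, so a hard stop at exactly 100,000 rows usually points to a server-side cap (for dbxquery, the DB Connect row limit or limits.conf maxresultrows, which commonly defaults to 100000) rather than to the SDK call. One workaround is to run a normal blocking job and page through its results with count/offset. The sketch below is illustrative, not a confirmed fix: `service` is assumed to be an authenticated splunklib connection, and the page size is arbitrary. The offset helper is pure so it can be sanity-checked on its own.

```python
PAGE_SIZE = 50000  # rows fetched per results call; illustrative value

def page_offsets(total, page_size):
    """Offsets needed to cover `total` rows in pages of `page_size`."""
    return list(range(0, total, page_size))

def fetch_all(service, query):
    """Run `query` as a blocking job and page through all of its results.

    Assumes `service` is an authenticated splunklib.client.Service.
    """
    # Lazy import so the pure helper above works without the SDK installed.
    import splunklib.results as results

    job = service.jobs.create(query, exec_mode='blocking')
    total = int(job['resultCount'])
    rows = []
    for offset in page_offsets(total, PAGE_SIZE):
        # job.results is paginated via count/offset, unlike jobs.export
        stream = job.results(output_mode='json',
                             count=PAGE_SIZE, offset=offset)
        rows.extend(r for r in results.JSONResultsReader(stream)
                    if isinstance(r, dict))
    return rows
```

Note that job.results is itself subject to maxresultrows per call, which is why the page size here stays below the usual 100k default.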
