Splunk Dev

How to reduce custom search command result chunk size?

joepjisc
Path Finder

We are developing a custom search command to create events. It is a generating command using version 2 of the chunked protocol with type='streaming'. Because the source is quite slow, we'd like to send smaller chunks of results back to Splunk than the default 50,000, e.g. chunks of 1,000 events, so that users can view the partial results sooner.

We've tried various approaches, including keeping an incrementing integer and calling self.flush() whenever it is divisible by 1,000, but that caused a buffer-full error.

Any suggestions would be really appreciated.

...
@Configuration(type='streaming')
class OurSearchCommand(GeneratingCommand):
    ...
    def generate(self):
        for item in OurGenerator():
            # Use the source record's timestamp as the event's _time
            item['_time'] = item['timestamp']
            yield item
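
For reference, the flush-based attempt described above looked roughly like this inside generate() (a sketch; the counter name and the modulus check are illustrative):

def generate(self):
    count = 0
    for item in OurGenerator():
        item['_time'] = item['timestamp']
        yield item
        count += 1
        if count % 1000 == 0:
            # Try to push the buffered records out to Splunk early;
            # this is the call that raised the buffer-full error
            self.flush()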

1 Solution

DexterMarkley
Engager
I know this is an old question, but for anyone else looking for the answer: you need to override the record_writer's row limit on the command. This is working for me, but I am not sure whether there are any other implications of doing this.
self._record_writer._maxresultrows = 1000 
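
In context, that line can sit at the top of generate() in the command's Python script under your app's bin directory. A minimal sketch, assuming the generating command from the question (note that _record_writer and _maxresultrows are internal SDK attributes, so they may change between splunklib versions):

import sys
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration

@Configuration(type='streaming')
class OurSearchCommand(GeneratingCommand):

    def generate(self):
        # Internal SDK attribute: the number of records buffered per output
        # chunk (defaults to 50,000). Lowering it makes partial results
        # reach the search UI sooner.
        self._record_writer._maxresultrows = 1000
        for item in OurGenerator():
            item['_time'] = item['timestamp']
            yield item

dispatch(OurSearchCommand, sys.argv, sys.stdin, sys.stdout, __name__)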

analyst
Loves-to-Learn Everything

@DexterMarkley could you provide the location of the file that needs to be changed?
