Upper Limit for REST API limits.conf - maxresultrows

Chris_Olson
Splunk Employee

Is there an upper limit on this value? In certain use cases, there might be a need to return a very large number of results.
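For reference, the setting in question lives in the [restapi] stanza of limits.conf; the value shown below is the documented default:

```ini
# limits.conf (sketch): the REST API paging cap under discussion
[restapi]
# Maximum rows a single call to a search job's results endpoint returns
maxresultrows = 50000
```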

1 Solution

gkanapathy
Splunk Employee

I would not raise the limit. Instead, you can simply make multiple calls to the GET endpoint, in blocks no larger than the default maxresultrows limit of 50,000, until you have exhausted the number of events returned; i.e., the first call uses offset=0&count=50000, the next uses offset=50000&count=50000, then offset=100000&count=50000, etc. Your program that calls the endpoint can output each block as it gets it.
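In Python, the paging loop might look roughly like this (a sketch only: the host, port, token, and job id are placeholders, and it assumes the requests library and an already-dispatched job):

```python
# A rough sketch of the paging loop described above; BASE, SID, and the
# token are placeholders for your own deployment.
import requests

BASE = "https://localhost:8089"          # splunkd management port (assumed)
SID = "1234567890.12345"                 # hypothetical search job id
HEADERS = {"Authorization": "Bearer <your-token>"}
PAGE = 50000                             # stay at or below maxresultrows

offset = 0
while True:
    resp = requests.get(
        f"{BASE}/services/search/jobs/{SID}/results",
        headers=HEADERS,
        params={"output_mode": "json", "offset": offset, "count": PAGE},
        verify=False,                    # only for self-signed dev certs
    )
    resp.raise_for_status()
    rows = resp.json().get("results", [])
    if not rows:
        break                            # no rows left: results exhausted
    for row in rows:
        print(row)                       # emit each block as it arrives
    offset += PAGE
```

Because each block is written out before the next request, memory use stays bounded by the page size.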


MCW
Loves-to-Learn

Hi experts,

After submitting a search query via the REST API, is there a way to check the number of events in the search results for the job id?

Without that, I won't know how many GETs (each limited to 50K results) I need to make, which is something I run into as well.

Alternatively, is there an argument I can use in the HTTP GET to Splunk to override the 50K limit?

Thanks,

MCW
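For what it's worth, the job's status endpoint does report these counts. A minimal sketch, with the same placeholder connection details as above:

```python
# A sketch of reading a job's progress and counts from its status endpoint;
# BASE, SID, and the token are placeholders.
import requests

BASE = "https://localhost:8089"
SID = "1234567890.12345"
HEADERS = {"Authorization": "Bearer <your-token>"}

resp = requests.get(
    f"{BASE}/services/search/jobs/{SID}",
    headers=HEADERS,
    params={"output_mode": "json"},
    verify=False,
)
resp.raise_for_status()
content = resp.json()["entry"][0]["content"]
print(content["dispatchState"])  # e.g. RUNNING or DONE
print(content["resultCount"])    # results available to page through
print(content["eventCount"])     # events matched so far
```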


andras_kerekes
Explorer

One problem with the approach above is that Splunk will search over all matching events; if you have a few million events and only want the first 200,000, the search can take quite a long time (depending, of course, on the machine it runs on).

You need to add | head n with an appropriate n, e.g. 200000, so that Splunk returns as soon as it has found the first 200,000 events. A further optimization would be to calculate n dynamically, e.g. 50k, 100k, 150k, 200k on each successive iteration.
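As a sketch of dispatching such a search (the query string and n are hypothetical, and the connection details are placeholders as above):

```python
# A sketch of dispatching a search with | head appended, so the job can
# finalize as soon as n results are found; all values are placeholders.
import requests

resp = requests.post(
    "https://localhost:8089/services/search/jobs",
    headers={"Authorization": "Bearer <your-token>"},
    data={
        "search": "search index=main error | head 200000",  # hypothetical query
        "output_mode": "json",
    },
    verify=False,
)
resp.raise_for_status()
print(resp.json()["sid"])  # job id to poll and page against
```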
gkanapathy
Splunk Employee

This is incorrect. Splunk lets you query results and events from the REST API before the search has completed. You can see this in effect whenever you run a large search from the UI (which itself uses the REST API). By trying to engineer smaller searches yourself (which is what you would do if you were querying, say, MySQL or a traditional RDBMS, and which is unnecessary in Splunk), you are complicating your code, putting extra load on the server, and possibly preventing your query from executing as an effective map-reduce.
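To illustrate the point, here is a sketch of reading events out of a job while it is still running, using the events endpoint rather than dispatching several smaller searches (placeholders as above; this suits a plain, non-transforming search):

```python
# A sketch of streaming events from a still-running job via the events
# endpoint; BASE, SID, and the token are placeholders.
import time
import requests

BASE = "https://localhost:8089"
SID = "1234567890.12345"
HEADERS = {"Authorization": "Bearer <your-token>"}
PAGE = 50000

offset = 0
while True:
    status = requests.get(
        f"{BASE}/services/search/jobs/{SID}",
        headers=HEADERS, params={"output_mode": "json"}, verify=False,
    ).json()["entry"][0]["content"]
    # isDone may arrive as a bool or a string depending on version
    done = str(status["isDone"]).lower() in ("1", "true")

    page = requests.get(
        f"{BASE}/services/search/jobs/{SID}/events",
        headers=HEADERS,
        params={"output_mode": "json", "offset": offset, "count": PAGE},
        verify=False,
    ).json().get("results", [])
    for row in page:
        print(row)             # emit whatever is available so far
    offset += len(page)

    if done and not page:
        break                  # job finished and fully drained
    if not page:
        time.sleep(2)          # nothing new yet; poll again shortly
```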