When running an inline search, the results limit is high because we have the following in limits.conf:

[searchresults]
maxresultrows = 50000000
# default is 50000
However, when we schedule the same search and direct its output to a lookup table via
outputlookup <filename>, the lookup file contains only 50K lines.
Can we change this value?
I don't understand these terms. What does "search jobs server" mean? What does "batch head" mean?
The outputlookup command runs on the search head (or standalone Splunk instance) where the search is executed.
However, other commands ALSO have a maxresultrows setting, such as the stats command. Using any of these commands before outputlookup constrains the overall number of results.
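As a sketch of what this looks like in limits.conf: the [searchresults] and [stats] stanzas both have their own maxresultrows, and the lower one wins for a pipeline that uses stats before outputlookup. The stanza names are real; the values below are illustrative, not recommendations.

```ini
# limits.conf on the instance that actually executes the search

[searchresults]
maxresultrows = 50000000

# stats (and several other commands) have a separate maxresultrows
# that can silently cap the output even when [searchresults] is raised
[stats]
maxresultrows = 50000000
```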
Perhaps you should consider multiple searches, each with its own outputlookup. Perhaps you could run one search per host, for example. Just be sure to name each outputlookup file differently. You might consider using the foreach command (or maybe the map command) to accomplish this.
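A rough SPL sketch of the one-search-per-host idea using the map command. The index, host list, time range, and lookup file names here are placeholders, not taken from the original search; map substitutes the $host$ token from each input row into the sub-search, including the outputlookup file name.

```
| makeresults
| eval host=split("hostA,hostB,hostC", ",")
| mvexpand host
| map maxsearches=10 search="search index=main host=\"$host$\" earliest=-24h
    | stats count by source
    | outputlookup results_$host$.csv"
```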
Perhaps posting your actual search would give the community more ideas about how to help.
We have these batch heads (jobs servers), which are SHs that run saved searches, and their configuration is different from the SHs'. So, apparently, we need to sync their configurations with the SHs.
We use something in the spirit of Scheduled searches with a jobs server and pooling
ddrillic, Iguinn2, I am trying to run | inputlookup largefile.csv | outputlookup largefile, as per https://dev.splunk.com/enterprise/docs/developapps/kvstore/migrateyourappfromusingcsv/ . The CSV file is over 200 MB with more than 1.5 million rows, and I'm trying to reduce the bundle size. Is it stable or advisable to set maxresultrows = 5000000?
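For a CSV of that size, maxresultrows is not the only limit that can bite: limits.conf also has a [lookup] stanza whose max_memtable_bytes caps the in-memory lookup table. The stanza and setting names are real; the values below are illustrative assumptions, not tested recommendations.

```ini
# limits.conf settings that commonly matter for large CSV lookups

[searchresults]
maxresultrows = 5000000

[lookup]
# maximum size (in bytes) of the in-memory lookup table;
# large CSV lookups may hit this limit as well
max_memtable_bytes = 268435456
```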
Our Sales Engineer told me -
-- The issue is that the search head isn’t executing the search, your search jobs server is.
You need to modify that stanza on the jobs server.
Does it mean that every command which involves outputlookup <filename> runs on the batch head?