I'm running into behavior I don't quite understand and was hoping someone might be able to shed some light on it.
1.) I'm running a search as an admin on a default install of Splunk 7.2.0 (no changes to limits.conf). The search is against an index that would return over 40k events if every matching result of the query were returned.
2.) If I run that search as-is in the Splunk search bar, it shows the right number of events (as does the Job Manager). But if I try to navigate through all those results, on page 25 (listing 50 events per page) I get the following warning message in the pager: "Currently displaying the most recent 1250 events in the selected range. Select a narrower range or zoom in to see more events." At that point I have no ability to navigate beyond page 25.
3.) If I run that search with "| head 12626", all 12626 events are returned and can be navigated (allowing me to go well beyond page 25).
4.) If I run that search with "| head 12627", I get the "most recent 1250 events" warning message.
5.) If I compare the search job log files for the "| head 12626" and "| head 12627" searches, they are essentially identical. Neither gives any indication that anything was truncated, and neither mentions any limit being exceeded. The "| head 12626" search actually shows more memory used in the Job Manager.
6.) If I run that search using a SearchManager and put the results into a TableView on a custom Splunk dashboard, the results are also truncated, but differently. For instance, with "| head 12627" I can navigate to page 229 in my TableView — still short of the 12627 events, but considerably more than 1250.
7.) If I check the SearchManager when results are truncated for the "| head 12627" search, I see: "eventCount: 12627", "eventIsTruncated: true", and "eventAvailableCount: 1227" (considerably fewer than the 11444 events that appear in my table).
I'm curious whether anyone knows why I'm running into this behavior, and whether there is anything I can do to get around it. I'm specifically hoping for a solution that allows me to display all the results of the search in the table on my custom dashboard.
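In case it helps frame what I'm after: I'd expect to be able to page through the finished job's results myself in fixed-size offset/count chunks (the way the REST results endpoint for a job allows) rather than relying on whatever cap the pager hits. This is only a self-contained sketch of that paging loop — fetch_page is a stand-in for the real per-page fetch, and the event total and page size are illustrative, not anything Splunk-specific:

```python
def fetch_page(offset, count):
    """Stand-in for fetching up to `count` events starting at `offset`
    (in real use this would be a call against the job's results endpoint)."""
    TOTAL = 12627  # illustrative: pretend the job matched 12,627 events
    end = min(offset + count, TOTAL)
    return [{"event_id": i} for i in range(offset, end)]

def fetch_all(page_size=1000):
    """Accumulate every event by walking offsets until a short page,
    which signals there are no more results to fetch."""
    events, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        events.extend(page)
        if len(page) < page_size:
            return events
        offset += page_size

all_events = fetch_all()
print(len(all_events))  # 12627 -- every event, not just the first 1250
```

The point being: nothing about the paging arithmetic itself should stop at 1250, so I'm trying to understand what layer is imposing that cut-off.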
Thank you very much for any help you can provide.
Take a look at the search.log in the job inspector popup. I suspect that you are eating up all available RAM on your search head, and the solution may be to increase your RAM. If this is a VM, this is easy to test.
Thank you very much for the response. I hadn't seen anything in the search.log that jumped out at me. I even diff'd the 12626 and 12627 logs to see if there was anything different between the two, but they are essentially the same (different timestamps, of course, and a slightly different ordering of the user-context messages).
But bumping into a memory limit is a good point. I just tested it by doubling the RAM for my VM. Unfortunately, I hit the exact same issue (full results for "| head 12626" and truncated results for "| head 12627").