I have a large query that keeps failing/timing out because the search head doesn't have enough RAM. I want to run the data in two halves instead: one query filtering only the FieldX values that start with the first half of the alphabet, then another query for the rest.
I can do something like FieldX=a* OR FieldX=b* ... but I'm looking for something more practical to query.
The reason for the high memory consumption is that your query is collecting all matching events on the search head.
Why don't you apply transforming commands after your base filters? This will reduce the number of results fetched from the indexers to the search head.
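For example (a sketch only; index, sourcetype, and field names are assumptions, adjust them to your data), a transforming command such as stats lets the indexers pre-aggregate, so only the aggregated rows travel to the search head instead of every raw event:

index=my_index sourcetype=my_sourcetype FieldX=*
| stats count BY FieldX

Because stats is a transforming command, most of the work is distributed across the indexers and the search head only merges the partial results.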
Your problem isn't clear to me: how can you tell that a search is slow because of limited RAM? Do you have any error message?
Are you meeting the minimum reference hardware for a Search Head (16 CPUs and 12 GB RAM)? That would be the first question from Splunk Support!
Usually the bottleneck in searches is the availability of CPUs, not RAM.
Anyway, coming back to your question: if you have many events, you have several methods to accelerate searches: data models, summary indexes, etc. In a few words, you schedule a search that extracts the data for your searches, and then you run your searches on those results, so they become very quick.
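As a sketch of the summary-index approach (the index names my_index and my_summary and the hourly window are assumptions), you would schedule something like this to run every hour:

index=my_index sourcetype=my_sourcetype earliest=-1h@h latest=@h
| stats count BY FieldX
| collect index=my_summary

Then your interactive search runs against the much smaller summary index, for example:

index=my_summary
| stats sum(count) AS count BY FieldX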
The best approach is to find a way to filter results in the main search; could you share the search that gives you errors?
Anyway, you can filter your results using the "search" command with free text (not so quick) or the "regex" command, e.g. for the first half of the alphabet:
| regex FieldX="^[a-m]"
and FieldX="^[n-z]" for the second run.
In addition, after the main search (in which you should try to reduce the number of results), you could also reduce the number of extracted fields, keeping only the ones you need with the fields command.
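For instance (field and index names here are just placeholders), keeping only the fields you actually use reduces the data moved from the indexers to the search head:

index=my_index sourcetype=my_sourcetype
| fields FieldX, host, source
| stats count BY FieldX

Putting fields as early as possible in the pipeline, right after the base filters, gives the biggest saving.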