When I run this search on my search head:
earliest=-24h index=os sourcetype=ps tag=uat COMMAND=httpd | dedup PID | timechart span=1h count by host
The resulting search drives my indexers from 1.2 GB of RAM to 12.0 GB (all the memory they have) and causes all kinds of faults: errors talking to the license server, errors completing the search, and so on, to the point where every other search, scheduled or ad hoc, fails as well. I have two indexers and am currently only ingesting OS data from about 50 Unix/Linux hosts.
Has anyone else seen anything like this?
I think you're hitting a splunkd bug that causes over-aggressive search expansion. Open the Job Inspector for the search and look at the normalized search; if the tag is expanding into all sorts of unexpected sourcetypes, you should probably upgrade to the latest Splunk release.
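If tag expansion does turn out to be the culprit, one workaround in the meantime is to inline the terms the tag maps to instead of using `tag=uat`. This is just a sketch: it assumes the `uat` tag resolves to something narrow like a host naming pattern (here the hypothetical `host=uat-*`), which you'd verify under Settings > Tags or in tags.conf:

```
earliest=-24h index=os sourcetype=ps host=uat-* COMMAND=httpd
| dedup PID
| timechart span=1h count by host
```

With the explicit field/value pairs in place, the indexers never have to expand the tag at search time, so you can confirm whether the expansion itself is what's eating the memory.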