We have systems with dozens of largely inactive users on similarly sized machines, but the heavy users tend to cluster on the indexer with 48 GB RAM and 24 cores. Based on Splunk's guidance, as linu1988 points out, you should count on 1 core per active search for the life of that search, plus roughly half a GB of RAM, plus some overhead for your indexing volume. So, with 16 cores, you should easily be able to run a couple dozen concurrent searches as long as the indexing load isn't too high. Keep in mind that this concurrent count includes everything: scheduled searches/alerts, interactive, background, scripted, and dashboard searches. A rough sketch of the math is below.
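To make the rule of thumb concrete, here's a minimal back-of-the-envelope sketch in Python. The per-search figures come from the numbers above (~1 core and ~0.5 GB RAM per active search); the indexing overhead and the 32 GB example are assumptions you'd replace with your own environment's values, not anything Splunk prescribes.

```python
# Rough capacity estimate based on the rule of thumb above:
# ~1 core and ~0.5 GB RAM per active search, plus overhead for indexing.
# The indexing-overhead values below are assumptions -- tune them to your box.

def estimate_concurrent_searches(total_cores, total_ram_gb,
                                 cores_for_indexing=4,    # assumed indexing overhead
                                 ram_for_indexing_gb=8,   # assumed indexing overhead
                                 cores_per_search=1.0,
                                 ram_per_search_gb=0.5):
    """Return roughly how many concurrent searches the box can sustain."""
    core_limit = (total_cores - cores_for_indexing) / cores_per_search
    ram_limit = (total_ram_gb - ram_for_indexing_gb) / ram_per_search_gb
    return int(min(core_limit, ram_limit))

# Example: a 16-core box with an assumed 32 GB of RAM
print(estimate_concurrent_searches(16, 32))   # -> 12 (core-bound in this case)
```

Whichever resource runs out first (cores or RAM) is your practical ceiling, which is why the indexing load matters so much on a combined indexer/search box.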
My advice is to try it and see what your load ends up being. If you hit resource issues while searching, fire up a search head VM to run interactive work or to offload saved search/alert functions. You could, for example, have a search head dedicated to certain dashboards.