To add to woodcock's comments...
What exactly does that mean: 10,000+ Windows logs? Are you referring to logs from 10k+ Windows hosts?
What kinds of logs, and what log volume are you expecting per day?
The simple fact that you mentioned 10,000+ of anything tells me you need more than just a white paper. Are you already a customer? If so, you should have access to a pre-sales technical resource who can help you with this.
If not, starting with our documentation here is not a bad idea. You'll probably need to read up on:
- proper sizing given your expected daily log data volume, data retention, search volume, and concurrent users
- recommended hardware specs
- general Splunk architecture (forwarding, indexing, search)
- managing your deployment (configuration management with Deployment Server)
- how to get data in properly (timestamping, source typing, line breaking, etc.)
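On that last point, a minimal `props.conf` sketch for onboarding a single-line log format is below. The stanza name and timestamp format are placeholders, so adjust them to match your actual sourcetype and log layout:

```ini
# props.conf -- minimal sketch for a hypothetical single-line sourcetype.
# [my_windows_app_log] and the TIME_FORMAT are placeholders.
[my_windows_app_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %m/%d/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
TRUNCATE = 10000
```

Explicit line breaking and timestamp extraction like this avoid the per-event guessing Splunk would otherwise do at index time, which matters a lot at 10k-host scale.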
If you update your question with a bit more detail of what your target deployment is supposed to provide, the community may be able to give you a more targeted answer.
Good luck, and welcome to the world of Splunk! 🙂
How many servers? What is the post-backlog daily bandwidth (GB/day of raw data) overall? Are these actually "logs", or is it WMI data? Do you have any aggregation agents currently deployed (e.g. Snare)? If you would like useful responses, you will need to provide much more detail.
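If you don't know your GB/day figure yet, a rough back-of-envelope estimate is easy to compute. The host count, event rate, and event size below are hypothetical placeholders; substitute numbers measured from your own environment:

```python
# Rough back-of-envelope estimate of daily raw log volume.
# All inputs are hypothetical placeholders -- measure your own
# event rates and average event sizes before sizing anything.

def daily_volume_gb(hosts, events_per_host_per_sec, avg_event_bytes):
    """Estimate raw GB/day from host count, event rate, and event size."""
    bytes_per_day = hosts * events_per_host_per_sec * avg_event_bytes * 86_400
    return bytes_per_day / 1024**3

# Example: 10,000 hosts, 2 events/sec each, ~500 bytes per event
print(f"{daily_volume_gb(10_000, 2, 500):.0f} GB/day")  # ~805 GB/day
```

Even with these made-up numbers you can see how quickly 10k hosts adds up, which is exactly why the sizing questions above matter.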