No, it's a bit more complicated than that.
But I would start with the easy ones first:
1.) How big is your licence (and in particular, what is your average daily usage)? If your licence is only 200GB/day, you need to plan carefully how you ingest that data without violating it - or maybe you take the hit, index it all in one day, and accept warnings for 30 days?
2.) Are your indexers clustered? If they are, you will need to find out the replication factor (RF) for your target index - if your RF is 2, you will need at least 2x the indexed space for the data, as the cluster will hold two copies of every bucket, not to mention additional space for search-factor (SF) searchable copies too.
3.) If your indexers are not clustered, then it's a bit easier, as the answer comes 'closer' to being the sum of available disk space - however:
4.) You will also need to allow for more than just the raw data volume: tsidx (index) files consume space on top of the compressed raw data, and you need a little wiggle room for processing etc.
5.) What is the data, and how will you be getting it in? Is it already extracted - data you have processed before? You don't want to import 11TB and then find out your line breaking was wrong!
6.) It might not actually be 11.5TB once indexed. Just because that's what it looks like on disk at the source, Splunk could end up under (or, in rare cases, over) that size - normally under, thanks to raw-data compression!
7.) If you have a few indexers with an abundance of free space, why not create the new index only on those indexers? That way you don't have to worry about running out of disk on your 'fuller' peers.
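For point 1, you can measure your average daily licence usage before you start the import. A sketch, assuming you can search the `_internal` index on (or forwarded from) your licence master - `license_usage.log` with `type="Usage"` is the standard source for this:

```
index=_internal source=*license_usage.log type="Usage"
| timechart span=1d sum(b) AS bytes
| eval GB = round(bytes/1024/1024/1024, 2)
| fields - bytes
```

Compare the daily GB figures against your licence size to see how much headroom you realistically have for the backfill.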
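For point 6, once you have indexed a sample of the data you can check what it actually consumes on disk with the `dbinspect` command. A sketch - `my_new_index` is a hypothetical index name, substitute your own:

```
| dbinspect index=my_new_index
| stats sum(rawSize) AS raw_bytes, sum(sizeOnDiskMB) AS disk_mb
| eval raw_gb  = round(raw_bytes/1024/1024/1024, 2),
       disk_gb = round(disk_mb/1024, 2)
| eval ratio = round(disk_gb/raw_gb, 2)
```

The `ratio` gives you a rough compression factor for this data, which you can then multiply out over the full 11.5TB (and by your RF, if clustered) for a sizing estimate.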
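For point 7, on non-clustered indexers you control placement simply by defining the index only in indexes.conf on the peers that have the spare disk. A minimal sketch - the index name, and the size cap value, are assumptions for illustration:

```
# indexes.conf - deploy only to the indexers with spare disk
[my_new_index]
homePath   = $SPLUNK_DB/my_new_index/db
coldPath   = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thawedb
# cap the index so the import cannot fill the volume (value in MB)
maxTotalDataSizeMB = 500000
```

Note this per-peer approach really only applies when you are not clustered - in a cluster, indexes are pushed to all peers from the master's configuration bundle, so you can't easily pin an index to a subset of peers.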
If my comment helps, please give it a thumbs up!