There is a common belief that too many indexes cause performance issues. Is it true, and what are the recommendations?
What @richgalloway said is true. Here is my goldilocks rule for enterprise scale: Tens is too few, Thousands is too many, Hundreds is just right.
Thank you @richgalloway, but isn't the real issue the number of buckets and handling them correctly?
So, is there anything specific about a thousand or so indexes, or is it only about the excessive number of buckets?
Yes, it comes down to buckets. However, more indexes means more buckets. Depending on your data volume you may be able to store the same data in fewer indexes and fewer (but larger) buckets.
Too many indexes means Splunk has to keep too many files open at once. It may also mean searching several files for data rather than only searching a few (or one).
Many sites tend to create a new index for each sourcetype, which can lead to a large number of indexes. For more efficient searching, related data (that is, data often searched together) should be in the same index. Create a new index when
1) New access controls are needed for data
2) Different retention settings are needed for data
3) Data is of such a volume (certain network data, perhaps) that a separate index is warranted
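As a sketch of point 2, retention in Splunk is configured per index in indexes.conf via the frozenTimePeriodInSecs setting, so data with different retention requirements must live in separate indexes. The index names and retention values below are hypothetical:

```ini
# indexes.conf -- hypothetical example: two indexes kept separate
# only because they need different retention periods

[web_access]
homePath   = $SPLUNK_DB/web_access/db
coldPath   = $SPLUNK_DB/web_access/colddb
thawedPath = $SPLUNK_DB/web_access/thaweddb
# roll data to frozen (delete/archive) after 90 days
frozenTimePeriodInSecs = 7776000

[audit_logs]
homePath   = $SPLUNK_DB/audit_logs/db
coldPath   = $SPLUNK_DB/audit_logs/colddb
thawedPath = $SPLUNK_DB/audit_logs/thaweddb
# keep one year for compliance
frozenTimePeriodInSecs = 31536000
```

If the two data sets shared the same retention and access controls, they could live in a single index and be distinguished at search time by sourcetype instead.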