Hi, please enlighten me. Will AWS's self-healing feature solve the problem, since it can spin up another server when memory or cores are exhausted?
Our current architecture is three search heads and three indexers, with potentially a thousand users running real-time searches.
They are all c4.2xlarge instances. I've read the docs, but I'd like someone's opinion. Please help.
Unfortunately, Splunk can't use traditional auto-scaling because, while it might technically work, the newly created servers won't have any data to search. There is a way to achieve automatic scaling, but it requires indexer clustering and the new SmartStore functionality.
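To give a rough idea of why SmartStore changes the picture: warm and cold buckets live in S3 rather than on the indexer's local disk, so a freshly launched indexer can fetch the data it needs on demand instead of starting empty. A minimal `indexes.conf` sketch might look like this (the bucket name, endpoint, and region here are hypothetical placeholders, not values from your environment):

```
# indexes.conf sketch -- SmartStore remote storage (hypothetical names)
[volume:remote_store]
storageType = remote
path = s3://my-splunk-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

# Point an index at the remote volume; $_index_name expands per index.
[main]
remotePath = volume:remote_store/$_index_name
```

Local disk then acts mainly as a cache for hot buckets and recently searched data, which is what makes adding or replacing indexers far less painful.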
If you are worried about users running real-time searches, you can restrict their ability to do so by setting quotas in their roles. However, many customers manage this problem by educating users about the computational expense of real-time searches; most users are happy to change their ways.
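As a sketch of the role-based approach, quotas and the real-time search capability are set in `authorize.conf`. The role name below is made up for illustration, and the exact values should be tuned to your environment:

```
# authorize.conf sketch -- a hypothetical locked-down role
[role_restricted_user]
importRoles = user
# Remove the inherited real-time search capability:
rtsearch = disabled
# Cap concurrent historical searches per user:
srchJobsQuota = 3
# Allow no concurrent real-time searches:
rtSrchJobsQuota = 0
# Limit disk used by a user's search artifacts (MB):
srchDiskQuota = 100
```

With a thousand users, the concurrency quotas matter more than anything else, since each concurrent search consumes a CPU core on the search head and on every indexer it touches.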
With the hardware you have, my opinion is that it would suit an organisation ingesting roughly 200 to 300 GB of data per day. (This is a very rough ballpark estimate.)
Hope this helps.
Thank you for answering. We only have 30 GB per day of data to ingest, but there will be a lot of users (thousands) using the instances.
Can I please have your opinion on this? We're thinking of automatically adding a server (search head/indexer) via an Auto Scaling group. Do we need a preconfigured instance that was already in the cluster but turned off, or should it be a new instance with a script that automatically joins the search head/indexer cluster?
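For the second option, a user-data script on the new instance can initialise search head cluster membership. The sketch below is a dry-run illustration under several assumptions: Splunk is pre-installed at `/opt/splunk` in the AMI, the cluster secret and label (`SHC_SECRET`, `shc1`) are placeholders you would bake in or fetch securely, and `DRY_RUN=1` prints the commands instead of executing them:

```shell
#!/bin/sh
# Hypothetical EC2 user-data sketch for a new search head cluster member.
# DRY_RUN=1 (the default here) echoes commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
SPLUNK=/opt/splunk/bin/splunk
MGMT_URI="https://$(hostname -f):8089"   # this instance's management URI

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# Initialise this instance as a cluster member; the secret and label
# must match the existing cluster's values.
run "$SPLUNK" init shcluster-config \
    -mgmt_uri "$MGMT_URI" \
    -replication_port 9887 \
    -secret "$SHC_SECRET" \
    -shcluster_label shc1
run "$SPLUNK" restart

# Finally, an existing member must add the new node, e.g.:
#   splunk add shcluster-member -new_member_uri <MGMT_URI>
```

Note that joining is a two-sided handshake: the new instance initialises itself, but an existing member has to run `add shcluster-member`, so a fully hands-off Auto Scaling group needs some orchestration (e.g. a Lambda triggered by the scaling event) to complete the second step.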