I am running into issues with applying a search head cluster bundle.
The bundle is around 200 MB and includes Splunk Enterprise Security; everything runs in AWS.
When I run the usual `apply shcluster-bundle` command, everything works fine, except that it takes ~2 hours to push the bundle (3 search heads).
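For reference, this is the shape of the push I am running from the deployer (hostname and credentials are placeholders, not my real values):

```shell
# Run on the deployer. -target is the management URI of any one
# cluster member; the deployer then distributes the bundle to all
# members. --answer-yes skips the interactive confirmation prompt.
splunk apply shcluster-bundle --answer-yes \
    -target https://sh1.example.com:8089 \
    -auth admin:changeme
```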
The deployer is running on a t2.medium and the search heads on m4.xlarge instances. CPU is not overwhelmed during the push at all, and I have also verified the bandwidth with iperf3; it is more than all right (~500 Mb/s). There are no searches running at the moment and no data is being indexed. I am just building and testing the infrastructure.
I have tailed splunkd.log on the deployer during the push, and there were no WARN or ERROR messages related to it.
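This is roughly how I have been checking the logs; checking the member side as well as the deployer might be worthwhile, since the slow step could be logged there (the install path is an assumption, adjust `$SPLUNK_HOME` for your environment):

```shell
# Filter splunkd.log for bundle/cluster-related lines on either
# the deployer or a search head member; show the most recent hits.
grep -iE 'bundle|shcluster' /opt/splunk/var/log/splunk/splunkd.log | tail -n 50
```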
Do you have any idea what else to test, or where the root cause could be?
Thank you for any feedback,
My deployer is also a t2.micro, I likewise have underpowered EC2 instances, and I saw very long bundle application times. I found that after upgrading to Splunk 7 it was amazingly faster. I asked an architect whether there was a chance that caused it, and he believes there were improvements. So... give us a shout if you see it going better after the upgrade?
Thanks for your reply. I've tested it with Splunk 7 and unfortunately it is still slow as hell 😞
The problem I see is that this command is not very verbose in its logging, so I can't tell what "internal" steps it goes through in order to work properly, or whether there are timeouts or something similar involved.
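In the absence of verbose deployer-side logging, one sanity check is to watch the cluster from a member's point of view while the push is running. A minimal example, assuming default management settings and placeholder credentials:

```shell
# Run on any search head cluster member during the push.
# Shows captain and member state, which at least confirms the
# cluster itself is healthy while the bundle is being applied.
splunk show shcluster-status -auth admin:changeme
```

This won't show per-file bundle progress, but an unhealthy captain or a member stuck in a bad state would be a lead worth following.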
Very fair. I would suggest switching the t2 deployer to an instance type that meets our minimum specs. My intention here is not for you to keep it that way, but to sanity-check before the next step, which is opening a support ticket. In other words, if you open a support ticket, they may point out that the instance doesn't meet our minimum specs, so let's make sure it does first and head off that objection. Fair?