Hello everyone,
I need to increase the compute capacity of a heavy forwarder (HF) running in AWS (it only forwards, it doesn't index). Splunk PS recommended putting a 16-CPU machine into service.
I'm not sure whether the vCPU count shown on the AWS instance-type page reflects the number of cores Splunk will exploit, or the number of threads available. Basically, I don't know whether I want a 16-vCPU machine (c6i.4xlarge) or a 16-physical-core machine (c6i.8xlarge, 32 vCPUs) to get Splunk using the recommended 16 CPUs.
Does anyone have a quick answer? Google wasn't my friend here!
Mike
How about a really slow response.... Maybe someone else will stumble across it.
Many Amazon EC2 instances support simultaneous multithreading, which enables multiple threads to run concurrently on a single CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. An instance has a default number of CPU cores, which varies according to instance type. For example, an m5.xlarge instance type has two CPU cores and two threads per core by default—four vCPUs in total.
taken from AWS docs @ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html
The AWS instance-type page states that the c6i.4xlarge you were looking at runs on an Ice Lake Xeon 8375C. What you are getting is 16 threads from a 32-core/64-thread processor. https://en.wikipedia.org/wiki/List_of_Intel_Xeon_processors_(Ice_Lake-based)
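If you want to confirm what an instance actually presents to the OS, lscpu on the running host shows the thread-to-core mapping directly (a quick sketch; the instance type and core counts in the comment are illustrative, and EC2 also lets you disable SMT at launch via CPU options):

```shell
# Show how many logical CPUs (vCPUs) the OS sees and how they map to
# physical cores. "Thread(s) per core: 2" means SMT is on, i.e. every
# pair of vCPUs shares one physical core.
lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket)'
nproc   # total logical CPUs (vCPUs)

# To get one vCPU per physical core, EC2 can disable SMT at launch
# (hypothetical parameters shown):
# aws ec2 run-instances --instance-type c6i.8xlarge \
#   --cpu-options CoreCount=16,ThreadsPerCore=1 ...
```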
Hi
I don't think you need such a big instance for an HF on AWS. I have used instances with 2-4 vCPUs without any real issues. IMHO it's better to use a couple of smaller instances in an LB configuration than one huge one. Of course, if you have apps like TA-aws / DBX running on those HFs, then you'll need a bigger instance and should add some extra ingestion pipelines too.
Of course you must monitor those hosts, and if they run short of resources or event forwarding is delayed too much, then add more HFs or increase their size. The easiest way to do this is to add them to the Monitoring Console (MC) and use the MC to analyse them.
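If you do need more parallelism on an HF, the knob is parallelIngestionPipelines in server.conf (a minimal sketch; the value 2 is just an example, and the right setting depends on your version and workload, so check the docs for your release):

```ini
# server.conf on the heavy forwarder
[general]
# Each additional pipeline set consumes extra CPU and memory;
# only raise this if the host has spare cores and queues are filling.
parallelIngestionPipelines = 2
```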
r. Ismo
Thanks for that. We are currently running c5n.9xlarge hosts (which are enormous). Using those certainly made a difference, but looking at their metrics in the AWS Console they are clearly underutilised.
I guess we are going to have to start experimenting with fleets of more, but smaller, hosts to see how things go. It's a pity Splunk doesn't have a recommended machine size for when literally all you are doing is forwarding - we need to run the HTTP Event Collector and the AWS Add-on to pull some S3 info, but even those are basically just acquiring and forwarding. No explicit indexing, no props and transforms, etc...
I just checked: we are using 2 x c5.large as IHFs, also handling HEC, with TA-AWS and TA-gcp running too. Daily ingestion is something like 150 GB.
You should remember that if this is your first full Splunk instance after the UF, you must apply those props and transforms here, not on the indexers, for them to take effect!
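For example, index-time routing configured on the HF rather than the indexers might look like this (hypothetical sourcetype and index names, just to illustrate where the config has to live):

```ini
# props.conf on the heavy forwarder
[my:app]
TRANSFORMS-route_errors = route_errors_to_index

# transforms.conf on the heavy forwarder
[route_errors_to_index]
# Send events containing ERROR to a dedicated index;
# this parsing happens on the first full Splunk instance,
# so indexers downstream never see the original routing.
REGEX = ERROR
DEST_KEY = _MetaData:Index
FORMAT = error_index
```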
I'd also settle for a slow response! 🙂