
Hardware sizing for accelerated data models: is there a tool that helps in sizing a server?

linspec9721
Explorer

Hello folks,

Is there a tool that helps with sizing a server that will work with accelerated data models?

Or what is the best way to achieve that goal?

It seems that the Splunk base configuration of 12 CPUs / 12 GB of RAM is not enough.

Thank you all.


gcusello
SplunkTrust

Hi @linspec9721,

the storage required for an accelerated data model for one year is roughly:

data indexed per day * 3.4
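
As a quick back-of-the-envelope check, that rule of thumb works out like this in Python (the 500 GB/day volume below is just an illustrative assumption, not a figure from this thread; substitute your own daily indexed volume):

```python
# Rough storage estimate for one year of accelerated data model summaries,
# using the "data indexed per day * 3.4" rule of thumb from this thread.
# The 500 GB/day figure is only an example, not a measured value.

daily_indexed_gb = 500          # example: 500 GB of raw data indexed per day
acceleration_factor = 3.4       # rule-of-thumb multiplier for one year

storage_gb = daily_indexed_gb * acceleration_factor
print(f"Estimated acceleration storage for one year: {storage_gb:,.0f} GB "
      f"(~{storage_gb / 1024:.1f} TB)")
# -> Estimated acceleration storage for one year: 1,700 GB (~1.7 TB)
```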

Ciao.

Giuseppe


linspec9721
Explorer

Hello @gcusello,

And what about CPU and RAM sizing? The default 12/12 configuration doesn't seem to be enough.

Thank you.


gcusello
SplunkTrust

Hi @linspec9721,

are you speaking of ES or ITSI?

In those cases there are different configurations.

If instead you're speaking of Splunk Enterprise, the CPUs and RAM depend on the number of users; data model acceleration shouldn't cause problems, but obviously, if you have many users running searches on data models, you need more resources.
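
To make that concrete, here is a very rough illustrative calculation in Python. The only grounded part is the commonly cited Splunk capacity-planning guideline that each concurrent search consumes roughly one CPU core; all the other numbers are placeholder assumptions for the example, not reference figures:

```python
# Back-of-the-envelope CPU estimate driven by concurrent search load.
# Assumption: each concurrent search consumes roughly one CPU core
# (a common Splunk capacity-planning guideline). All other numbers
# below are illustrative placeholders, not measured values.

base_cores = 12                 # the reference baseline from this thread
active_users = 20               # example: concurrently active users
searches_per_user = 1.5         # example: average concurrent searches each
scheduled_searches = 10         # example: overlapping scheduled/acceleration searches

concurrent_searches = active_users * searches_per_user + scheduled_searches
estimated_cores = base_cores + concurrent_searches
print(f"Estimated cores needed: ~{estimated_cores:.0f}")
# -> Estimated cores needed: ~52
```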

Ciao.

Giuseppe

linspec9721
Explorer

Hello @gcusello,

I was speaking about the Enterprise version.

Thank you


gcusello
SplunkTrust

Hi @linspec9721,

in this case the hardware reference starts from the minimum you know (12 CPUs and 12 GB of RAM) and grows if you have many users and/or many scheduled searches (e.g. if you have the Splunk Security Essentials app).

My hint is to start with the hardware reference and monitor your Splunk environment to see if there are peaks that require more resources (see the sketch after this list):

  • if you're using a virtual environment, it isn't a problem to add more resources;
  • if instead you have a physical environment, you should run a test period to understand your load; you can always scale your architecture by adding other machines, or you could start with an enlarged configuration.
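
A minimal sketch of that kind of peak monitoring, assuming you can run Python with the psutil package on the Splunk host (the 80% threshold and 60-second interval are arbitrary example values; Splunk's own Monitoring Console covers this more thoroughly):

```python
# Minimal host-level peak monitor for a Splunk server: samples CPU and RAM
# and flags readings above a threshold, to help decide whether the
# 12 CPU / 12 GB reference hardware is being outgrown.
# Requires: pip install psutil
import time
import psutil

THRESHOLD_PCT = 80.0   # arbitrary example threshold
INTERVAL_SEC = 60      # arbitrary example sampling interval

while True:
    cpu_pct = psutil.cpu_percent(interval=1)      # 1-second CPU sample
    mem_pct = psutil.virtual_memory().percent     # current RAM usage
    if cpu_pct > THRESHOLD_PCT or mem_pct > THRESHOLD_PCT:
        print(f"PEAK: cpu={cpu_pct:.0f}% mem={mem_pct:.0f}% "
              f"- consider adding resources")
    time.sleep(INTERVAL_SEC)
```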

Ciao.

Giuseppe
