Deployment Architecture

Hardware sizing for accelerated data models -- is there a tool that helps with sizing a server?

linspec9721
Explorer

Hello folks,

Is there a tool that helps with sizing a server that will work with accelerated data models?

Or what is the best way to achieve that goal?

It seems that the Splunk base configuration of 12 CPUs / 12 GB of RAM is not enough.

Thank you all.

0 Karma
1 Solution

gcusello
SplunkTrust
SplunkTrust

Hi @linspec9721,

in this case the hardware reference starts from the minimum you already know (12 CPUs and 12 GB of RAM) and grows if you have many users and/or many scheduled searches (e.g. if you have the Splunk Security Essentials app).

My hint is to start with the hardware reference and monitor your Splunk environment to see if there are peaks that require more resources:

  • if you're using a virtual environment, it isn't a problem to add more resources later;
  • if instead you have a physical environment, you should run a test period to understand your load; in any case you can scale your architecture by adding more machines, or you could start with a larger configuration.

Ciao.

Giuseppe

View solution in original post

0 Karma

gcusello
SplunkTrust
SplunkTrust

Hi @linspec9721,

the storage required for accelerated data models, for one year, is around:

data indexed per day * 3.4
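The rule of thumb above can be sketched as a quick calculation. Note that the 3.4 multiplier comes from this thread and the 100 GB/day figure is just an assumed example value, not official Splunk sizing guidance:

```python
def accelerated_dm_storage_gb(daily_ingest_gb, factor=3.4):
    """Rough one-year storage estimate for accelerated data models.

    Uses the rule-of-thumb multiplier from this thread (daily indexed
    volume * 3.4); actual usage varies by data model and summary range.
    """
    return daily_ingest_gb * factor

# Example: 100 GB/day indexed -> roughly 340 GB of summary storage per year
estimate = accelerated_dm_storage_gb(100)
print(f"~{estimate:.0f} GB")
```

Plug in your own daily indexed volume to get a ballpark figure, then validate it against actual summary sizes once acceleration has been running for a while.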

Ciao.

Giuseppe

0 Karma

linspec9721
Explorer

Hello @gcusello.

And regarding CPU and RAM sizing? The default 12/12 configuration does not seem to be enough.

Thank you.

0 Karma

gcusello
SplunkTrust
SplunkTrust

Hi @linspec9721,

are you speaking of ES or ITSI?

In those cases there are different reference configurations.

If instead you're speaking of Splunk Enterprise, the CPU and RAM requirements depend on the number of users; data model acceleration itself shouldn't cause problems, but obviously if you have many users running searches against the data models, you need more resources.

Ciao.

Giuseppe

linspec9721
Explorer

Hello @gcusello ,

I was speaking about the Enterprise version.

Thank you

0 Karma

gcusello
SplunkTrust
SplunkTrust

Hi @linspec9721,

in this case the hardware reference starts from the minimum you already know (12 CPUs and 12 GB of RAM) and grows if you have many users and/or many scheduled searches (e.g. if you have the Splunk Security Essentials app).

My hint is to start with the hardware reference and monitor your Splunk environment to see if there are peaks that require more resources:

  • if you're using a virtual environment, it isn't a problem to add more resources later;
  • if instead you have a physical environment, you should run a test period to understand your load; in any case you can scale your architecture by adding more machines, or you could start with a larger configuration.

Ciao.

Giuseppe

0 Karma