Splunk Search

How to use bonnie++ results to calculate IOPS?

NSCKevinSplunk
Engager

Hello,

 

I have installed bonnie++ version 1.03e on Ubuntu 20.04.4 and run the bonnie++ command; please find the output screenshot attached.

 

May I know how to calculate or check IOPS from this bonnie++ output? Should it just be the last column, Random > 313.2 /sec? Thank you!

I heard that we should have at least 800 IOPS for Splunk, ideally 1200+.

1 Solution

PickleRick
SplunkTrust
SplunkTrust

Well, IOPS is a very interesting metric because it can be so heavily skewed depending on where you measure it and what you take into account.

Yes, there is indeed a figure of 800 "sustained IOPS" explicitly provided in https://docs.splunk.com/Documentation/Splunk/8.2.5/Capacity/Referencehardware, but no more specific measurement method is given. The recommended measurement tool is FIO, as described in the community article referenced therein, but it's not deemed an "official requirement".
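If you do want to try an FIO-style measurement, here is a rough sketch of driving fio from a Python script and pulling the reported IOPS out of its JSON output. The test file path, job size, block size and queue depth below are only illustrative placeholders, not the exact profile from that community article, so adjust them for your environment:

```python
# Sketch: run a short fio random read/write test and report IOPS.
# Assumes fio is installed; the test path and job parameters below
# are illustrative placeholders only.
import json
import subprocess

def run_fio_iops(test_file="/opt/splunk/fio-test.dat", runtime_s=60):
    """Run a 4k random read/write fio job and return (read_iops, write_iops)."""
    cmd = [
        "fio",
        "--name=splunk-iops-check",
        f"--filename={test_file}",
        "--rw=randrw",            # mixed random read/write
        "--bs=4k",                # small block size, IOPS-oriented
        "--size=1g",
        "--direct=1",             # bypass the page cache
        "--ioengine=libaio",
        "--iodepth=16",
        f"--runtime={runtime_s}",
        "--time_based",
        "--output-format=json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    job = json.loads(result.stdout)["jobs"][0]
    return job["read"]["iops"], job["write"]["iops"]

if __name__ == "__main__":
    read_iops, write_iops = run_fio_iops()
    print(f"read: {read_iops:.0f} IOPS, write: {write_iops:.0f} IOPS")
```

The --direct=1 flag bypasses the page cache, so the result is closer to what the storage itself can sustain rather than what cached reads can deliver.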

IOPS depends heavily on the request size, as well as on the caching and request-queue optimization (reordering) performed at various layers (OS, controller, disk), so it's a very confusing metric.

I'd suppose that, because Splunk, like any other "normal" program, simply uses operating-system calls to access files, you'd mostly be interested in "high-level" IOPS - that is, after all possible optimizations by the lower layers. It also means that the resulting IOPS may be sensitive to your OS configuration (data and metadata caching, buffering, and queueing algorithms, as well as - in the case of disk arrays - controller cache size and stripe size).
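To make that "high-level IOPS" point concrete, here is a rough sketch that simply times random 4 KiB pread() calls issued through the ordinary OS path (the file path and sizes are placeholders). Because these reads go through the page cache, repeated runs against the same file will report far higher numbers than the disk can actually sustain - exactly the kind of skew described above:

```python
# Rough sketch: time random 4 KiB reads issued through ordinary OS calls.
# The file path is a placeholder; the figure you get includes all OS-level
# caching and reordering, so it is NOT raw device IOPS.
import os
import random
import time

def measure_read_iops(path, duration_s=10, block_size=4096):
    """Issue random 4 KiB pread() calls for duration_s seconds; return ops/sec."""
    size = os.path.getsize(path)
    max_block = size // block_size
    fd = os.open(path, os.O_RDONLY)
    ops = 0
    start = time.monotonic()
    try:
        while time.monotonic() - start < duration_s:
            offset = random.randrange(max_block) * block_size
            os.pread(fd, block_size, offset)
            ops += 1
    finally:
        os.close(fd)
    return ops / (time.monotonic() - start)

if __name__ == "__main__":
    # Point this at a large existing file, e.g. one created beforehand with dd.
    print(f"{measure_read_iops('/opt/splunk/fio-test.dat'):.0f} read ops/sec")
```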


NSCKevinSplunk
Engager

Thank you very much for the explanation!


yuanliu
SplunkTrust
SplunkTrust

This is a bonnie++ question, not a Splunk question.  I assume that bonnie++ is some sort of benchmark tool.


I heard that we should have at least 800 IOPS for Splunk, ideally 1200+.

I have never seen hard numbers in print.  You should ask the source what kind of metric/mix is expected to meet 800 or 1200+, as performance is closely related to anticipated use.  A general discussion can be found in Reference hardware, in particular What storage type should I use for a role?

Also, this forum is about Splunk search.  Folks in Deployment Architecture and Installation may be more familiar with specific benchmarks.  Hope this helps.


NSCKevinSplunk
Engager

hi yuanliu,

 

Thanks for the reply. The reason I posted the question here is that I am working on a SOC project using Splunk, and I need to provide an IOPS report to the SOC customer.

 

thank you for your reply. 


gcusello
SplunkTrust
SplunkTrust

Hi @NSCKevinSplunk,

Let me know if we can help you more; otherwise, please accept an answer for the benefit of other Community members.

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated by all the contributors;-)


gcusello
SplunkTrust
SplunkTrust

Hi @NSCKevinSplunk,

yes, it is the value you found:

Random > 313.2 /sec

Anyway, I have always used Bonnie++, but on the internet I found these interesting posts about disk performance measurement: https://serverfault.com/questions/517051/can-i-determine-iops-on-a-disk-array-using-bonnie and https://github.com/dataPhysicist/iops
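If you'd rather pull that figure out programmatically, bonnie++ also prints a machine-readable comma-separated summary line at the end of its run. Here is a rough sketch of extracting the random-seeks rate from it; the field position assumed below matches 1.03-style output and may differ in other bonnie++ versions, so treat it as an assumption to verify against your own output:

```python
# Sketch: pull the random-seeks rate out of bonnie++'s CSV summary line.
# Assumes 1.03-style output where the seeks value is the 13th comma-separated
# field (index 12); verify the position against your own bonnie++ version.
import sys

def random_seeks_per_sec(bonnie_output: str) -> float:
    """Return the random seeks/sec value from bonnie++'s trailing CSV line."""
    csv_line = bonnie_output.strip().splitlines()[-1]
    fields = csv_line.split(",")
    seeks = fields[12]               # e.g. "313.2"
    if seeks.startswith("+"):
        return float("inf")          # "+++++" means too fast to measure
    return float(seeks)

if __name__ == "__main__":
    # Pipe the saved bonnie++ output in: python3 seeks.py < bonnie_output.txt
    print(f"{random_seeks_per_sec(sys.stdin.read()):.1f} random seeks/sec")
```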

Ciao.

Giuseppe

 

 


NSCKevinSplunk
Engager

@gcusello wrote:

Hi @NSCKevinSplunk,

yes, it is the value you found:

Random > 313.2 /sec

Anyway, I have always used Bonnie++, but on the internet I found these interesting posts about disk performance measurement: https://serverfault.com/questions/517051/can-i-determine-iops-on-a-disk-array-using-bonnie and https://github.com/dataPhysicist/iops

Ciao.

Giuseppe

 

 


Thanks for the reply and confirmation!
