Archive

Calculating IOPS from bonnie++ results

Path Finder

Hello,

I was wondering how to obtain IOPS from bonnie++ results.

The various executions of bonnie++ were done using a size of twice the RAM, and not as the root user.

Here are the results:

# bonnie++ -d /data -s 12g -qfb
1.97,1.97,host28,1,1360870223,12G,,,,434320,86,116822,22,,,116187,9,1362,32,16,,,,,15457,27,+++++,+++,2854,9,32642,45,+++++,+++,1624,6,,212ms,158ms,,178ms,818ms,125ms,1310us,234ms,103ms,53424us,182ms

# bonnie++ -d /data -s 16g -qfb
1.97,1.97,host56,1,1360869078,16G,,,,392047,38,133174,5,,,363574,5,586.2,8,16,,,,,+++++,+++,+++++,+++,2679,1,+++++,+++,+++++,+++,2700,1,,226ms,289ms,,58929us,276ms,118us,76us,61321us,198us,77us,21293us

# bonnie++ -d /data -s 32g -qfb
1.97,1.97,host57,1,1360852635,32G,,,,505977,54,151226,3,,,254450,3,257.0,8,16,,,,,+++++,+++,+++++,+++,2748,1,+++++,+++,+++++,+++,2602,1,,226ms,438ms,,373ms,1261ms,217us,84us,107ms,131us,31us,65746us

# bonnie++ -d /data -s 16g -qfb
1.97,1.97,host58,1,1360881644,16G,,,,478704,47,129047,4,,,330770,4,480.6,7,16,,,,,+++++,+++,+++++,+++,2195,2,+++++,+++,+++++,+++,2814,2,,373ms,389ms,,79935us,316ms,452us,82us,295ms,94us,169us,35213us

# bonnie++ -d /data -s 16g -qfb
1.97,1.97,host59,1,1360868560,16G,,,,396657,37,82812,1,,,246492,2,2800,43,16,,,,,+++++,+++,+++++,+++,2819,3,+++++,+++,+++++,+++,2761,1,,226ms,350ms,,90085us,70123us,582us,92us,61752us,135us,36us,13827us

Any help would be appreciated, as I just cannot find a way to translate these results into the famed IOPS figure I am looking for.

The main idea here is to find out whether the 800 IOPS threshold is met on our future Splunk cluster.

Cheers,
Olivier

1 Solution

Splunk Employee

Be sure to use an updated version of bonnie++:
http://www.coker.com.au/bonnie++/

If you're using a version of bonnie++ that ships with the OS, you will likely see odd results. "Random Seeks/s" is the metric you want to measure in order to estimate IOPS. Also be sure to test with -s set to 3x-10x (ideally) the machine's RAM, to ensure no caching.
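The advice above can be turned into a number. A minimal sketch, assuming a Linux host where RAM is read from the MemTotal line of /proc/meminfo (reported in KiB), picking a -s value of 3x RAM rounded up to a whole GiB:

```python
import math

def suggested_size_gb(meminfo_text, multiplier=3):
    """Derive a bonnie++ -s value (in GiB) of `multiplier` x RAM.

    Assumption: `meminfo_text` is the content of /proc/meminfo, whose
    MemTotal line reports RAM in KiB.
    """
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            kib = int(line.split()[1])
            gib = kib / (1024 * 1024)
            return math.ceil(gib * multiplier)  # round up to a whole GiB
    raise ValueError("MemTotal not found")
```

With 4 GiB of RAM this yields 12, i.e. a run like `bonnie++ -d /data -s 12g -qfb`.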


Engager

What you want to do is pass that input into bon_csv2txt. You should see a column named Random Seeks.
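If you'd rather not run bon_csv2txt, the value can be pulled straight out of the CSV. A minimal sketch, assuming the field positions of the bonnie++ 1.97 CSV shown in the question (Random Seeks/s is the 18th field, index 17):

```python
SEEKS_FIELD = 17  # 0-based position of Random Seeks/s in a bonnie++ 1.97 CSV line

def random_seeks_per_sec(csv_line):
    """Return the Random Seeks/s figure as a float, or None when bonnie++
    printed '+++++' (the run was too fast to measure reliably)."""
    value = csv_line.split(",")[SEEKS_FIELD]
    if value.startswith("+++"):
        return None
    return float(value)
```

For the host28 line above this returns 1362.0, and for host56 it returns 586.2.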


Path Finder

Note that the IOPS requirement for Splunk 6.0 is 1200.

The reason to use bonnie++ is comparability with what Splunk does, i.e. repeatability of results. There are better tools to use.

Communicator

Please share the "better tools", with links. (Not wanting to start a flame war; I'm genuinely interested in other tools.)

Path Finder

It is some combined score of a sort, yes 🙂


Engager

Hi,

is the measure for "Random Seeks" a representation of "Read IOPS", "Write IOPS", or some kind of average of both?

Thanks,
--ivo


Engager

The default is for the file size (the -s option) to be twice the size of RAM, which is usually enough to break all caches. But if you need to test other things (like filesystem performance with big files), then feel free to make it as large as you wish.

When you use the -q option, stdout gets the CSV format (as shown in the question above) and the human-readable version goes to stderr. To convert the CSV version to human-readable form, use bon_csv2txt.

In the human-readable form, what you call "IOPS" is called "Random Seeks".
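The conversion bon_csv2txt performs can be sketched for the handful of columns discussed in this thread. The field positions are an assumption based on the 1.97 CSV lines shown in the question; the labels follow the human-readable report:

```python
# 0-based field positions in a bonnie++ 1.97 CSV line, for the columns
# relevant here. A "+++++" value means too fast for bonnie++ to measure.
FIELDS = {
    2: "Machine",
    5: "Size",
    9: "Sequential Output, Block (K/s)",
    15: "Sequential Input, Block (K/s)",
    17: "Random Seeks (/s)",
}

def label_fields(csv_line):
    """Map the raw CSV fields to human-readable labels, bon_csv2txt-style."""
    cols = csv_line.split(",")
    return {name: cols[i] for i, name in FIELDS.items()}
```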

Communicator

+1 for telling those of us who have no clue: 'In the human-readable form, what you call "IOPS" is called "Random Seeks".'


Splunk Employee

Per etbe's comment below, setting -s to 3x-10x RAM should be unnecessary.
