Our virtualization team has been less than honest with us in the past and has told us we have "fast" storage. We've requested 3,000 IOPS.
Is there a Splunk query or Monitoring Console tile that can reveal the actual speed of our disks?
This particular environment is hosted on Windows, so Bonnie++ won't work for us.
Even if it can't be measured directly, using queries based on the index ingestion rate to determine it would be acceptable, perhaps a calculation to get a ballpark speed.
I need evidence to confront them, even if it's just circumstantial.
To get active, in-use IO statistics you can use the iostat command. This, however, doesn't give you the total IOPS (random-seek IOPS) your disk is able to provide. To get that you need to use a tool such as bonnie++.
If you plan to use bonnie++ you can use it as follows:
bonnie++ -d [/your volume mountpoint] -s [twice your system RAM in MB] -u root:root -qfb
And you can use this Splunk app to interpret the results from bonnie++:
You may have missed the part where I mentioned this particular environment is on Windows, so iostat and Bonnie++ won't work here.
I was thinking there might be a scientific way, knowing the amount of data/events pouring in versus the time taken to index it. Perhaps with all the pieces I could figure it out. For example, if we had 200 IOPS storage versus 2,000, we should be able to estimate the difference some kind of way from the indexing data? Ballpark estimates, of course, not exact ones.
Thanks for your input!
Thanks David, unfortunately I'm unable to run unapproved applications on my network. If it were an app within Splunk, I could.
I appreciate your help on this and will continue to see what's out there. It's crazy that I have to do all this because the virtualization team won't be honest, smh.
Ouch... yeah... Well, you could ask your VM team to show you IO usage from their side. It should be easy to display from the hypervisor.
It won't show you the total available, though; you'll need a benchmarking tool to test that.
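On the Windows guest itself you can at least capture the IOPS actually being consumed (again, observed load, not the maximum the storage can deliver) by collecting the PhysicalDisk performance counters with a perfmon input. A sketch, assuming a Splunk Universal Forwarder on the Windows host; the stanza name and target index are placeholders:

```
# inputs.conf on the Windows host -- collects observed disk IOPS and latency.
# "Disk Transfers/sec" is reads + writes per second, i.e. the IOPS being served.
# "Avg. Disk sec/Transfer" is per-IO latency; sustained high values under load
# are a classic sign of an oversubscribed datastore.
[perfmon://PhysicalDisk]
object = PhysicalDisk
counters = Disk Transfers/sec; Disk Reads/sec; Disk Writes/sec; Avg. Disk sec/Transfer
instances = *
interval = 10
index = perfmon
```

If "Disk Transfers/sec" never climbs anywhere near 3,000 even while latency is poor and the indexer is queueing, that's the circumstantial evidence you're after.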
Dave, Bonnie++ isn't the answer. I mentioned that in the original post.
I'm looking into the answer currently, and it's going to involve math: the size of events, the indexing rate, and a number of metrics from various Splunk logs to estimate it.
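As a starting point for that estimate, Splunk's own _internal index already records indexing throughput and indexing-queue pressure in metrics.log. A sketch, assuming default metrics.log collection:

```
index=_internal source=*metrics.log group=per_index_thruput
| timechart span=1m sum(kb) AS kb_indexed_per_min
```

and, to see whether the indexing queue is backing up while that data flows in:

```
index=_internal source=*metrics.log group=queue name=indexqueue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=1m avg(fill_pct) AS avg_indexqueue_fill_pct
```

If storage can keep up, the index queue should sit near 0%; a queue pinned near 100% (and blocked=true queue events in metrics.log) while kb_indexed_per_min flattens out is the kind of circumstantial evidence that the indexer is waiting on disk. It still measures what the disks are doing under your workload, not their advertised 3,000-IOPS ceiling.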