Going nuts over the same issues people describe above. On bare metal we get 2500+ IOPS no problem on gen3 PCIe NVMe enterprise SSDs... in a VM, 700. Using an LXC container, the same tests get around 1500 IOPS... much better, and that would be totally fantastic if we could get that in our VMs (every VM has its own dedicated PCIe NVMe enterprise SSD attached to it; Dell R640 / 512 GB RAM / dual 3 GHz Xeon Gold, 72 cores).
.. serenity now .. I am at a loss ... I just can't figure out why we can't even get 50 percent of bare metal ... maddeningly frustrating ...
P.S. - Got my hands on a Dell R750 with 4th-gen NVMe SSDs ... the host gets 11K+ IOPS while the guest gets about 2500, so that would work for our needs (we need above 1000 IOPS to support the DB application we are trying to run), but the server cost goes from $4k each to $12k for the config and storage we need using gen4 ...
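For anyone trying to reproduce the host/container/guest comparison, running one identical fio job in all three environments keeps the numbers apples-to-apples. This is only a sketch of what such a job might look like, assuming a random-read DB-style workload; the device path, queue depth, and job count are illustrative assumptions, not the exact test used above:

```ini
; illustrative fio job - run the same file on host, LXC, and VM
; adjust filename/iodepth/numjobs to match your actual setup
[global]
ioengine=libaio   ; native Linux async I/O
direct=1          ; bypass the page cache so the SSD itself is measured
rw=randread       ; random reads; randwrite/randrw for other workloads
bs=4k             ; 4 KiB blocks, typical for DB-style IOPS figures
runtime=60
time_based=1
group_reporting=1

[nvme-test]
filename=/dev/nvme0n1   ; WARNING: raw device - destructive if used for writes
iodepth=32
numjobs=4
```

Run it with `fio jobfile.fio` and compare the aggregate IOPS line. Note that low queue depths (iodepth=1, numjobs=1) will report far lower numbers everywhere, and inside a VM the storage attachment (virtio-blk/virtio-scsi settings vs full PCIe passthrough) can easily account for this kind of gap.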