Hi, new to the forum. I've been running Proxmox no-subscription since version 3.4, and I wanted to ask generally: what are the future plans and upcoming improvements for DRBD9 integration with PVE 4?
For testing I'm running 8x old 250GB Seagate Barracudas in 2-disk ZFS RAID0 vdevs across 4 nodes, which are old Intel Q45 chipset machines with SATA2 interfaces. On each node I'm running DRBD9 on sparse zvols with 64KB blocksize, with the ZFS ARC limited to 512MB on the host. Each node has a single gigabit Intel NIC dedicated to DRBD.
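In case anyone wants to reproduce the setup, this is roughly how the backing zvols and the ARC limit are configured on each node (pool and volume names are just placeholders, not my actual ones):

# sparse (thin-provisioned) zvol with 64K blocksize used as a DRBD9 backing device
zfs create -s -V 100G -o volblocksize=64K tank/drbd-vm100

# cap the ZFS ARC at 512MB (536870912 bytes) on the host; takes effect after reloading the zfs module or rebooting
echo "options zfs zfs_arc_max=536870912" > /etc/modprobe.d/zfs.conf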
I've done some initial benchmarking with iozone via the excellent Phoronix / openbenchmarking.org test suite, and so far XFS (noatime,relatime) in the VM with qemu's directsync cache mode works very well: I get 300MB/s reads and 90MB/s writes (DRBD9 running protocol C, the default). With ext4 and cache=none I get read spikes up to 400MB/s, but write speeds stay below 40MB/s.
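For reference, the cache mode is just a per-disk option in the VM config; setting it from the CLI looks something like this (VM ID, storage name and disk name are made-up examples, not my real ones):

# set the directsync cache mode on an existing disk
qm set 100 --scsi0 drbdstorage:vm-100-disk-1,cache=directsync

cache=none is what the GUI shows as "No cache"; directsync is the mode giving me the best write numbers here.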
The best thing is that I have discard support from the VM all the way down to the zvol level: running fstrim in the VM shrinks the DRBD9 LVM thinpool and the zvol in real time, which is pretty awesome. Not to mention that I can define replication levels individually for each VM.
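For anyone chasing the same discard chain, the relevant pieces as far as I understand them are: discard enabled on the disk (a SCSI disk on the virtio-scsi controller, since virtio-blk doesn't pass TRIM through), and then a trim from inside the guest. Again, VM ID, storage and disk names below are examples only:

# host side: virtio-scsi controller and a disk with discard enabled
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi0 drbdstorage:vm-100-disk-1,discard=on,cache=directsync

# guest side: release unused blocks down through DRBD to the zvol
fstrim -v /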
What is the current status of DRBD in the testing repos?
regards, mike
Edit: forgot to say that qemu's writethrough cache mode, which is often recommended, comes with a huge performance penalty on zvol-backed DRBD9 pools: reads of about 100MB/s and writes of about 10MB/s.