pve zfs and drbd

zfs create -V size name
OK did these:
Code:
 zfs create -V 12G rpool/pro4test3
 # make a new partition table and partition:
 fdisk /dev/zvol/rpool/pro4test3
 mkfs.ext3 /dev/zvol/rpool/pro4test3-part1
 mkdir /mnt/pro4test3-part1
 mount /dev/zvol/rpool/pro4test3-part1 /mnt/pro4test3-part1
Next question: is there a way to have this mounted automatically, like native ZFS datasets / the other ZFS file systems? In the meantime I'll mount it via /etc/fstab (a sketch of that entry follows below). PS: forum formatting is not working well, or has changed.
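For reference, a minimal /etc/fstab entry for the formatted zvol partition above could look like this; the nofail option is my own addition, meant to avoid a boot hang if the zvol device is not yet available:
Code:
 /dev/zvol/rpool/pro4test3-part1  /mnt/pro4test3-part1  ext3  defaults,nofail  0  2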
 
After adding the storage to PVE and restoring and starting an OpenVZ backup, system startup is still slow: ntpdate hangs, then later it stalls at /bin/sh /etc/rc2.d/S20dbus start. The issue could be the networking setup on this host; I'll check that when on site.
 
After fixing the networking issue the VZ container is working very fast, so we'll keep testing.

Also, 'zfs set mountpoint' does not seem to work for the partition.
 
zfs set mountpoint is for datasets, not for zvols. If you want to mount a zvol, you need to format it and put it in fstab.
zfs create -V makes zvols
zfs create makes datasets.
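A quick illustration of the difference, using hypothetical names (rpool/mydata and rpool/myvol are not from this thread):
Code:
 # dataset: a ZFS file system that ZFS mounts itself
 zfs create rpool/mydata
 zfs set mountpoint=/mnt/mydata rpool/mydata

 # zvol: a block device under /dev/zvol/, which you format and mount yourself
 zfs create -V 12G rpool/myvol
 mkfs.ext3 /dev/zvol/rpool/myvol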
 
OpenVZ works fine so far on a formatted zvol. Next is to try to use that with DRBD. From my limited understanding, using 'zvol on drbd' may not work with PVE high availability, due to the fact that PVE uses LVM for that? I think zvol on DRBD with manual failover could work. Any thoughts / suggestions?
 
Hello aciddrop, I'd like to double check, forgive the question: did you use zvols or datasets when you first set things up? Per redmop's post: zfs create -V makes zvols; zfs create makes datasets. Thanks for the help!
 
Hi. In order to use DRBD you must have a block device below it, so in ZFS's case you can only use zvols for that purpose. Datasets cannot be used with DRBD.
 
OK thanks. Then, as you already wrote: after doing that, create the appropriate PV and VG on the DRBD resources as described in the wiki. Probably this will only work for KVM, not OpenVZ.
 
I can't answer that since I don't use VZ, but you can easily test it. If VZ works with LVM and DRBD, then it shouldn't matter whether you have ZFS or hardware RAID below DRBD.
 
Actually I created my zvols before the ZFS GUI integration in Proxmox, so I created them manually with zfs commands, each zvol the same size on both nodes. After creating the zvols, use the DRBD config to attach those zvols to separate resources (one per VM disk). After doing that, create the appropriate PV and VG on the DRBD resources as described in the wiki. Finally, attach the VG to Proxmox via storage -> LVM. It sounds complicated, but once you have done it a few times you will get more familiar with it.
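Roughly, those steps might look like the sketch below. The resource name r0, the zvol name and the VG name are placeholders rather than values from a real config, and the r0.res excerpt omits the per-node address sections:
Code:
 # on both nodes: create a zvol of the same size
 zfs create -V 100G rpool/drbd-r0

 # /etc/drbd.d/r0.res (excerpt): point the resource at the zvol
 resource r0 {
     device    /dev/drbd0;
     disk      /dev/zvol/rpool/drbd-r0;
     meta-disk internal;
 }

 # on both nodes: initialize metadata and bring the resource up
 drbdadm create-md r0
 drbdadm up r0

 # on one node: create PV and VG on the DRBD device, then add the VG
 # to Proxmox via storage -> LVM
 pvcreate /dev/drbd0
 vgcreate drbdvg /dev/drbd0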
Hello, I've finally got a new test cluster ready to try DRBD on top of ZFS, and I have a question. 1- I did this to make a zvol:
Code:
 zfs create -V 12G rpool/pro4testdrbd
2- partitioned the disk:
Code:
 # partition 1, type 8e
 fdisk /dev/rpool/pro4testdrbd
3- then tried to use this in /etc/drbd.d/r0.res:
Code:
                 device /dev/drbd0;
                 disk /dev/rpool/pro4testdrbd-part1;
When I tried to start DRBD I got this error:
Code:
 sys3 ~ # /etc/init.d/drbd start
 Starting DRBD resources:[
  r0 no suitable meta data found :(
 Command '/sbin/drbdmeta 0 v08 /dev/rpool/pro4testdrbd-part1 internal check-resize' terminated with exit code 255
 drbdadm check-resize r0: exited with code 255
 d(r0) 0: Failure: (119) No valid meta-data signature found.
          ==> Use 'drbdadm create-md res' to initialize meta-data area.
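As the error output itself suggests, the metadata area probably just needs to be initialized before the first start, something along these lines (assuming the resource is named r0):
Code:
 drbdadm create-md r0
 /etc/init.d/drbd start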
 
I've made some tests with zfs+drbd and openvz containers.
It seems that OpenVZ containers can be stored only on an ext3/ext4-formatted volume.
That means you cannot use such a DRBD resource in dual-primary mode, since that would corrupt the ext3/ext4 file system on top of it.
If you need to run containers on a DRBD resource, you must configure DRBD in primary/secondary mode.
That means only one node (the one running in primary mode) can mount the ext3/ext4 resource and run the containers. You cannot live migrate containers to the secondary node.
You can only do offline migrations: on the first node, stop all containers, unmount the ext3/ext4 resource manually and change the DRBD resource to secondary mode.
Then connect to the second node, change the DRBD resource to primary mode, mount the ext3/ext4 resource manually and start the containers.
Obviously this is not ideal but at least it seems to work.
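A minimal sketch of that manual failover, assuming the resource is named r0 and the mountpoint is /mnt/drbd0 (both are placeholders):
Code:
 # on the current primary node
 vzctl stop <CTID>          # stop every container stored on this resource
 umount /mnt/drbd0
 drbdadm secondary r0

 # on the other node
 drbdadm primary r0
 mount /dev/drbd0 /mnt/drbd0
 vzctl start <CTID>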
On the other hand you don't have these limitations with kvm machines. You can use drbd resources in dual primary mode + lvm on top and live migrate without issues.
 
Sorry for reviving this topic, but I will try installing and configuring this:
ZFS + DRBD + PVE + Fibre Channel

I will report back with the results.
 
