DRBD-based storage and snapshots?

Belokan

Hello,

I've migrated my VMs to DRBD for storage redundancy purposes, so my disks are now raw instead of qcow2, and the "Take snapshot" button is grayed out because of that.

How can I take snapshots now? Maybe directly at the LVM2 level?

Create snapshot:

pve1# lvcreate -L1G -s -n vm105snap /dev/drbdpool/vm-105-disk-1_00

Remove snapshot:

pve1# lvremove /dev/drbdpool/vm105snap

Restore snapshot:

vm105# init 0
pve1# lvconvert --merge /dev/drbdpool/vm105snap
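
Check snapshot usage (since I'm capping the snapshot at 1 GB, I suppose I should keep an eye on how full it gets while it exists, and lvextend it if needed):

pve1# lvs drbdpool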


Am I correct? Any contraindications?

Thanks!
 
No, that should work. Maybe you're running an old version?

Hello dietmar,

My version is not "that old":

proxmox-ve: 4.2-64 (running kernel: 4.4.16-1-pve)
pve-manager: 4.2-18 (running version: 4.2-18/158720b9)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.6-1-pve: 4.2.6-26
pve-kernel-4.4.8-1-pve: 4.4.8-52
pve-kernel-4.4.15-1-pve: 4.4.15-60
pve-kernel-4.4.16-1-pve: 4.4.16-64
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-44
qemu-server: 4.0-86
pve-firmware: 1.1-9
libpve-common-perl: 4.0-72
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-57
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6.1-2
pve-container: 1.0-73
pve-firewall: 2.0-29
pve-ha-manager: 1.0-33
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.4-1
lxcfs: 2.0.3-pve1
cgmanager: 0.39-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
zfsutils: 0.6.5.7-pve10~bpo80
drbdmanage: 0.97.3-1


DRBD looks healthy:

root@pve2:~# drbdadm status
.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  pve1 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

vm-105-disk-1 role:Primary
  disk:UpToDate
  pve1 role:Secondary
    peer-disk:UpToDate

vm-107-disk-1 role:Primary
  disk:UpToDate
  pve1 role:Secondary
    peer-disk:UpToDate

vm-110-disk-1 role:Secondary
  disk:UpToDate
  pve1 role:Primary
    peer-disk:UpToDate

vm-111-disk-1 role:Primary
  disk:UpToDate
  pve1 role:Secondary
    peer-disk:UpToDate

vm-112-disk-1 role:Secondary
  disk:UpToDate
  pve1 role:Primary
    peer-disk:UpToDate


One VM config:

#Debian 8
#
#Test VM ...
bootdisk: ide0
cores: 1
cpuunits: 512
ide0: drbd1:vm-112-disk-1,cache=writethrough,size=32G
ide2: none,media=cdrom
memory: 512
name: vmtest
net0: e1000=82:04:F9:0B:63:BD,bridge=vmbr0
numa: 0
ostype: l26
smbios1: uuid=2b6c9ce9-0ad5-4e47-a525-244051a6919b
sockets: 1


Storage:

dir: local
        path /var/lib/vz
        content backup,vztmpl,images,rootdir,iso
        maxfiles 0

nfs: ds412
        server ds412.home
        export /volume1/nfsproxmox
        path /mnt/pve/ds412
        options vers=3
        content iso,rootdir,images,vztmpl,backup
        maxfiles 5

nfs: ds415
        server ds415.home
        export /volume1/nfsproxmox
        path /mnt/pve/ds415
        options vers=3
        content vztmpl,backup,iso,images,rootdir
        maxfiles 5

iscsi: vcs415
        portal 192.168.1.101
        target iqn.home.com.synology:ds415.vcs
        content images

iscsi: vcs412
        portal 192.168.1.100
        target iqn.home.com.synology:ds412.vcs
        content images

drbd: drbd1
        content images,rootdir
        redundancy 2
        nodes pve1,pve2


Here are two GUI screenshots, one from a DRBD/raw VM (vmserv) and one from an NFS/qcow2 VM (vmmon):

http://belokan.info/divers/vm-drbd.jpg
http://belokan.info/divers/vm-nfs.jpg

Thanks in advance!
 
DRBD9 snapshots are not yet implemented (in PVE's storage abstraction layer).
 
Thanks Fabian.

Does that mean it will be implemented, or is there no plan to do it yet?
BTW, can I safely use the manual procedure I described in my first post?

Cheers.
 
There is currently no sensible way (for PVE's storage abstraction) to roll back to snapshots; until this is available in drbdmanage, there won't be snapshot support for DRBD9 in PVE. Note that DRBD9 in general is a technology preview in PVE!
 
@dendi: Thanks for the link! I'm not familiar with drbdmanage/DRBD9; I've only played a bit with DRBD 8...
@fabian: What's your feeling about the drbdmanage snapshot how-to linked by dendi?
 
The question is: what do you want to achieve with the snapshots? drbdmanage does not offer rollback functionality; you can only restore a snapshot to a new resource.
 
Ah OK, I did not catch the "new resource only" thing when reading the doc...
I need basic snapshots in order to "secure" an update or a software modification that could "break something".

Most of the time I create and then delete snapshots; on rare occasions I need to restore.

What about this "new resource"? Is it easily swappable with the "broken" one at Proxmox's level?

Thanks.
 
No. If it were easily swappable, we would not be waiting for "drbdmanage rollback".
 
Funny thing: as dietmar said, the "Take snapshot" button is in fact not grayed out; it just takes a few seconds to become available. But it then opens a dialog saying the feature is not available.
So I have two options in case I need to secure an update:

1. Back up the VM and restore it in case of a problem.
2. Move the disk from DRBD/raw to NFS/qcow2, snapshot there (and restore from the snapshot if needed), then move it back to DRBD/raw at the end. A rough CLI sketch follows below.
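
For the record, here is roughly how I would do option 2 from the command line. The storage IDs, VMID and disk name are just my setup, and the snapshot name is made up, so adjust accordingly:

pve1# qm move_disk 112 ide0 ds412 --format qcow2 --delete
pve1# qm snapshot 112 presnap
(apply the update inside the VM; if it breaks: qm rollback 112 presnap)
pve1# qm delsnapshot 112 presnap
pve1# qm move_disk 112 ide0 drbd1 --format raw --delete

And option 1 would be roughly:

pve1# vzdump 112 --storage ds412 --mode snapshot --compress lzo
pve1# qmrestore /mnt/pve/ds412/dump/vzdump-qemu-112-<timestamp>.vma.lzo 112 --storage drbd1 --force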
 
Hello,

I hope I can continue to post on this thread for questions regarding DRBD9...

In the Datacenter/Storage view for my two DRBD nodes, the "Disk Usage %" for drbd1 is 70%. What is it based on?

root@pve2:~# vgs
  VG       #PV #LV #SN Attr   VSize   VFree
  drbdpool   1   9   0 wz--n- 320.00g 772.00m
  pve        1   3   0 wz--n-  55.77g   6.87g

root@pve2:~# lvs
  LV               VG       Attr       LSize   Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl_0      drbdpool -wi-ao----   4.00m
  .drbdctrl_1      drbdpool -wi-ao----   4.00m
  drbdthinpool     drbdpool twi-aotz-- 319.00g                     50.17  26.05
  lvol0            drbdpool -wi-a-----  80.00m
  vm-105-disk-1_00 drbdpool Vwi-aotz--  32.01g drbdthinpool        100.00
  vm-107-disk-1_00 drbdpool Vwi-aotz--  32.01g drbdthinpool        100.00
  vm-110-disk-1_00 drbdpool Vwi-aotz--  32.01g drbdthinpool        100.00
  vm-111-disk-1_00 drbdpool Vwi-aotz--  32.01g drbdthinpool        100.00
  vm-112-disk-1_00 drbdpool Vwi-aotz--  32.01g drbdthinpool        100.00
  data             pve      -wi-ao----  28.28g
  root             pve      -wi-ao----  13.75g
  swap             pve      -wi-ao----   6.88g


root@pve1:~# drbdmanage list-volumes --groupby Size
WARNING:root:Could not read configuration file '/etc/drbdmanaged.cfg'
+---------------------------------------------------+
| Name | Vol ID | Size | Minor | | State |
+---------------------------------------------------+
| vm-105-disk-1 | 0 | 32 GiB | 104 | | ok |
| vm-107-disk-1 | 0 | 32 GiB | 102 | | ok |
| vm-110-disk-1 | 0 | 32 GiB | 101 | | ok |
| vm-111-disk-1 | 0 | 32 GiB | 103 | | ok |
| vm-112-disk-1 | 0 | 32 GiB | 100 | | ok |
+---------------------------------------------------+


The DRBD storage is made of 2x 320 GB iSCSI targets (one on each node), and right now it hosts 5x 32 GB VMs (so 50%). Where are the missing 20%?
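
For what it's worth, LVM itself seems to agree with my 50% estimate: 5x ~32 GiB ≈ 160 GiB allocated out of the 319 GiB drbdthinpool, which matches the Data% of 50.17 in the lvs output above. A quick way to check just the thin pool (standard lvs fields, nothing DRBD-specific):

pve1# lvs -o lv_name,lv_size,data_percent drbdpool/drbdthinpool

So I guess the extra 20% comes from how drbdmanage/the PVE storage plugin reports usage rather than from LVM, but I may be missing something.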

Thanks, have a nice day.
 
