DRBD 9 setup on Proxmox VE 4

Hi,

If I follow the wiki (https://pve.proxmox.com/wiki/DRBD9) to set up DRBD9, the DRBD storage shown in the UI is only 3 MB. I also can't put any content on it.
What can I do about it? How can I test/debug this?

Thanks in advance.

Regards,
Ruben
Hi,
what does the output of the following commands look like?
Code:
pvs
vgs
lvs
BTW, with DRBD8 it's recommended to use one resource per node, so you don't get into trouble in case of a split brain.
In the DRBD9 wiki there is only one resource for all DRBD hosts... what does a split-brain repair look like?
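(For context, a manual split-brain repair with DRBD 8 looks roughly like this; the resource name r0 is just an example:)
Code:
# on the node whose changes should be discarded (split-brain victim)
drbdadm disconnect r0
drbdadm secondary r0
drbdadm connect --discard-my-data r0

# on the surviving node (if it dropped to StandAlone)
drbdadm connect r0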

Udo
 
In the DRBD9 wiki there is only one resource for all DRBD hosts... what does a split-brain repair look like?

I don't understand what you mean exactly. With DRBD9 and drbdmanage, we create one resource for each VM image.
 
Isn't the wiki wrong?
Code:
drbdmanage new-node pve2 192.168.15.82
Shouldn't that use the dedicated NIC instead?
Code:
drbdmanage new-node pve2 10.0.15.82
drbdmanage new-node pve3 10.0.15.83
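Presumably the whole wiki sequence should then use the replication network consistently, e.g. (the 10.0.15.81 address for the first node is just my guess):
Code:
drbdmanage init 10.0.15.81
drbdmanage new-node pve2 10.0.15.82
drbdmanage new-node pve3 10.0.15.83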
 
With DRBD9 and drbdmanage, we create one resource for each VM image.
Great, so it works like "ganeti"! I have a question:
In the wiki you suggest "redundancy 3". This means that a VM is replicated to three nodes. Is it possible to set "redundancy 2"?
Will Proxmox decide automatically where to put the replica, or can/must I select the "secondary" node?
And what happens with HA? If the node where the VM is running fails, will Proxmox restart the VM on the right node (where the replicated data is stored)?
And what if I try to live-migrate the VM to the third node, where no replica is present?
Thank you :-)

 
In the wiki you suggest "redundancy 3". This means that a VM is replicated to three nodes. Is it possible to set "redundancy 2"?


yes
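In /etc/pve/storage.cfg that would simply be:
Code:
drbd: drbd1
        redundancy 2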


Will Proxmox decide automatically where to put the replica, or can/must I select the "secondary" node?


yes (drbdmanage does that).
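To see where drbdmanage actually placed the volumes, you can check with:
Code:
drbdmanage list-assignments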


And what happens with HA? If the node where the VM is running fails, will Proxmox restart the VM on the right node (where the replicated data is stored)?


you can use HA groups to control that (if you want).
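A sketch of that (the group name and node list are just examples):
Code:
ha-manager groupadd drbd-nodes -nodes "pve01,pve02"
ha-manager add vm:100 -group drbd-nodes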

And what if I try to live-migrate the VM to the third node, where no replica is present?

That is no problem.
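For example (VM ID 100 and node name pve3 are placeholders):
Code:
qm migrate 100 pve3 --online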
 
I am having trouble as well (see my thread: /23550-Error-I-O-error-while-accessing-persistent-configuration-storage). Sorry to try and hijack, but I can't seem to respond to my own post there. Is the forum broken?
 
Thank you very much!
Just another question: do you think it is possible to have an extra copy of each VM (or only selected VMs) on a remote node via asynchronous DRBD, for disaster recovery?
 
Interesting possibilities!

To get back to my initial question (no DRBD disk space shown in the Proxmox VE UI, see the first entry in this thread), I've collected some information.

I've set up a test lab with two Proxmox VE 4 nodes (pve01 and pve02) on VirtualBox.

@Udo, this is my configuration....


Code:
root@pve01:~# pvs
  PV         VG       Fmt  Attr PSize PFree
  /dev/sda3  pve      lvm2 a--  7.87g 892.00m
  /dev/sdb1  drbdpool lvm2 a--  4.00g 820.00m

Code:
root@pve01:~# vgs
  VG       #PV #LV #SN Attr   VSize VFree
  drbdpool   1   2   0 wz--n- 4.00g 820.00m
  pve        1   3   0 wz--n- 7.87g 892.00m

Code:
root@pve01:~# lvs
  LV           VG       Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl    drbdpool -wi-ao----   4.00m
  drbdthinpool drbdpool twi-a-tz--   3.18g             0.00   1.17
  data         pve      -wi-ao----   4.38g
  root         pve      -wi-ao----   1.75g
  swap         pve      -wi-ao---- 896.00m

Code:
root@pve01:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        maxfiles 0
        content vztmpl,images,rootdir,iso

drbd: drbd1
        redundancy 2

Code:
root@pve01:~# drbdsetup status
.drbdctrl role:Secondary
  disk:UpToDate
  pve02 role:Secondary
    peer-disk:UpToDate

Code:
root@pve01:~# drbdmanage list-nodes
+------------------------------------------------------------------------------------------------------------+
| Name  | Pool Size | Pool Free |                                                                    | State |
+------------------------------------------------------------------------------------------------------------+
| pve01 |      3264 |      3225 |                                                                    |    ok |
| pve02 |      3264 |      3225 |                                                                    |    ok |
+------------------------------------------------------------------------------------------------------------+

Code:
root@pve01:~# drbdsetup show
resource .drbdctrl {
    _this_host {
        node-id                 0;
        volume 0 {
            device                      minor 0;
            disk                        "/dev/drbdpool/.drbdctrl";
            meta-disk                   internal;
        }
    }
    connection {
        _peer_node_id 1;
        _this_host ipv4 192.168.1.200:6999;
        _remote_host ipv4 192.168.1.201:6999;
        net {
            cram-hmac-alg       "sha256";
            shared-secret       "Md2DiokcLqJk94qWzEBD";
            _name               "pve02";
        }
    }
}

I think the problem is that Proxmox doesn't "understand" the "drbd:" entry in the config file /etc/pve/storage.cfg.
According to the wiki (https://pve.proxmox.com/wiki/DRBD9) this is all that's required.
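One more check that might help (just a debugging idea, not from the wiki) is what the storage layer itself reports:
Code:
pvesm status
pvesm list drbd1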

Any ideas?


Regards,
Ruben
 
Hi Ruben,
I just installed a test cluster and did the same... it works for me!
But the displayed size is wrong!
It shows 1.75 GB, but it's actually 1.75 TB.

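As a sanity check independent of the GUI, the real pool size can also be read from LVM and drbdmanage:
Code:
vgs drbdpool
drbdmanage list-nodes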
(Screenshot: drbd.png)
Code:
root@proxtest1:~# drbdsetup status
.drbdctrl role:Secondary
  disk:UpToDate
  proxtest3 role:Secondary
    peer-disk:UpToDate

vm-100-disk-1 role:Primary
  disk:UpToDate
  proxtest3 role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:92.11
Udo
 
Hi Udo,

Thank you for trying.
That makes sense. In my case MBs should be GBs!

I see that you've created a guest of type VM (hard disk type: raw disk image). That works for me too!
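For reference, a raw disk image on the DRBD storage can also be allocated from the CLI (VM ID 100 and the 4 GB size are just examples):
Code:
qm set 100 --virtio1 drbd1:4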

Now I understand what went wrong. I tried to create LXC containers on the DRBD storage, but I couldn't select it in the container wizard UI. I then thought it had something to do with the small space on the device, but that's clarified now. However, according to http://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_4.0_beta2 it should be possible to put LXC containers on DRBD9 storage.
It would be fabulous if that works in the future too (hopefully soon)!

Thanks for all!

Regards,
Ruben
 