Not sure if this is the right place to post; I have already posted this on the LINBIT forum.
Hello,
I have a 3-node Proxmox cluster and followed these guides to set up a highly available (HA) LINSTOR controller:
1: How to Setup LINSTOR on Proxmox VE - LINBIT
2: https://linbit.com/drbd-user-guide/linstor-guide-1_0-en/#s-linstor_ha
3: https://linbit.com/drbd-user-guide/linstor-guide-1_0-en/#s-proxmox-ls-HA
Everything works: when one node goes down, drbd-reactor automatically starts the controller on another node.
The only issue is that the available storage capacity is no longer shown in the GUI. It appears again once node 1 comes back. In the meantime I can still migrate VMs/LXC containers from node 2 to node 3 without any issue.
This is what happens when the controller node goes down (pvetest1 was the controller node initially):
root@pvetest2:~# linstor node list
╭────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Node ┊ NodeType ┊ Addresses ┊ State ┊
╞════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pvetest1 ┊ SATELLITE ┊ 192.168.102.201:3366 (PLAIN) ┊ OFFLINE (Auto-eviction: 2024-09-21 23:44:04) ┊
┊ pvetest2 ┊ SATELLITE ┊ 192.168.102.202:3366 (PLAIN) ┊ Online ┊
┊ pvetest3 ┊ SATELLITE ┊ 192.168.102.203:3366 (PLAIN) ┊ Online ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────╯
To cancel automatic eviction please consider the corresponding DrbdOptions/AutoEvict* properties on controller and / or node level
See 'linstor controller set-property --help' or 'linstor node set-property --help' for more details
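(As an aside, I understand the eviction timer can be tuned or disabled through those properties; a sketch, assuming the DrbdOptions/AutoEvict* names from the hint above:

linstor controller set-property DrbdOptions/AutoEvictAllowEviction false
# or only for one node:
linstor node set-property pvetest1 DrbdOptions/AutoEvictAllowEviction false

That is not my problem here though, just noting it.)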
root@pvetest2:~# drbd-reactorctl status
/etc/drbd-reactor.d/linstor_db.toml:
Promoter: Currently active on this node
● drbd-services@linstor_db.target
● ├─ drbd-promote@linstor_db.service
● ├─ var-lib-linstor.mount
● └─ linstor-controller.service
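For reference, that target is driven by the promoter config from the HA guide. My /etc/drbd-reactor.d/linstor_db.toml looks roughly like this (a sketch; the resource and service names are taken from the drbd-reactorctl output above):

[[promoter]]
[promoter.resources.linstor_db]
# mount the DRBD-backed controller database, then start the controller
start = ["var-lib-linstor.mount", "linstor-controller.service"]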
root@pvetest2:~# drbdadm status
linstor_db role:Primary
  disk:UpToDate
  pvetest1 connection:Connecting
  pvetest3 role:Secondary
    peer-disk:UpToDate

pm-c40c9e19 role:Secondary
  disk:UpToDate
  pvetest1 connection:Connecting
  pvetest3 role:Secondary
    peer-disk:UpToDate

pm-db8d36d7 role:Secondary
  disk:UpToDate
  pvetest1 connection:Connecting
  pvetest3 role:Secondary
    peer-disk:UpToDate

pm-e29692bb role:Primary
  disk:UpToDate
  pvetest1 connection:Connecting
  pvetest3 role:Secondary
    peer-disk:UpToDate
root@pvetest2:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

drbd: linstor_storage
        content images,rootdir
        controller 192.168.102.202,192.168.102.201,192.168.102.203
        resourcegroup pve-rg
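As far as I understand, the Proxmox plugin walks that controller list and talks to whichever node currently runs the active controller. To confirm which node answers while pvetest1 is down, I probe the REST API on each candidate (a sketch; 3370 is the default LINSTOR REST port, and the /v1/controller/version endpoint is assumed from the REST API docs):

for ip in 192.168.102.201 192.168.102.202 192.168.102.203; do
    printf '%s: ' "$ip"
    curl -s --connect-timeout 2 "http://$ip:3370/v1/controller/version" || echo unreachable
done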
root@pvetest2:~# linstor storage-pool list
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node ┊ Driver ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ pvetest1 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Warning ┊ pvetest1;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ pvetest2 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ pvetest2;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ pvetest3 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ pvetest3;DfltDisklessStorPool ┊
┊ pve-storage ┊ pvetest1 ┊ LVM_THIN ┊ linstor_vg/thinpool ┊ ┊ ┊ True ┊ Warning ┊ pvetest1;pve-storage ┊
┊ pve-storage ┊ pvetest2 ┊ LVM_THIN ┊ linstor_vg/thinpool ┊ 25.31 GiB ┊ 25.54 GiB ┊ True ┊ Ok ┊ pvetest2;pve-storage ┊
┊ pve-storage ┊ pvetest3 ┊ LVM_THIN ┊ linstor_vg/thinpool ┊ 25.31 GiB ┊ 25.54 GiB ┊ True ┊ Ok ┊ pvetest3;pve-storage ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
WARNING:
Description:
No active connection to satellite 'pvetest1'
Details:
The controller is trying to (re-) establish a connection to the satellite. The controller stored the changes and as soon the satellite is connected, it will receive this update.
root@pvetest2:~#
root@pvetest2:~# pvesm list linstor_storage
Use of uninitialized value $owner in concatenation (.) or string at /usr/share/perl5/LINBIT/PluginHelper.pm line 65.
Use of uninitialized value $owner in concatenation (.) or string at /usr/share/perl5/LINBIT/PluginHelper.pm line 65.
Use of uninitialized value $owner in concatenation (.) or string at /usr/share/perl5/LINBIT/PluginHelper.pm line 65.
Use of uninitialized value $owner in concatenation (.) or string at /usr/share/perl5/LINBIT/PluginHelper.pm line 65.
Volid                            Format  Type     Size        VMID
linstor_storage:pm-e29692bb_100  raw     rootdir  8592252928  100
root@pvetest3:~# pvesm status
Name             Type     Status   Total    Used     Available  %
linstor_storage  drbd     active   0        0        0          0.00%
local            dir      active   6866092  4314200  2181820    62.83%
local-lvm        lvmthin  active   4972544  0        4972544    0.00%
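To cross-check what the plugin should be seeing, I also pulled the storage pools straight from the REST API (a sketch; the endpoint and field names are assumed from the LINSTOR REST API v1, capacities in KiB):

curl -s http://192.168.102.202:3370/v1/view/storage-pools \
    | jq '.[] | {node: .node_name, pool: .storage_pool_name, free_kib: .free_capacity, total_kib: .total_capacity}'

This should match the linstor storage-pool list output above: pvetest2 and pvetest3 still report free capacity, only pvetest1 is unknown. So why does pvesm status aggregate linstor_storage to 0 while one satellite is offline?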