[SOLVED] Cluster Issue.

fbtanner3

New Member
Jul 19, 2023
I have set up a two-node cluster, using a Raspberry Pi as a QDevice. Both hosts have the same configuration, and I store my virtual guests on an LVM-Thin volume named 'guests'. The issue is that when I try to migrate the guests, the 'guests' volume is not showing up on the other node: it shows up under the host's LVM-Thin configuration, but not in the migration drop-down, nor when I expand the host in the left-hand pane. I have attached a screenshot of what I am seeing when I browse the hosts.

EDITED TO ADD: The two nodes are not using shared storage.

Thank you for any assistance.
 

Attachment: Untitled.png (94.4 KB)
Hi,
did you maybe just select the local storage instead when migrating and the disks are now on there?

Otherwise, please share the output of pveversion -v from both nodes, the VM configuration qm config <ID> --current, the storage configuration cat /etc/pve/storage.cfg and a full migration task log.
 
There was no option to select the 'guests' storage on the remote node. In fact, if you look in the left-hand pane, the 'guests' storage isn't even showing up under the host; it only appears under LVM-Thin.

Node 1:
Linux vm-hst-sa01 6.2.16-6-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-7 (2023-08-01T11:23Z) x86_64

root@vm-hst-sa01:~# pveversion -v
proxmox-ve: 8.0.2 (running kernel: 6.2.16-6-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
proxmox-kernel-6.2.16-6-pve: 6.2.16-7
proxmox-kernel-6.2: 6.2.16-7
pve-kernel-6.2.16-5-pve: 6.2.16-6
pve-kernel-6.2.16-4-pve: 6.2.16-5
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.4
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.7
libpve-guest-common-perl: 5.0.4
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.2-1
proxmox-backup-file-restore: 3.0.2-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.3
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-4
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1

Node 2:
Linux vm-hst-sa02 6.2.16-3-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-3 (2023-06-17T05:58Z) x86_64

root@vm-hst-sa02:~# pveversion -v
proxmox-ve: 8.0.1 (running kernel: 6.2.16-3-pve)
pve-manager: 8.0.3 (running version: 8.0.3/bbf3993334bfa916)
pve-kernel-6.2: 8.0.2
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.3
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.5
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.3
libpve-rs-perl: 0.8.3
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.1-1
proxmox-backup-file-restore: 3.0.1-1
proxmox-kernel-helper: 8.0.2
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.5
pve-cluster: 8.0.1
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.2
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.4
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1

This is what I get when I try to run qm config:
root@vm-hst-sa02:~# qm config 2 --current
400 Parameter verification failed.
vmid: invalid format - value does not look like a valid VM ID
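(Note: qm rejects this because Proxmox VE VMIDs must be at least 100, so an ID of 2 fails parameter verification before anything else happens. Assuming a hypothetical guest with ID 100, the call would look like:

root@vm-hst-sa02:~# qm config 100 --current

with the actual VMID taken from the guest list in the web interface or from qm list.)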

Node 2:
root@vm-hst-sa02:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content backup,iso,vztmpl

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

lvmthin: guests
    thinpool guests
    vgname guests
    content images,rootdir
    nodes vm-hst-sa02

Node 1:
root@vm-hst-sa01:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content backup,iso,vztmpl

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

lvmthin: guests
    thinpool guests
    vgname guests
    content images,rootdir
    nodes vm-hst-sa02
 
As you can see in this screenshot, when I try to migrate, 'guests' is not even offered as an available storage target on the target host. I am wondering if it has something to do with 'guests' not showing up under the host in the left-hand pane.
 

Attachment: Untitled2.png (127.1 KB)
lvmthin: guests
    thinpool guests
    vgname guests
    content images,rootdir
    nodes vm-hst-sa02

Remove the restriction to the node vm-hst-sa02 from the storage configuration under "Datacenter" -> "Storage" -> "guests" -> "Edit".
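Alternatively, the node restriction can be dropped from the shell. A minimal sketch, assuming the storage is named 'guests' as in the configuration above and using pvesm set's --delete option:

root@vm-hst-sa01:~# pvesm set guests --delete nodes

Deleting the nodes property makes the storage definition apply to every node in the cluster again, so 'guests' should then be offered as a target in the migration dialog.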
 
That looks like it did it. I will try a migration as soon as I get a chance and reply to the thread.
 
That solved it. It is migrating guests just fine now.

On a different note: is there a way to "migrate" the primary web interface to node 1 from node 2, where it currently is?
 
What do you mean exactly?

In general: In a PVE-cluster, there is no primary/main node; all are equal.

Sidenote: You can access the webui on every node in the cluster and see/manage the whole cluster.
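For example, assuming the default web interface port of 8006, https://vm-hst-sa01:8006 and https://vm-hst-sa02:8006 both serve the same cluster-wide view; logging in to either node lets you manage both, so there is nothing to migrate.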
 
You answered my question. Thank you.
 
