Storage configuration between two nodes

leforban

Active Member
Jun 30, 2019
Hi,

I've been using Proxmox for a while now but never messed too much with the storage configuration. I recently added another node and created a cluster. I wanted to try to migrate a VM but it looks like I have a storage issue.

[screenshots of the cluster storage configuration attached]

There is something I don't understand in the way this is supposed to work. If someone would be kind enough to tell me what I'm doing wrong, that would be great! :)
 
Why do both storages (BlueDrive on Trashcan / Trashcan-SSD on Jabba) have a question mark on them?
More fundamentally, you have a problem with your pool, and not just with the migration.
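For anyone landing here later, a quick way to see what is behind those question marks is to check the pool and storage state from a shell on each node. A minimal sketch, assuming ZFS-backed storages (the pool names behind "BlueDrive" and "Trashcan-SSD" are whatever was used when they were created, so treat them as placeholders):

Code:
zpool list       # is the pool backing the storage imported at all?
zpool status     # health/state of the imported pools
pvesm status     # Proxmox view: configured storages and whether they are active

A storage that the node cannot query shows up with a question mark in the GUI tree.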
 
Thanks for making me notice that!
OK, there is progress. I think I've tried to do too many things at the same time without proper testing and verification.

I tried to bond two interfaces and use them as a bridge. My VMs weren't able to access the network anymore... I rolled back that configuration and will try it again later. Now I have this error when trying to migrate:
[screenshot of the migration error attached]

Are you supposed to have the exact same interfaces on both nodes?
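For reference, here is a minimal sketch of what the bonded-bridge attempt described above usually looks like in /etc/network/interfaces with ifupdown2; the NIC names, bond mode and address are assumptions, not taken from this setup:

Code:
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2        # assumed NIC names
        bond-miimon 100
        bond-mode active-backup      # pick a mode your switch supports

auto vmbr2
iface vmbr2 inet static
        address 192.168.1.10/24      # assumed address
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

If the VMs lose network after a change like this, the usual suspects are a bond mode that doesn't match the switch configuration or a bridge that no longer contains any physical port.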
 
It says bridge 'vmbr2' does not exist.
So maybe your VM is still connected to vmbr2, but you removed the ports from the bridge!
After changing things on ports and bridges, a reboot might help too.
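A quick way to check that without guessing (run qm config on the node where the VM currently lives and the other commands on the node you are migrating to; <vmid> is a placeholder):

Code:
qm config <vmid> | grep ^net    # which bridge(s) the VM's NICs reference
ip link show vmbr2              # does that bridge actually exist on the target node?
ifreload -a                     # with ifupdown2, apply /etc/network/interfaces changes without a reboot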
 
Thanks, I created a vmbr2 in the same IP range and everything went smoothly.

Then I tried rebooting the host and broke everything o_O


I have to run the "/sbin/zpool import -N 'rpool'" command manually at boot. I've tried the following (a sketch of these settings is just after the list):

- adding rootdelay=10 to the kernel command line in GRUB
- setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP='5' and ZFS_INITRD_POST_MODPROBE_SLEEP='5', then running update-initramfs -u
- apt-get install --reinstall zfsutils-linux
- setting ZPOOL_IMPORT_PATH in /etc/default/zfs to "/dev/disk/by-vdev:/dev/disk/by-id" and regenerating the initramfs with update-initramfs -u to force mounting by ID
- booting the previous kernel
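For completeness, here is roughly where those settings live (the values are the ones tried above; the paths are the standard Debian/Proxmox locations, and whether GRUB or systemd-boot is used depends on how the node was installed):

Code:
# /etc/default/zfs
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='5'
ZFS_INITRD_POST_MODPROBE_SLEEP='5'
ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"

# /etc/default/grub (then run update-grub)
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"

# rebuild the initramfs so the changes are actually used at boot
update-initramfs -u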



The SSD where my VMs are stored (which is called "BlueDrive") isn't available anymore.
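If the pool behind "BlueDrive" simply did not get imported after the failed boot, something along these lines can usually bring it back until the boot issue is fixed (the zpool name is a placeholder here, it is not necessarily the same as the storage name in the GUI):

Code:
zpool import                                # list pools that are visible but not imported
zpool import -d /dev/disk/by-id <poolname>  # import it using stable by-id device paths
zpool status <poolname>                     # confirm it is online and healthy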

Some images of my configurations

Here is the "pveversion -v" output:

Code:
proxmox-ve: 6.2-1 (running kernel: 5.4.60-1-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-6
pve-kernel-helper: 6.2-6
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-4.15: 5.4-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve2
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-12
pve-cluster: 6.1-8
pve-container: 3.2-1
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-1
pve-qemu-kvm: 5.1.0-1
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-14
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1

I've already spent a few hours on this, so it might be time to ask for help...

Thanks