[SOLVED] Warnings when starting migration after disk removal

floh

Hello guys!

I have removed a few physical disks from a node where a ZFS pool was configured. I've already removed this ZFS storage from Datacenter --> Storage.
But when I start a migration away from this node, the following output appears:
Code:
2020-01-29 09:50:44 use dedicated network address for sending migration traffic (10.50.0.21)
2020-01-29 09:50:44 starting migration of VM 115 to node 'srv-eg-proxmox1' (10.50.0.21)
  WARNING: Device /dev/sdm not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sdm not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Not using device /dev/sdj for PV vaaX3f-aKHZ-B5tz-jquy-52aQ-D0fL-LbrD0m.
  WARNING: PV vaaX3f-aKHZ-B5tz-jquy-52aQ-D0fL-LbrD0m prefers device /dev/sde because device was seen first.
  WARNING: Device /dev/sde has size of 7813971632 sectors which is smaller than corresponding PV size of 46883819151 sectors. Was device resized?
  One or more devices used as PVs in VG guest_images_lvm have changed sizes.
  WARNING: Device /dev/sdm not initialized in udev database even after waiting 10000000 microseconds.
[...]
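
If I read the last warning right, the recorded PV size can be compared with the actual device size like this (just standard pvs output columns, nothing node-specific):

Code:
# Compare the actual device size with the PV size recorded in the LVM metadata
pvs -o pv_name,vg_name,dev_size,pv_size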


Does anyone know if there is something left from the previous config?

Best regards,

Floh

PS: /dev/sdm and /dev/sdn are iSCSI storages which weren't touched and are configured on all nodes.
Here is the output of lsblk from that node:

Code:
sda                               8:0    0   1.8T  0 disk
sdb                               8:16   0   1.8T  0 disk
sdc                               8:32   0 558.9G  0 disk
├─sdc1                            8:33   0  1007K  0 part
├─sdc2                            8:34   0   512M  0 part /boot/efi
└─sdc3                            8:35   0 558.4G  0 part
  ├─pve-swap                    253:0    0     8G  0 lvm  [SWAP]
  └─pve-root                    253:1    0   516G  0 lvm  /
sdd                               8:48   0   3.7T  0 disk
sde                               8:64   0   3.7T  0 disk
sdf                               8:80   0   3.7T  0 disk
sdg                               8:96   0   3.7T  0 disk
sdh                               8:112  0   3.7T  0 disk
sdi                               8:128  0   3.7T  0 disk
sdj                               8:144  0   3.7T  0 disk
sdk                               8:160  0   3.7T  0 disk
sdl                               8:176  0     1G  0 disk
sdm                               8:192  0  34.8T  0 disk
├─SSD--Storage-vm--201--disk--0 253:2    0    64G  0 lvm
├─SSD--Storage-vm--326--disk--0 253:3    0   128G  0 lvm
├─SSD--Storage-vm--319--disk--0 253:4    0   128G  0 lvm
├─SSD--Storage-vm--124--disk--0 253:5    0    32G  0 lvm
├─SSD--Storage-vm--339--disk--0 253:6    0   128G  0 lvm
├─SSD--Storage-vm--104--disk--0 253:7    0    32G  0 lvm
├─SSD--Storage-vm--131--disk--0 253:8    0    32G  0 lvm
├─SSD--Storage-vm--209--disk--0 253:9    0   128G  0 lvm
├─SSD--Storage-vm--115--disk--0 253:10   0   128G  0 lvm
├─SSD--Storage-vm--204--disk--0 253:11   0    70G  0 lvm
├─SSD--Storage-vm--207--disk--0 253:12   0   128G  0 lvm
├─SSD--Storage-vm--109--disk--0 253:13   0    64G  0 lvm
├─SSD--Storage-vm--112--disk--0 253:14   0     8G  0 lvm
├─SSD--Storage-vm--111--disk--0 253:15   0     8G  0 lvm
├─SSD--Storage-vm--122--disk--0 253:16   0     8G  0 lvm
├─SSD--Storage-vm--114--disk--0 253:17   0    16G  0 lvm
├─SSD--Storage-vm--330--disk--0 253:18   0   128G  0 lvm
├─SSD--Storage-vm--132--disk--0 253:19   0   200G  0 lvm
├─SSD--Storage-vm--119--disk--0 253:20   0     8G  0 lvm
├─SSD--Storage-vm--123--disk--0 253:21   0     8G  0 lvm
├─SSD--Storage-vm--321--disk--0 253:22   0   128G  0 lvm
├─SSD--Storage-vm--334--disk--0 253:23   0   128G  0 lvm
├─SSD--Storage-vm--137--disk--0 253:26   0    32G  0 lvm
├─SSD--Storage-vm--212--disk--0 253:27   0   128G  0 lvm
├─SSD--Storage-vm--110--disk--0 253:28   0    64G  0 lvm
└─SSD--Storage-vm--125--disk--0 253:29   0     8G  0 lvm
sdn                               8:208  0  34.8T  0 disk

sdd through sdk, as well as sda and sdb, are (now) unused disks.
 
Dominic

Only removing the storage in the GUI does not remove all ZFS-related configuration. The commands "zpool status" and "zfs list" should still give some useful output. What version of Proxmox VE are you using (pveversion -v)?
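
For example, something along these lines should show whether anything is left over (plain commands, nothing node-specific assumed):

Code:
# Any pools or datasets left over from the removed storage?
zpool status
zfs list
# Exact package versions, helpful for narrowing this down
pveversion -v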

Additionally, is there anything in kern.log? Did you see this thread already?
 
floh

Thank you Dominic! You pushed me in the right direction. As zpool status and zfs list didn't output anything useful, I figured out that there wasn't any ZFS config left. The problem was that a PV with a VG on it was still configured on the disk.

Now I've managed to remove the PV from the disk using "pvremove --force /dev/sde --force" (yes, --force twice). --> Problem solved.
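
For anyone who lands here later, the whole cleanup looked roughly like this on my node (the VG name guest_images_lvm comes from the warnings above; adapt the device names to your setup):

Code:
# See which PVs and VGs LVM still knows about
pvs
vgs
# Remove the stale VG first if it still exists
vgremove guest_images_lvm
# Wipe the PV label; the second --force is needed while the PV still belongs to a VG
pvremove --force --force /dev/sde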

Thanks
 