Host's ZFS storage shows all disks, not just its "own" disks?

cfnz

I have added ZFS storage by going to Datacenter -> Storage -> Add, choosing a ZFS pool (in this case hdd-pool), entering an ID (hdd-pool-test-16k), and setting a 16k block size (just experimenting with performance).
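
For reference, as far as I understand this just adds a second storage entry in /etc/pve/storage.cfg pointing at the same pool. The sketch below is roughly what the two entries would look like; it is written from memory, not copied from my actual file:
Code:
zfspool: hdd-pool
        pool hdd-pool
        content images,rootdir

zfspool: hdd-pool-test-16k
        pool hdd-pool
        blocksize 16k
        content images,rootdir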

After that, when choosing the storage tree item under Datacenter -> Node -> hdd-pool-test-16k -> VM Disks, it shows all the disks in hdd-pool. If I migrate a VM to this new storage, its disk shows up under both the hdd-pool node and this new storage node.

From the GUI I cannot tell which storage a VM's disk is actually on, since the disk is listed under multiple storage nodes. On the command line I can see that the volblocksize of the VM disk in the 16k storage is correct, and the others are all 8k.
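
For reference, this is roughly how I checked the block sizes; the dataset name is just one of the VM disks as an example:
Code:
# block size of a single zvol
zfs get volblocksize hdd-pool/vm-116-disk-0
# block sizes of all zvols in the pool
zfs list -r -t volume -o name,volblocksize hdd-pool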

Is this how it is supposed to work? I would have thought a storage node would only show the disks that were created on or migrated to it, rather than all the disks in the underlying hdd-pool.

Does my system have issues, or is this expected behaviour? (And maybe I should not create additional storage entries on an existing ZFS pool?)
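
To illustrate what I mean, I assume listing the content of each storage entry from the CLI would report the same zvols for every entry, something like this (output omitted):
Code:
pvesm list hdd-pool
pvesm list hdd-pool-test-16k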

Version pve-manager/7.0-11/63d82f4e

Thanks for any help
Colin
 
OK, this is more of an issue now... it seems that because each disk is listed under multiple storage entries, a migration copies the same image multiple times, once for each storage entry that was added.

Is it safe to delete the storage node created in the first post? I would not want all the disk images listed there to be removed.
 
Yes, I did check 'delete source', but I think the problem is worse than that.

The extra disks actually appear on the destination, not the source.

Here is the beginning of the log that shows the problem (after clearing all duplicate disk images; in zfs list there is now only one disk image related to VM 116):
Code:
2023-07-10 15:33:46 use dedicated network address for sending migration traffic (10.64.9.102)
2023-07-10 15:33:46 starting migration of VM 116 to node 'hn-pve-2' (10.64.9.102)
2023-07-10 15:33:46 found local disk 'hdd-pool-test-16k:vm-116-disk-0' (via storage)
2023-07-10 15:33:46 found local disk 'hdd-pool-test-32k:vm-116-disk-0' (via storage)
2023-07-10 15:33:46 found local disk 'hdd-pool:vm-116-disk-0' (in current VM config)
2023-07-10 15:33:46 copying local disk images
...

So Proxmox thinks there are three disk images, one in each of the storage configs, but there is only one (confirmed by zfs list). Proxmox therefore sends the same image over three times; after the first copy it reports a conflicting name on the destination, creates a new numbered image, and continues. The result is that the migrated VM is configured with vm-116-disk-2, while disk-0 and disk-1 are now real disk images on the destination that are not attached to any VM.
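
To see which image the VM actually uses versus the orphaned ones, I checked roughly the following (VM 116 as in the log above):
Code:
# the disk the VM config actually references
qm config 116 | grep -i disk
# the zvols that really exist for this VM in the pool
zfs list -r -t volume hdd-pool | grep vm-116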

Now, when migrating the same guest again, Proxmox thinks there are 9 related images:
Code:
2023-07-11 09:07:47 use dedicated network address for sending migration traffic (10.64.9.101)
2023-07-11 09:07:47 starting migration of VM 116 to node 'hn-pve-1' (10.64.9.101)
2023-07-11 09:07:47 found local disk 'hdd-pool-test-16k:vm-116-disk-0' (via storage)
2023-07-11 09:07:47 found local disk 'hdd-pool-test-16k:vm-116-disk-1' (via storage)
2023-07-11 09:07:47 found local disk 'hdd-pool-test-16k:vm-116-disk-2' (via storage)
2023-07-11 09:07:47 found local disk 'hdd-pool-test-32k:vm-116-disk-0' (via storage)
2023-07-11 09:07:47 found local disk 'hdd-pool-test-32k:vm-116-disk-1' (via storage)
2023-07-11 09:07:47 found local disk 'hdd-pool-test-32k:vm-116-disk-2' (via storage)
2023-07-11 09:07:47 found local disk 'hdd-pool:vm-116-disk-0' (via storage)
2023-07-11 09:07:47 found local disk 'hdd-pool:vm-116-disk-1' (via storage)
2023-07-11 09:07:47 found local disk 'hdd-pool:vm-116-disk-2' (in current VM config)
2023-07-11 09:07:47 copying local disk images

So you can see how this problem gets exponentially worse with repeated migrations of the same VM: with three storage entries pointing at the same pool, each migration triples the number of images Proxmox detects (1, then 3, then 9, and so on).
 
I clicked on the storage nodes in question and chose to remove them from the datacenter... nothing broke, nothing was deleted that should not have been, so I think we are back to normal operation... phew.
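
For the record, removing the storage entries only removes their definitions from /etc/pve/storage.cfg and does not touch the underlying datasets. The CLI equivalent of what I did in the GUI should be roughly:
Code:
pvesm remove hdd-pool-test-16k
pvesm remove hdd-pool-test-32k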

So all good now :-)
 
