I have added ZFS storage by going to Datacenter -> Storage -> Add, choosing ZFS, selecting an existing pool (in this case hdd-pool), entering an ID (hdd-pool-test-16k), and setting a 16k block size (just experimenting with performance).
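For reference, I believe the equivalent command-line setup would be something like this (the storage ID and pool name are from my setup):

    pvesm add zfspool hdd-pool-test-16k --pool hdd-pool --blocksize 16k --content images,rootdir

which, as far as I understand, produces an entry like this in /etc/pve/storage.cfg:

    zfspool: hdd-pool-test-16k
            pool hdd-pool
            blocksize 16k
            content images,rootdir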
After that, when I select the storage tree item under Datacenter -> Node -> hdd-pool-test-16k -> VM Disks, it shows all the disks in hdd-pool. If I migrate a VM to this new storage, its disk shows up in both the hdd-pool node and this new storage node.
From the GUI I cannot tell which storage a VM disk is actually on, since the same disk is listed under multiple storage nodes. On the command line I can see that the volblocksize of the disk I moved to the 16k storage is correct (16k), while the others are all 8k.
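(For what it's worth, this is how I checked — vm-100-disk-0 is just an example dataset name, substitute your own:)

    zfs get volblocksize hdd-pool/vm-100-disk-0
    zfs list -t volume -o name,volblocksize -r hdd-pool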
Is this how it is supposed to work? I would have expected a storage node to list only the disks that were created on or migrated to it, rather than all the disks in the root of hdd-pool.
Does my system have issues, or is this expected behaviour? (And maybe I should not be creating additional storage entries on top of an existing ZFS pool?)
Version pve-manager/7.0-11/63d82f4e
Thanks for any help
Colin