Hi
The context
My Proxmox installation has 3 different ZFS pools configured.
All of the datasets I use below were created via the Proxmox Datacenter page, under the Storage tab.
I have a VM that has a total of 3 disks attached to it:
- scsi0 / 40GB / local-zfs
- scsi1 / 64GB / volumes-hdd
- scsi2 / 250GB / volumes-hdd
This VM is used as a Docker host, with the last two disks providing persistent storage for certain containers.
A while back, I upgraded my Proxmox host with some more SSD storage, allowing me to create a new storage: volumes-nvme.
I recently noticed the Docker host wasn't using it yet. Everything on the 64GB disk image is random I/O, so I thought: let's move that disk to the new NVMe volume for better performance.
Do note: I've done this Move disk operation for at least 10 other disk images / container volumes before, and it always worked without any issues.
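For reference, I did the move through the web UI's Move disk button. If I'm not mistaken, the CLI equivalent would be roughly the following (100 and scsi1 stand in for my VM ID and the disk in question; without a delete option the old image should stay behind as an unused disk):
qm move-disk 100 scsi1 volumes-nvme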
The problem
After the move operation completed, the proxmox interface showed me the following new list of hardware:
- scsi0 / 40GB / local-zfs
- scsi1 / 250GB / volumes-nvme
- scsi2 / 250GB / volumes-hdd
As you can see, the second disk (the one I moved from one storage to another) now shows up as 250GB.
A bit confused, I started the machine: all my data is still there, and the guest OS still reports a 64GB disk (both the filesystem and parted give me this information).
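For completeness, this is roughly how I checked it inside the guest (the device name is just an example and may differ on other setups):
lsblk
parted /dev/sdb print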
When I run
zfs list
on the host, it reports a usage of 258GB for the dataset that was created to store the disk image.
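In case the exact numbers matter, this is how I would compare the configured zvol size with the space usage on the host (the dataset name below is just a placeholder for the actual zvol backing scsi1):
zfs list volumes-nvme/vm-100-disk-0
zfs get volsize,used,referenced volumes-nvme/vm-100-disk-0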
I could very easily create a new disk, transfer the data over and remove the old "oversized" disk, but I'm interested in (and thoroughly confused by) what happened.
Is there anyone who can shed some light on what just happened?