[SOLVED] My 64GB disk became 250GB after moving it

T-Grave

Mar 18, 2020
Hi


The context
My Proxmox installation has three different ZFS pools configured.
All of the datasets mentioned below were created on the Proxmox Datacenter page, under the Storage tab.

I have a VM that has a total of 3 disks attached to it:

- scsi0 / 40GB / local-zfs
- scsi1 / 64GB / volumes-hdd
- scsi2 / 250GB / volumes-hdd

This VM is used as a Docker host, with the last two disks providing persistent storage for certain containers.

A while back, I upgraded my Proxmox host with some additional SSD storage, allowing me to create a new storage path: volumes-nvme.

I recently noticed the Docker host wasn't using it yet. Everything on the 64GB disk image is random I/O, so I thought: let's move that disk to the new NVMe volume for better performance.

Note: I've performed this Move disk operation for at least 10 other disk images / container volumes before, and it has always worked without any issues.

The problem
After the move operation completed, the proxmox interface showed me the following new list of hardware:

- scsi0 / 40GB / local-zfs
- scsi1 / 250GB / volumes-nvme
- scsi2 / 250GB / volumes-hdd

As you can see, the second disk (the one I moved from one storage to another) is now 250GB :eek:

A bit confused, I started the machine; all my data is still there, and the guest OS still reports a 64GB disk (both the filesystem and parted give me this information).

When I run zfs list on the host, it reports a usage of 258GB for the dataset that has been created to store the disk image.
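For anyone wanting to check the same thing: you can compare the size recorded in the VM config with what ZFS actually reports. A hedged sketch, assuming VMID 101 and a dataset name like volumes-nvme/vm-101-disk-1 (your VMID, pool, and disk names will differ):

```shell
# Size as recorded in the VM config (informational only):
qm config 101 | grep scsi1

# Actual zvol size and space used on the host
# (dataset name is an assumption -- check `zfs list` for yours):
zfs list -o name,volsize,used volumes-nvme/vm-101-disk-1
```

If the size= tag in the config and the volsize ZFS reports disagree, the config is the one that's wrong.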

I could very easily create a new disk, transfer data over and remove the old "oversized" disk, but I'm interested in (and thoroughly confused by) what happened.

Is there anyone who can shed some light on what just happened? o_O
 
you should have a task log entry for 'Move Disk' Task - could you post the full log output? was it an Offline or Online Move Disk?
 
you should have a task log entry for 'Move Disk' Task - could you post the full log output? was it an Offline or Online Move Disk?

This was an offline Move Disk.

Attached you can find the complete log.


Edit

The plot thickens, so here is a screenshot of the current disks listed by Proxmox in the Hardware tab of the VM:


But when I go over to the disk-images storage, where I'd expect to see the 250GB volume after the move, it says the volume is 64GB o_O
 

Attachments

  • move-disk.txt
some background: the 'size' property in the config is read-only and informational, for any action actually doing something the size of the actual volume counts. it's possible for the two to get out of sync by manual actions (e.g., editing the config, or manually resizing a volume behind PVE's back).

did you or another admin modify either manually (possibly at some point in the past)? it sounds like config and reality were switched around - the 64GB volume is tagged as 250GB in the config, and the 250GB one was tagged as 64GB (which got updated when moving to reflect the actual size). I am fairly confident that there is nothing in the PVE code base that could lead to such a switch (we rarely ever modify multiple disks at once), but it could be some rare bug as well. could you provide more (task) logs related to that VM if you still have them? e.g., creation, ...
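To illustrate the "manual actions behind PVE's back" scenario: a sketch of how a hand-done disk move could leave the size= tags swapped. The VMIDs, pool, and disk names here are assumptions, not taken from the thread:

```shell
# Hypothetical manual move of two zvols from VM 100 to VM 101:
zfs rename volumes-hdd/vm-100-disk-1 volumes-hdd/vm-101-disk-1
zfs rename volumes-hdd/vm-100-disk-2 volumes-hdd/vm-101-disk-2

# ...followed by hand-editing /etc/pve/qemu-server/101.conf. If the
# size=64G and size=250G tags end up on the wrong scsiN lines there,
# PVE keeps showing the wrong sizes until something (like a Move disk
# operation or `qm rescan`) re-reads the real volume size.
```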
 
Thanks for your reply! That does bring some more clarity to the whole issue.

These drives were originally created for a different guest VM. A couple of months ago I decided I wanted to use a different OS for my Docker host, so I created new VMs and moved these disks over manually using the CLI (at least partly; I did some of the renaming by hand so the IDs would match, etc.).

Chances are I switched something up there.

So, as for a resolution: I can simply perform a move on the correct disk, and it will re-tag itself with the correct size?
 
So, as for a resolution; I can simply perform a move on the correct disk and it will re-tag itself with the correct size?

yes. you can also do qm rescan --vmid XXX to update the size info (and add any currently unreferenced disks as unused disks to the config) first.
 
Awesome, thanks for the tip!

Code:
root@deimos:~# qm rescan --vmid 101
rescan volumes...
VM 101: size of disk 'disk-images:vm-101-disk-2' (scsi2) updated from 250G to 64G

Quite certain it wasn't a bug, but rather something I messed up a while back :p

Consider my issue resolved, thanks for all the help!
 
