Disk resize problem

OK, the size is wrong here: at this point it should already be the absolute new size, but it still reads 20GB.
So it seems the initial idea is still correct: the disk size can't be read. Let's check where this all starts; we'll need to add some more debug info.
Same procedure as before, but in a different file. This time it's /usr/share/perl5/PVE/Storage/Plugin.pm, around line 744.


Code:
sub file_size_info {
    my ($filename, $timeout) = @_;

    if (-d $filename) {
        return wantarray ? (0, 'subvol', 0, undef) : 1;
    }

    my $cmd = ['/usr/bin/qemu-img', 'info', $filename];

    my $format;
    my $parent;
    my $size = 0;
    my $used = 0;

    eval {
        # ... qemu-img invocation and output parsing left out ...
    };

    warn("DEBUG RESIZE:" . $size);

    return wantarray ? ($size, $format, $used, $parent) : $size;
}

I've left out some code to keep it simple; just add the warn line directly above the return statement, and don't forget to restart the service.
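The edited module is only picked up after the daemon that loaded it restarts. A minimal sketch of the restart-and-watch loop (log location as on a default Debian/PVE install):

```shell
# reload the patched Plugin.pm by restarting the daemon that serves the web API
systemctl restart pvedaemon
# then follow the debug output while reproducing the resize
grep 'DEBUG RESIZE' /var/log/syslog | tail -n 5
```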
 
Hello, hope everybody had a nice weekend ;-)

Here is the output of what you requested:

Code:
Sep  9 09:01:11 pve5 pvedaemon[28454]: <admtpl@pve> end task UPID:pve5:00007751:0719D7A7:5D75F8A8:vncproxy:117:admtpl@pve: OK
Sep  9 09:01:26 pve5 pvedaemon[28453]: DEBUG RESIZE:0 at /usr/share/perl5/PVE/Storage/Plugin.pm line 744.
Sep  9 09:01:26 pve5 pvedaemon[28453]: <admtpl@pve> update VM 117: resize --disk scsi1 --size +20G
 
Sure, thanks, hope you did too :)
Now things get interesting. Please check the output of the following command:

/usr/bin/qemu-img info <path-to-qcow2>
 
Here is the output:
Code:
root@pve5:~# qemu-img info /mnt/pve/NFS01/images/117/vm-117-disk-0.qcow2
image: /mnt/pve/NFS01/images/117/vm-117-disk-0.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 15G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
root@pve5:~#
 
Strange, really.

What's the output of:
pvesm path <volumeid>

volumeid should be: NFS01:117/vm-117-disk-0.qcow2
 
OK, you run all CLI commands as root, but the issue appears from the web interface as admtpl. Can you log in as root in the web interface and check whether you still have that issue?
 
I see one message in the syslog when deleting the destroyed disks from the original VM:
Sep 9 10:35:03 pve5 pvedaemon[46164]: storage 'NFS01' is not online

The NFS server did not have any hiccups; maybe some sort of too-short timeout in pvedaemon?
But I see only two lines in the syslog for today, and they are not related to the time I resized the cloned VM.
 
Stop!

I repeated importing the disks and cloning the VM as the 'root' user. This time it behaved erroneously, just as it did before.
 
OK, can you please check whether you see any errors when using the mentioned CLI commands:
/usr/bin/qemu-img info <path-to-qcow2>
pvesm path <volumeid>

I know this is a little tedious, but from what you reported this seems to be very inconsistent behavior.
Please give it a try a few times, or at least until we see some error or unexpected response.
 
I'm backing up the original VM; as always, importing the disks one by one is a bit tedious ;-) It will take a while until it finishes.

But here is what I did notice: I accidentally gave a wrong VM number to the 'pvesm path' command and it did not give an error, even though the VM ID I entered does not exist...
pvesm path NFS01:117/vm-104-disk-2.qcow2
 
Yeah, that's because it doesn't check whether there is actually a file; it just generates the path for that storage. I only wanted to make sure the path is generated correctly. This shouldn't happen when it is not called directly, as existence is checked at a higher level.
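To illustrate: for a directory-backed storage like this NFS mount, the path is essentially just the storage mount point joined with the volume name, with no stat of the file. A minimal sketch (the function name volid_to_path is made up for illustration, not part of pvesm):

```shell
# Sketch of the path generation: join mount point and volume name,
# without checking whether the file actually exists on disk.
volid_to_path() {
    local storage_path="$1" volname="$2"
    printf '%s/images/%s\n' "$storage_path" "$volname"
}

# "succeeds" even for a disk image that does not exist on that storage
volid_to_path /mnt/pve/NFS01 117/vm-104-disk-2.qcow2
```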

I still think the underlying problem is the NFS connection: in combination with the eval block, the connection timeout is silently ignored, which results in a disk size of zero. I've already prepared a patch that catches an unexpected size reduction like this, so it will no longer destroy disks. This is indeed unpleasant and unwanted behavior, so thanks for debugging this with me.

I will prepare another patch to make the error handling in such cases more obvious, but I probably can't do much about the root cause of your problem, which is your NFS share sometimes not responding in time.
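The mechanism can be shown with a small, self-contained Perl sketch (the die message and sizes are made up; in the real code the failure would come from qemu-img timing out against the stalled NFS mount):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# $size starts at 0, just like in file_size_info()
my $size = 0;

eval {
    # simulated failure: qemu-img timing out on a stalled NFS mount
    die "got timeout\n";
    $size = 21474836480;    # never reached
};
# without inspecting $@, the error is silently swallowed ...
warn "trapped error: $@" if $@;

# ... and the caller receives 0, which later looks like a shrink request
print "size=$size\n";
```

This is why a reported size of 0 (or any unexpected reduction) needs to be treated as an error rather than acted upon.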
 
You're welcome. Thanks from my side for digging through this with me. :)

It seems strange to me that the NFS server is the problem, as I tried a second one and it gave the same troubles. Are there any recommended settings on the NFS server I could test? Not that I have much to choose from, as the attached screenshot shows.
 

Attachments

  • 2019-09-09_13h17_05.png
This will go through the normal release cycle and at the end will be available via the enterprise repository. I can't give you an exact date, but probably in the next few weeks. Regarding your NFS server, I would debug the whole stack: network, DNS, etc. Do some tests with iperf, for example, check your switches, and if you use any special settings, e.g. jumbo frames, make sure they are configured correctly.
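On the client side, the NFS mount options Proxmox uses can also be tuned per storage in /etc/pve/storage.cfg. A purely illustrative fragment (server, export, and option values are assumptions, not recommendations from this thread):

```
nfs: NFS01
        export /export/pve
        path /mnt/pve/NFS01
        server 192.168.1.50
        content images
        options vers=3,hard,timeo=600
```

A 'hard' mount makes the client retry indefinitely instead of returning errors, which changes how a stalled server shows up in the logs.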
 
Hi, I have the same problem. I tried to resize a Windows 64G disk by adding 10G, and the disk was resized to 10G, so I lost everything. If I boot a live system I can access the files, but it does not boot anymore. The size of the disk is still 64G but it shows only 10G. I tried to resize again, but the problem persists.
 
This bug was fixed a couple of months ago. Which version are you using?

# pveversion -v
 
An old one.
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)

I'm trying to recover some data. No backup, corrupted data.
 
