Storage drive bricked!? Unable to mount a full disk

And the content of your node's /etc/pve/storage.cfg?

Code:
dir: local
	path /var/lib/vz
	content vztmpl,backup,iso

lvmthin: local-lvm
	thinpool data
	vgname pve
	content rootdir,images

dir: storage
	path /mnt/pve/storage
	content iso,rootdir,vztmpl,snippets,images,backup
	is_mountpoint 1
	nodes pve
	shared 0
 
Maybe @waltar got this right already (apologies if I am slow), but from what I can see (this is a summary for myself, or for anyone who takes over - maybe also for you):

You do not have the problem you described in the OP, that you "cannot mount my physical storage disk which had previously been mounted".

Technically, your physical ~1TB drive (give or take, let's not worry about that now) is mounted on the PVE node (the host). It happens to be XFS (shown as sda1 on the host), and it is indeed 100% full.

It is mounted on the host at /mnt/pve/storage.

In there, you have the "virtual" disk for your VM stored as a QCOW2 image - basically a file on that XFS filesystem on that physical drive.

It is 1065314680832 bytes, which is ~1TB, and it appears to completely fill up that host disk's XFS filesystem.

You are then attaching this QCOW2 image to your VM (/dev/sdb on the guest, formatted as ext4 - that is the filesystem inside the image).
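To illustrate why the image can fill the backing filesystem: a QCOW2 typically starts small and grows toward its full virtual size as the guest writes. Here is the same effect demonstrated with an ordinary sparse file (hypothetical temp file, plain GNU coreutils - not your actual image):

```shell
# Illustration only: apparent size vs. space actually allocated on disk.
# A qcow2 behaves similarly - small on disk at first, growing as the
# guest writes, until it can consume the whole backing filesystem.
f=$(mktemp)
truncate -s 1G "$f"                 # apparent size: 1 GiB, nothing allocated yet

apparent=$(stat -c %s "$f")         # bytes the file claims to be
allocated=$(du -B1 "$f" | cut -f1)  # bytes actually occupying the filesystem
echo "apparent=$apparent allocated=$allocated"

# writing real data allocates real blocks (conv=notrunc keeps the 1G size)
dd if=/dev/zero of="$f" bs=1M count=8 conv=notrunc status=none
echo "allocated after write: $(du -B1 "$f" | cut -f1)"

rm -f "$f"
```

Your image has apparently reached the point where its allocated size matches its virtual size, leaving the XFS filesystem with nothing to spare.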

** I apologise if you knew all this, but it was not clear to me so far. **

At this point, I would probably take a step back and shut down the VM.

Then I would try to attach the QCOW2 directly on the host; you would need:

Code:
modprobe nbd
qemu-nbd --connect=/dev/nbd0 /mnt/pve/storage/images/444/vm-444-disk-0.qcow2

Let's see if that works out without errors.
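Incidentally, the image path in that command follows the usual directory-storage layout, <storage path>/images/<vmid>/vm-<vmid>-disk-<n>.qcow2. A tiny hypothetical helper, in case you need to build such paths for other VMIDs:

```shell
# Hypothetical helper: compose the qcow2 path for a VM on a directory storage.
# Assumed layout: <storage_dir>/images/<vmid>/vm-<vmid>-disk-<n>.qcow2
qcow2_path() {
  local storage_dir=$1 vmid=$2 disk=${3:-0}
  printf '%s/images/%s/vm-%s-disk-%s.qcow2\n' "$storage_dir" "$vmid" "$vmid" "$disk"
}

qcow2_path /mnt/pve/storage 444   # -> /mnt/pve/storage/images/444/vm-444-disk-0.qcow2
```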
 

Appreciate you for sticking with me this far. I'll have another look tomorrow when my brain is less fried. Thanks mate!
 

No worries, I am happy to follow up further too. Just in case we are not around here at the same time, anyone can pick this up ... if only to copy out the valuable pieces (I suppose you do not have anywhere else to copy that file to, to give it more breathing space).

You should then basically have it accessible as a block device (on the host) at /dev/nbd0, which you can mount into a directory, e.g. mkdir /mnt/chest, then mount /dev/nbd0 /mnt/chest, and take it from there. ;)
 
PS After you are done with the above, do not forget to gracefully:
Code:
umount /mnt/chest/
qemu-nbd --disconnect /dev/nbd0
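If you want that cleanup to be safe to re-run, here is a small defensive sketch (assuming the /mnt/chest mountpoint and /dev/nbd0 from above; the mountpoint utility comes with util-linux) that only acts when there is actually something to undo:

```shell
# Sketch of a defensive cleanup; does nothing if nothing is mounted/connected.
cleanup() {
  local mnt=$1 dev=$2
  # unmount only if the directory really is a mountpoint
  if mountpoint -q "$mnt"; then
    umount "$mnt"
  fi
  # disconnect only if the NBD device node actually exists
  if [ -b "$dev" ]; then
    qemu-nbd --disconnect "$dev"
  fi
  return 0
}

cleanup /mnt/chest /dev/nbd0
```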
 
Life got in the way, it wasn't "tomorrow" as I had hoped but I'm back to have another bash at this.

Then I would try to attach the QCOW2 directly on the host; you would need:

Code:
modprobe nbd
qemu-nbd --connect=/dev/nbd0 /mnt/pve/storage/images/444/vm-444-disk-0.qcow2

Let's see if that works out without errors.
I can get to this point without errors.


But when doing this:
You should then basically have it accessible as a block device (on the host) at /dev/nbd0, which you can mount into a directory, e.g. mkdir /mnt/chest, then mount /dev/nbd0 /mnt/chest, and take it from there. ;)
I get:
Code:
root@pve:~# mount /dev/ndb0 /mnt/chest
mount: /mnt/chest: special device /dev/ndb0 does not exist.
       dmesg(1) may have more information after failed mount system call.

ls -la of the /dev directory shows the following for the nbd0 entry:
Code:
brw-rw----  1 root disk     43,     0 Nov  4 20:28 nbd0
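Side note: the listing above shows the device node is nbd0, while the failed mount command used ndb0 - the two letters are transposed, which is exactly why mount reported "special device /dev/ndb0 does not exist". A guarded retry (sketch, reusing /mnt/chest from earlier):

```shell
# Sketch: retry the mount with the device name ls actually shows (nbd0),
# after checking that the node exists as a block device.
dev=/dev/nbd0   # note: n-b-d, not n-d-b
mnt=/mnt/chest
if [ -b "$dev" ]; then
  mkdir -p "$mnt"
  mount "$dev" "$mnt"
else
  echo "no block device at $dev" >&2
fi
```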


I've still got questions as to how I got into this situation in the first place. Is it simply the drive being full? Like I said, everything was working nicely before.
 
What does "lsblk", "lvs", "lvdisplay" and "ls -l /dev/nbd*" show yet ?
 