"Volume does not exist" after IP change

Blackmyre

I temporarily changed the IP address of my Proxmox host machine (v8.0.3). After changing back and rebooting again, my containers refuse to start. I'm getting the following errors:
Code:
TASK ERROR: volume 'files:100/vm-100-disk-0.raw' does not exist
TASK ERROR: volume 'files:103/vm-103-disk-0.raw' does not exist

I'm very inexperienced with Proxmox, and as I haven't done anything with it in the months since I set this up, what little I learned has largely evaporated, so I don't really know where to start identifying and fixing this. I have no idea why an IP address change should cause problems, and I'm scared to try anything that might make the situation unrecoverable (if it isn't already).

Can anyone please shed some light on what might have happened, and hopefully guide me through fixing it?
 
Thanks for the prompt response, appreciated. Here's the storage configuration (/etc/pve/storage.cfg):

Code:
dir: local
    path /var/lib/vz
    content images,backup,iso,vztmpl,snippets
    shared 0


lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir


dir: files
    path /mnt/pve/files
    content images,rootdir
    is_mountpoint 1


lvmthin: sdb1-thin
    thinpool sdb1-thin
    vgname sdb1-thin
    content images,rootdir
    nodes asgard
 
The fstab contents are just:
Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=7545-7126 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

... because the mount is implemented directly as a systemd mount unit instead of an fstab entry. My notes indicate I needed to do this because using fstab failed to produce an entry under the pve/Disks/Directory node of the web UI: apparently fstab entries are mapped to units generated in /run/systemd/generator/, and Proxmox only seems to recognise mount units under /etc/systemd/system.
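For reference, the unit at /etc/systemd/system/mnt-pve-files.mount is roughly the following (reconstructed from memory and from the status output below, so treat the filesystem type and options as assumptions rather than the exact file contents):
Code:
[Unit]
Description=Mount storage 'files' under /mnt/pve

[Mount]
# device and mountpoint match the systemctl status output below
What=/dev/sdb2
Where=/mnt/pve/files
# filesystem type assumed to be ext4
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target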
Code:
systemctl status mnt-pve-files.mount
● mnt-pve-files.mount - Mount storage 'files' under /mnt/pve
     Loaded: loaded (/etc/systemd/system/mnt-pve-files.mount; enabled; preset: enabled)
     Active: active (mounted) since Mon 2023-10-09 09:57:56 BST; 3h 30min ago
      Where: /mnt/pve/files
       What: /dev/sdb2
      Tasks: 0 (limit: 18936)
     Memory: 992.0K
        CPU: 20ms
     CGroup: /system.slice/mnt-pve-files.mount

The directory is indeed mounted and the content is there.
 
Hi,
are you sure the disk is there? Check the output of pvesm list files. If the disks are there, please also post the output of pct config <VMID> --current for these containers.
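That is, something along these lines (using the two VMIDs from the error messages):
Code:
pvesm list files
pct config 100 --current
pct config 103 --current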
 
are you sure the disk is there?
I can see the contents of the directory on which it's mounted so yes, I think it must be there (unless I'm misunderstanding).
pvesm list files produces the following, in which I'm assuming I can ignore the perl warnings:

Code:
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LC_ADDRESS = "en_GB.UTF-8",
    LC_NAME = "en_GB.UTF-8",
    LC_MONETARY = "en_GB.UTF-8",
    LC_PAPER = "en_GB.UTF-8",
    LC_IDENTIFICATION = "en_GB.UTF-8",
    LC_TELEPHONE = "en_GB.UTF-8",
    LC_MEASUREMENT = "en_GB.UTF-8",
    LC_TIME = "en_GB.UTF-8",
    LC_NUMERIC = "en_GB.UTF-8",
    LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
Volid                       Format  Type            Size VMID
files:110/vm-110-disk-0.raw raw     rootdir   8589934592 110

pct config 100 --current:
Code:
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LC_ADDRESS = "en_GB.UTF-8",
    LC_NAME = "en_GB.UTF-8",
    LC_MONETARY = "en_GB.UTF-8",
    LC_PAPER = "en_GB.UTF-8",
    LC_IDENTIFICATION = "en_GB.UTF-8",
    LC_TELEPHONE = "en_GB.UTF-8",
    LC_MEASUREMENT = "en_GB.UTF-8",
    LC_TIME = "en_GB.UTF-8",
    LC_NUMERIC = "en_GB.UTF-8",
    LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
arch: amd64
cores: 2
description: # Container 'mim' is the file server%0A
features: nesting=1
hostname: mim
memory: 512
mp0: sdb1-thin:vm-100-disk-2,mp=/srv/sdb1-thin-2,size=1000G
mp1: /mnt/pve/files,mp=/srv/storage,size=0T
mp3: sdb1-thin:vm-100-disk-0,mp=/srv/sdb1-thin,size=1000G
mp4: files:100/vm-100-disk-0.raw,mp=/srv/storage-8gb,size=8G
mp5: /mnt/pve/files,mp=/srv/storage-dir2,size=0T
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=66:66:19:8D:2B:EF,ip=192.168.1.21/24,type=veth
onboot: 1
ostype: debian
rootfs: sdb1-thin:vm-100-disk-1,size=8G
swap: 512
unprivileged: 1

pct config 103 --current:
Code:
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LC_ADDRESS = "en_GB.UTF-8",
    LC_NAME = "en_GB.UTF-8",
    LC_MONETARY = "en_GB.UTF-8",
    LC_PAPER = "en_GB.UTF-8",
    LC_IDENTIFICATION = "en_GB.UTF-8",
    LC_TELEPHONE = "en_GB.UTF-8",
    LC_MEASUREMENT = "en_GB.UTF-8",
    LC_TIME = "en_GB.UTF-8",
    LC_NUMERIC = "en_GB.UTF-8",
    LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
arch: amd64
cores: 4
description: # Container 'mist' provides web and database services%0A
features: nesting=1
hostname: mist
memory: 4096
mp0: sdb1-thin:vm-103-disk-0,mp=/srv/data,size=8G
mp9: files:103/vm-103-disk-2.raw,mp=/srv/storage,backup=1,size=1000G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=DE:85:AB:23:87:9C,ip=192.168.1.22/24,type=veth
onboot: 1
ostype: debian
rootfs: files:103/vm-103-disk-0.raw,size=8G
swap: 512
unprivileged: 1
 
I can see the contents of the directory on which it's mounted so yes, I think it must be there (unless I'm misunderstanding).
I was referring to the LXC's disk, not the physical disk mounted on the mountpoint; sorry for not being clearer.
files:110/vm-110-disk-0.raw raw rootdir 8589934592 110
There seems to be only this one disk present on that storage.
So either the files are not there anymore, or you changed the mountpoint file layout at some point? Maybe you had another disk mounted on top? This is definitely not caused by the IP change, but more likely by some manual config adjustment which was not persistent and got lost during the reboot.

Check the contents of the mountpoint under /mnt/pve/files as suggested by @LnxBil. ls -laRh /mnt/pve/files/images should list all contents recursively.

If the files are not there, did you perform a backup of the containers? You might be able to quickly restore from backup.
 
I was referring to the LXCs disk, not the physical disk mounted on the mountpoint
Ah, sorry - as I said, I'm very inexperienced with Proxmox, and I'm quite shaky on the whole subject of "storage" items.
This is definitely not caused by the IP change, but probably because you had some manual config adjustment which was not persistent and got lost during reboot.
Not being related to the IP change is good. I couldn't see how it could possibly be, but that's all I've changed very recently, apart from creating a little test container this morning (that will be the one with ID 110, which I've just confirmed does start). Still, it's disturbing if I made a change that trashed the containers' disks. The boot history indicates the last previous reboot was in early September, and I don't remember making any changes since then. I'm willing to accept that I must have done, though.

Check the contents of the mountpoint under /mnt/pve/files as suggeset by @LnxBil
Yes, vm-110 is indeed the only one there (sorry @LnxBil, I missed that comment of yours).
If the files are not there, did you perform a backup of the containers? You might be able to quickly restore from backup.
Yes, I have backups of both missing containers, taken prior to upgrading Proxmox from 7.3.6 to 8.0. Can I assume they will be compatible with 8.0, and that restoring from them won't affect any stored data?
 
Not being related to the IP change is good. I couldn't see how it could possibly be, but that's all I've changed very recently, apart from creating a little test container this morning (that will be the one with ID 110, which I've just confirmed does start). Still, it's disturbing if I made a change that trashed the containers' disks. The boot history indicates the last previous reboot was in early September, and I don't remember making any changes since then. I'm willing to accept that I must have done, though.
Could it be the other way around, i.e. the storage was not mounted correctly, so the disks are actually located below the mountpoint? You did correctly set the `is_mountpoint` flag, but it might nevertheless be worth checking.
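One way to check would be roughly the following, assuming nothing is currently using the storage (e.g. CT 110 is shut down); the unit name is the mount unit from your earlier post:
Code:
# stop the mount so anything hidden underneath the mountpoint becomes visible again
systemctl stop mnt-pve-files.mount
ls -laRh /mnt/pve/files/images
# remount afterwards
systemctl start mnt-pve-files.mount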

Yes, I have a backup of both missing containers, I took them prior to upgrading Proxmox from 7.3.6 to 8.0. Can I assume that they will be compatible with 8.0, and that restoring from them won't affect any stored data?
Yes, backups are compatible, but after the restore the LXCs will be in the state they were in at the time of the backup. Alternatively, instead of restoring over the existing container, you can restore to a different VMID and recover the disks from that container by placing them on the correct storage and renaming the disk images.
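As a rough sketch of that second approach (the archive name and the spare VMID 200 are placeholders; adjust them to your actual backup file):
Code:
# restore the backup of CT 103 to a spare VMID on the 'files' storage
pct restore 200 /var/lib/vz/dump/vzdump-lxc-103-<timestamp>.tar.zst --storage files
# check which restored image corresponds to which mount point
pct config 200
# move/rename each needed image so it matches what CT 103's config expects, e.g.
mkdir -p /mnt/pve/files/images/103
mv /mnt/pve/files/images/200/vm-200-disk-0.raw /mnt/pve/files/images/103/vm-103-disk-0.raw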
 
Could it be the other way around and the storage has not been mounted correctly, so the disks are located below the mountpoint?
I'm not really sure I understand what that means, but I searched for vm*.raw from root and the only one found was the 110 for the test container I created yesterday.

I recovered the disks by following your suggestion of restoring to a different VMID (thanks for that tip) and I'm up and running again, albeit with some work needed to bring things up to date. I have good backups of all my data so nothing irrecoverable has been lost, but it would clearly have been better if I had taken more recent backups of the containers themselves; lesson learned there. I'm still baffled by the disappearance of the disk files though. The containers were both working properly right up to the reboot of the pve yesterday, so presumably I did something stupid in the configuration that meant they weren't persisted (and probably still aren't). Are there any typical newbie mistakes that could cause that? I guess I should take new container backups today, then restart pve again and watch what happens.

Thanks (both) for all the help with this, it's much appreciated.
 
I'm not really sure I understand what that means, but I searched for vm*.raw from root and the only one found was the 110 for the test container I created yesterday.
What I meant here is that if the storage was not mounted correctly when the container disks were created, the disks will have been placed in the corresponding folder on the root filesystem. If the storage is then mounted afterwards, the disks underneath are no longer visible, but they are still present. However, you have the is_mountpoint option set, which signals that this should be a mountpoint and is checked by the storage plugin.

I'm still baffled by the disappearance of the disk files though. The containers were both working properly right up to the reboot of the pve yesterday, so presumably I did something stupid in the configuration that meant they weren't persisted (and probably still aren't). Are there any typical newbie mistakes that could cause that? I guess I should take new container backups today, then restart pve again and watch what happens.
The only thing I see that could have changed is the mountpoint. As stated, either the disk was not mounted correctly before, or the wrong disk got mounted. Have you checked that /dev/sdb2 is the correct disk and partition? lsblk -o +FSTYPE,LABEL,UUID or the output of ls -l /dev/disk/by-id/ might give you a clue. Also check with mount which disk is mounted where and whether that is what you expect.
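That is, something along these lines (the grep pattern is just your mountpoint):
Code:
lsblk -o +FSTYPE,LABEL,UUID
ls -l /dev/disk/by-id/
mount | grep /mnt/pve/files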
 
Well, I've got everything working again. Just wanted to report back that I've rebooted my Proxmox machine without problems - the containers come back up again without losing their virtual disks. My conclusion is that I must have done something stupid previously.

As I now know, my virtual disks are stored on the Directory storage that's being used by one of the containers that's acting as a file server. I vaguely recall wanting to make sure they were somewhere that wouldn't be wiped by a new Proxmox installation, but I'm not sure this location is the most sensible choice. Can I safely move them elsewhere, and is there a conventional "best practice" for where they should be stored?
 
Well, I've got everything working again. Just wanted to report back that I've rebooted my Proxmox machine without problems - the containers come back up again without losing their virtual disks. My conclusion is that I must have done something stupid previously.

As I now know, my virtual disks are stored on the Directory storage that's being used by one of the containers that's acting as a file server. I vaguely recall wanting to make sure they were somewhere that wouldn't be wiped by a new Proxmox installation, but I'm not sure this location is the most sensible choice. Can I safely move them elsewhere, and is there a conventional "best practice" for where they should be stored?
I am not sure I understand your question correctly, but a typical setup might put the Proxmox VE installation on dedicated disks, while keeping the VM/CT images on either local storage or some form of shared storage. But this hugely depends on available hardware and requirements.

I strongly suggest setting up a backup solution that performs periodic backups. That will protect you against data loss far better than just putting the virtual disks on a different physical disk.
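For example, a periodic backup job can be configured under Datacenter -> Backup in the web UI; the command-line equivalent is vzdump, roughly like this (the target storage 'local' is only an example - any storage with 'backup' content works):
Code:
# back up CTs 100 and 103, compressed with zstd, to the 'local' storage
vzdump 100 103 --storage local --compress zstd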
 
Fair point about good backups making the location of the virtual disks less of a concern. I have good backups of all my data but haven't really worried too much about system backups, as systems can always be reinstalled - though it's certainly much easier to restore from a backup than to rebuild. I'll read up on Proxmox backups and snapshots and see how I can include them in my regular routines. Thanks again for your help.
 
