Renaming node -> volumes not reachable

jlgarnier

Dear Community,

On my lab servers, I decided to rename one node which already had running VMs. I followed the instructions given in this video: https://www.youtube.com/watch?v=2NzLYKRVRtk and managed to perform the renaming without any problem.

However, I have some issues with existing volumes. Here's the current storage configuration:

[Screenshot: current storage configuration]

What happens:
  • I have a 'backups' storage used to store daily backups, which is no longer reachable (msg: error listing snapshots - 400 Bad Request (500)).
  • The LVM disk displays a hierarchy still based on the old node name (pve) for /dev/sda3.
  • The LVM-Thin disk has a 'data' volume which is still attached to the previous name (volume group = pve).
  • I also have a local-lvm volume which is not reachable either (msg: no such logical volume NEWNAME/data (500))...
As a result, Proxmox Backup Server doesn't work as expected and previous backups are not accessible. Unfortunately, I used PBS to back up the VMs I wanted to move to another node...


What's your advice to fix this? It looks like the renaming process has missed something...

Thanks in advance for any help!
 
Hi,
please share the output of the following commands:
Code:
pveversion -v
cat /etc/pve/storage.cfg
lvs
vgs
pvs
 
Hi @fiona,

Here are the outputs of the commands above:

Code:
root@LAB-server1:~# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.158-2-pve)
pve-manager: 7.4-19 (running version: 7.4-19/f98bf8d4)
pve-kernel-5.15: 7.4-15
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-4
pve-kernel-5.15.158-2-pve: 5.15.158-2
pve-kernel-5.15.158-1-pve: 5.15.158-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.3
libpve-apiclient-perl: 3.2-2
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.3.0
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-4
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.7-1
proxmox-backup-file-restore: 2.4.7-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.2
proxmox-widget-toolkit: 3.7.4
pve-cluster: 7.3-3
pve-container: 4.4-7
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+3
pve-firewall: 4.3-5
pve-firmware: 3.6-6
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.10-1
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-6
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.15-pve1

Code:
root@LAB-server1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname LAB-server1
        content rootdir,images

zfspool: data
        pool data
        blocksize 64k
        content rootdir,images
        mountpoint /data
        nodes LAB-server1
        sparse 1

pbs: backups
        datastore datastore1
        server 192.168.1.101
        content backup
        fingerprint 60:be:b6:bb:85:56:ef:17:3a:d6:6a:48:d8:31:cc:23:ff:16:34:e7:2b:0a:47:d8:07:66:5f:b0:2c:e8:d7:d1
        prune-backups keep-all=1
        username root@pam

Code:
root@LAB-server1:~# lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-a-tz-- <64.49g             0.00   1.60                            
  root pve -wi-ao----  29.50g                                                    
  swap pve -wi-ao----   8.00g


Code:
root@LAB-server1:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree 
  pve   1   3   0 wz--n- <118.74g 14.75g


Hope this helps! Thank you for your help!
 
Regarding the local-lvm storage, you should be able to just edit the storage configuration and use vgname pve instead of vgname LAB-server1. However, it seems there are no logical volumes in that thin pool yet. If that is not what you expect, please also share the output of lsblk -f.
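For reference, the edited stanza in /etc/pve/storage.cfg should then look roughly like this (a sketch based on your posted configuration, assuming the thin pool is still called data):
Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images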

Regarding the backup storage, does the PBS server boot up correctly? Are the fingerprint and IP in the configuration correct?

P.S.: Proxmox VE 7 has been end-of-life since the end of July; please consider upgrading (after sorting out the current issue):
https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
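The pve7to8 checklist script that ships with pve-manager gives a quick overview of what would need attention before the upgrade, e.g.:
Code:
pve7to8 --full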
 
Thanks Fiona!

Here's the command output:
Code:
root@LAB-server1:~# lsblk -f
NAME FSTYPE FSVER LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINT
sda                                                                           
├─sda1
│                                                                             
├─sda2
│    vfat   FAT32       D361-23D9                               510.7M     0% /boot/efi
└─sda3
     LVM2_m LVM2        BO76C1-5wC1-gh2f-L2eI-kN61-flfX-N4fh8C                
  ├─pve-swap
  │  swap   1           9a3265c7-8a29-4713-a30d-1c680791b146                  [SWAP]
  ├─pve-root
  │  ext4   1.0         132acdd1-8341-4923-87dc-618fa6793aa3      5.7G    75% /
  ├─pve-data_tmeta
  │                                                                           
  │ └─pve-data
  │                                                                             
  └─pve-data_tdata
                                                                              
    └─pve-data
                                                                                
sdb                                                                           
├─sdb1
│    zfs_me 5000  data  10730570118294481340                                  
└─sdb9
                                                                              
sdc                                                                           
├─sdc1
│    zfs_me 5000  data  10730570118294481340                                  
└─sdc9
                                                                              
sdd                                                                           
├─sdd1
│    zfs_me 5000  data  10730570118294481340                                  
└─sdd9
                                                                              
sde                                                                           
└─sde1
     ext4   1.0         b4d11d35-086a-489b-b5ac-15e1d0ce1f9d                  
zd0                                                                           
├─zd0p1
│    ntfs         System Reserved
│                       EA9C4AD09C4A96CB                                      
├─zd0p2
│    ntfs               4A364B4F364B3B69                                      
└─zd0p3
     ntfs               FCBA901FBA8FD514                                      
zd16                                                                          
zd32                                                                          
├─zd32p1
│                                                                             
└─zd32p2
     ext4   1.0         276d2bce-3c80-4989-a147-3e7128e034d6                  
zd48                                                                          
zd64                                                                          
zd80                                                                          
└─zd80p1
     ext4   1.0   omvdata
                        2b1e0d40-0983-460d-abc7-8c2a8952e92a                  
zd96                                                                          
├─zd96p1
│    ext4   1.0         22dd937d-ec61-4d98-8aa9-719e9b444811                  
├─zd96p2
│                                                                             
└─zd96p5
     swap   1           8d627166-e3e1-4def-85a0-f41c32e519b9                  
zd112
│                                                                             
├─zd112p1
│                                                                             
├─zd112p2
│    vfat   FAT32       EA27-7129                                             
└─zd112p3
     LVM2_m LVM2        KYiEat-Kvas-BA80-CzBb-T2Fp-VfTs-oRyNBu                
zd128
│                                                                             
├─zd128p1
│    ext4   1.0         d9427948-9580-42c0-9f46-932f0f60bdde                  
├─zd128p2
│                                                                             
└─zd128p5
     swap   1           a42d68a3-a4e1-462a-81ae-23d63be02b10

Changing the storage config fixed the issue and I can now access the lvm-thin volume.

Regarding the PBS server, everything boots as expected, I have access to the console, the IP looks good but I don't know how to verify the fingerprint: where can I find this value?

Thanks for your help!
 
Code:
sde                                                                           
└─sde1
     ext4   1.0         b4d11d35-086a-489b-b5ac-15e1d0ce1f9d
So apart from the disks with LVM (which also contains the root filesystem of the host as a logical volume) and ZFS, you also have this one with an ext4 file system. To see what's on it, you'll need to mount it. To use it in Proxmox VE, you can add it as a directory storage after creating an fstab entry (or systemd mount unit) to make the mount persistent. After adding it, it's recommended to use pvesm set <name of the storage> --is_mountpoint 1, then Proxmox VE will check if it's correctly mounted before writing to it.
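A rough sketch of those steps (the mount point /mnt/usbdisk and the storage name usbdisk are just examples; the UUID is the one shown for sde1 in your lsblk output):
Code:
# create a mount point and mount the ext4 partition
mkdir -p /mnt/usbdisk
mount /dev/sde1 /mnt/usbdisk
# make the mount persistent via /etc/fstab
echo 'UUID=b4d11d35-086a-489b-b5ac-15e1d0ce1f9d /mnt/usbdisk ext4 defaults,nofail 0 2' >> /etc/fstab
# add it as a directory storage and mark it as a mount point
pvesm add dir usbdisk --path /mnt/usbdisk --content backup,iso,vztmpl
pvesm set usbdisk --is_mountpoint 1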

Changing the storage config fixed the issue and I can now access the lvm-thin volume.

Regarding the PBS server, everything boots as expected, I have access to the console, the IP looks good but I don't know how to verify the fingerprint: where can I find this value?
In the Dashboard, there is a Show Fingerprint button. Can you ping the IP of the PBS from the Proxmox VE host? Does the datastore exist, and is it accessible inside PBS?
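If you prefer the command line, something like the following should also work for those checks (a sketch; the IP is the one from your storage.cfg, and the storage name is backups):
Code:
# on the Proxmox VE host: connectivity and storage status
ping -c 3 192.168.1.101
pvesm status --storage backups
# on the PBS server: print the certificate fingerprint to compare with storage.cfg
proxmox-backup-manager cert info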
 
Hi Fiona, thanks for the information!

I suspect sde1 is a USB disk which is used by a VM (Open Media Vault) as video file storage, so I won't bother you with this...

Regarding PBS:
  • The fingerprint has the correct value in storage.cfg.
  • PVE is able to ping PBS.
  • The datastore cannot be accessed from PBS: selecting datastore1 (in PBS GUI) returns "unable to open chunk store 'datastore1' at "/mnt/datastore/backups/datastore1/.chunks" - No such file or directory (os error 2)".

What else can I check here?

Thanks in advance for your help!
 
  • The datastore cannot be accessed from PBS: selecting datastore1 (in PBS GUI) returns "unable to open chunk store 'datastore1' at "/mnt/datastore/backups/datastore1/.chunks" - No such file or directory (os error 2)".
Then that needs to be fixed. Please check the disks inside PBS, similar to how you did on the host, and share the output of cat /etc/proxmox-backup/datastore.cfg and findmnt /mnt/datastore/datastore1 (to see if something is mounted there).
 
Hi Fiona,

Here's the output of the various commands you requested, run on the PBS server, probably too much...

Code:
root@pbs:~# cat /etc/proxmox-backup/datastore.cfg
datastore: datastore1
        comment 
        gc-schedule daily
        path /mnt/datastore/backups/datastore1

root@pbs:~# lvs
File descriptor 21 (/var/log/proxmox-backup/tasks/CB/UPID:pbs:00000235:00002ECB:00000000:67499A3C:termproxy::root@pam:) leaked on lvs invocation. Parent PID 40339: -bash
  LV   VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root pbs -wi-ao---- 23.75g                                                    
  swap pbs -wi-ao---- <3.88g  

root@pbs:~# vgs
File descriptor 21 (/var/log/proxmox-backup/tasks/CB/UPID:pbs:00000235:00002ECB:00000000:67499A3C:termproxy::root@pam:) leaked on vgs invocation. Parent PID 40339: -bash
  VG  #PV #LV #SN Attr   VSize   VFree
  pbs   1   2   0 wz--n- <31.50g 3.87g

root@pbs:~# lsblk -f
NAME         FSTYPE      FSVER    LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
sda                                                                                           
├─sda1                                                                                        
├─sda2       vfat        FAT32          EA27-7129                                             
└─sda3       LVM2_member LVM2 001       KYiEat-Kvas-BA80-CzBb-T2Fp-VfTs-oRyNBu                
  ├─pbs-swap swap        1              643e1a88-c67c-400b-b289-2cde8241ea8f                  [SWAP]
  └─pbs-root ext4        1.0            7dc549ca-a5b4-4613-9e1a-8342c4bb06b9     16.5G    24% /
sr0          iso9660              PBS   2021-07-12-17-27-57-00

The command findmnt /mnt/datastore/datastore1 doesn't return anything.

Thanks again for your help!
 
What does ls -al /mnt/datastore/datastore1 say? It seems like you don't have additional disks attached to the PBS, so if nothing is in the directory currently, the data is elsewhere. If you can't find it, you can still re-create the datastore for future backups.
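If it comes to that, re-creating it could look roughly like this on the PBS side (a sketch; the path is the one from your datastore.cfg, and note this only sets up a fresh, empty chunk store, it does not bring back old backups):
Code:
# remove the stale datastore entry (configuration only, existing data is not touched)
proxmox-backup-manager datastore remove datastore1
# re-create it, which initializes an empty chunk store at the given path
proxmox-backup-manager datastore create datastore1 /mnt/datastore/backups/datastore1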
 
Surprisingly...

Code:
root@pbs:~# ls -al /mnt/datastore/datastore1
ls: cannot access '/mnt/datastore/datastore1': No such file or directory

I SSHed into the PBS server and confirmed there's no datastore1 folder. There is, however, a 'backups' folder, but it's empty. This folder was normally used by PVE as the storage for VM backups and should not be empty! Actually, I was expecting to move VMs to another node by restoring those backups...

I agree the data must be elsewhere, but how can I find its location? I've explored /dev/disk/by-uuid, but to no avail...

What's your opinion: is this issue related to the renaming operation, or is there some other cause? PBS was working pretty well before this operation...
 
There is, however, a 'backups' folder, but it's empty.
Did you check with ls -al?
This folder was normally used by PVE as the storage for VM backups and should not be empty! Actually, I was expecting to move VMs to another node by restoring those backups...
Do you mean for non-PBS backups or for PBS backups? In the latter case, it never was a folder on the host, just a storage definition for how to connect to the PBS.
I agree the data must be elsewhere, but how can I find its location? I've explored /dev/disk/by-uuid, but to no avail...
Please check the PBS VM's Task History. Did it ever have other disks connected?
What's your opinion: is this issue related to the renaming operation, or is there some other cause? PBS was working pretty well before this operation...
I don't know what exact commands you used, but if done correctly then no, a rename operation should not change the storage definitions or VM configurations.
 
Did you check with ls -al?
Yes, this folder is empty on the PBS server.

Do you mean for non-PBS backups or for PBS backups? In the latter case, it never was a folder on the host, just a storage definition for how to connect to the PBS.
Actually, there's a /mnt/datastore/backups folder on the PBS server, and a /mnt/pve/backups folder on the PVE, so I may have mistaken one for the other. However, both are completely empty (ls -al).

Please check the PBS VM's Task History. Did it ever have other disks connected?
It seems some operations were correctly performed in the past 30 days...
[Screenshot: PBS VM task history]
I don't know what exact commands you used, but if done correctly then no, a rename operation should not change the storage definitions or VM configurations.
Here are the commands I used:
Code:
nano /etc/hosts -> change <old-name> to <new-name>
nano /etc/hostname -> change <old-name> to <new-name>
nano /etc/postfix/main.cf -> if required
hostnamectl set-hostname <new-name>
systemctl restart pveproxy

SSH into <new-name>
systemctl restart pvedaemon
ls /etc/pve/nodes
cp -R /etc/pve/nodes/pve /root/pvebackup -> don't take a chance
mv /etc/pve/nodes/<old-name>/lxc/* /etc/pve/nodes/<new-name>/lxc
mv /etc/pve/nodes/<old-name>/qemu-server/* /etc/pve/nodes/<new-name>/qemu-server
rm -r /etc/pve/<old-name>
reboot

SSH into <new-name>
nano /etc/pve/storage.cfg -> change <old-name> to <new-name>

Seems rather harmless...
 
Yes, this folder is empty on the PBS server.


Actually, there's a /mnt/datastore/backups folder on the PBS server, and a /mnt/pve/backups folder on the PVE, so I may have mistaken one for the other. However, both are completely empty (ls -al).


It seems some operations were correctly performed in the past 30 days...
Please share the full systems logs/journal for the backup server and for the Proxmox VE node from before the rename until now.

Code:
nano /etc/pve/storage.cfg -> change <old-name> to <new-name>
That explains why the volume group name was incorrect. You only want to change references to the node.
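To illustrate with your configuration (the annotations are only for illustration, they are not valid storage.cfg syntax): the nodes property refers to the node name and is what needed updating, while vgname refers to the LVM volume group, which is unaffected by a rename and should have stayed pve.
Code:
zfspool: data
        nodes LAB-server1     <- node name: update this after a rename
        ...

lvmthin: local-lvm
        vgname pve            <- LVM volume group: unrelated to the node name, leave as-is
        ...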

The other commands don't affect any disk-related settings.
 
Please share the full systems logs/journal for the backup server and for the Proxmox VE node from before the rename until now.
Hi Fiona!
Can you please tell me exactly which files would be helpful? All I can find is a /var/log/pve/tasks folder with dozens of log files in it, both for PVE and PBS...

That explains why the volume group name was incorrect. You only want to change references to the node.
Is there a way to fix this?

Thanks for your help!
 
Can you please tell me exactly which files would be helpful? All I can find is a /var/log/pve/tasks folder with dozens of log files in it, both for PVE and PBS...
You can access the journal with the journalctl command, e.g. journalctl --since=2024-11-14 to get the journal since November 14th, and journalctl --since=2024-11-14 > /tmp/journal.txt to dump it into a file.
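To gather both logs, for example (the file names are just suggestions):
Code:
# on the Proxmox VE host
journalctl --since=2024-11-14 > /tmp/journal-pve.txt
# inside the PBS VM
journalctl --since=2024-11-14 > /tmp/journal-pbs.txt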

Is there a way to fix this?
You already did:
Changing the storage config fixed the issue and I can now access the lvm-thin volume.
 
After the reboot of the host, there are already errors about the missing chunk store:
Code:
Nov 15 08:51:59 LAB-server1 pvestatd[2556]: proxmox-backup-client failed: Error: unable to open chunk store 'datastore1' at "/mnt/datastore/backups/datastore1/.chunks" - No such file or directory (os error 2)
Did you modify anything about the PBS VM configuration around the time of the rename? What about the logs inside the PBS VM?
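For example, something along these lines inside the PBS VM (a sketch; the service names are the defaults of a PBS installation):
Code:
# messages from the PBS services since around the time of the rename
journalctl -u proxmox-backup -u proxmox-backup-proxy --since=2024-11-14
# kernel messages about disks and mounts
journalctl -k --since=2024-11-14 | grep -i -e mount -e 'sd[a-z]'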