How to recover /var/lib/pve-cluster/config.db?

Fathi

Hello,
I was unlucky setting up a cluster of two Proxmox nodes, so I looked for ways to revert and partially followed the howto located at https://elkano.org/blog/how-to-reset-cluster-configuration-in-proxmox-2/
Now, in the Proxmox GUI (version 4.1), none of my VMs or LXC containers appear.
Is there any way to recover or recreate the files under /var/lib/pve-cluster/?

My VMs are running and I can SSH into them, but if the Proxmox host reboots, I suppose none of them will be accessible.

TIA.
 
No, that does not help him; you cannot just "reinstall" a whole node with running VMs. I have the same situation right now, and nobody wants to give any actually useful direction...
 
No, that does not help him; you cannot just "reinstall" a whole node with running VMs. I have the same situation right now, and nobody wants to give any actually useful direction...

I'd suggest restoring your backup of that file :p

Since you are asking, I presume you do not have a backup, so reinstall your node (or install in a VM and only replace the file itself) and back up your server.
 
Same situation here.
I successfully restored the files in /var/lib/pve-cluster/ from backup but still can't see any VM in the web UI.
Any suggestions?
 
Hi,

I am in a similar situation: a broken Proxmox host that I'd like to wipe, but I want to recover a few of the VMs before that.
I found that /etc/pve/ is empty, and that's because it was part of a cluster. The other computer in the cluster no longer exists.

I found that my VM configs are stored in /var/lib/pve-cluster/config.db

But how do you get that data out (rather than restoring the whole thing, which would probably break my new Proxmox server)?

Salvaging a Proxmox server is quite difficult! It seems to rely on backups rather than offering an easy way to move only the stuff you want to keep over to the new server.
 
But how do you get that data out (rather than restoring the whole thing, which would probably break my new Proxmox server)?
It's a SQLite database, so you can just use SQL to get the data out.
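For example, something along these lines should work (a rough sketch, assuming the sqlite3 CLI is installed; the table layout is described further down in this thread, and '100.conf' is just a placeholder file name):

Code:
# list every file path stored in the config database
sqlite3 /var/lib/pve-cluster/config.db "SELECT name FROM tree;"

# print the content of one stored file, e.g. a VM config called 100.conf
sqlite3 /var/lib/pve-cluster/config.db "SELECT CAST(data AS TEXT) FROM tree WHERE name = '100.conf';"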

Salvaging a Proxmox server is quite difficult! It seems to rely on backups rather than offering an easy way to move only the stuff you want to keep over to the new server.
The easy way is this database file. There is also a chapter about the filesystem in the documentation. Migrating the data off your PVE box is harder, depending on the storage technology used.

I am in a similar situation: a broken Proxmox host that I'd like to wipe, but I want to recover a few of the VMs before that.
I found that /etc/pve/ is empty, and that's because it was part of a cluster. The other computer in the cluster no longer exists.
Most probably a quorum issue; have you tried setting the expected number of nodes to 1?

Code:
pvecm expected 1
 
You can also use "pmxcfs -l" to start the pmxcfs that provides /etc/pve in "local mode" (don't do this in a real cluster that you want to continue using, but in your situation, for extracting config files and then reinstalling the whole system, it is okay).
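Roughly like this (a sketch; the directory under /etc/pve/nodes/ matches your hostname, so <nodename> is a placeholder):

Code:
# start pmxcfs in local mode so /etc/pve gets mounted again
pmxcfs -l

# copy the guest configs somewhere safe
mkdir -p /root/pve-config-backup
cp -a /etc/pve/nodes/<nodename>/qemu-server /root/pve-config-backup/
cp -a /etc/pve/nodes/<nodename>/lxc /root/pve-config-backup/

# stop the local-mode pmxcfs again when done
killall pmxcfs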
 
Hello

I opened /var/lib/pve-cluster/config.db with
https://sqlitebrowser.org/, an open-source SQLite browser.

The format isn't too difficult: there is one table called tree.
Each entry has a name and a data field.

Name is the file name you would expect, and data is the content of the text file.


[Screenshot: the tree table opened in DB Browser for SQLite]
 
In this case I could not use pmxcfs -l because when I boot that system, there is no network anymore; it doesn't see the network card at all.
So I am extracting the data from a Debian install before overwriting the system with the Proxmox installer.

This system was running on a hard drive with LVM2.
I am hoping that the Proxmox installer only wipes pve/root and pve/data but not the VMs themselves!

The stuff on there is not too important, but I'd like to save it if possible.
However, I have not found how to turn an LVM volume into a file that I could scp over the network.
 
The installer will format the whole disk you select as the target; there is no partial re-install!
 
However, I have not found how to turn an LVM volume into a file that I could scp over the network.
In Unix and Linux, everything is a file. The virtual disk content is a block device, which normally resides in /dev/<volume group name>/<volume name>. This has to be read with dd first; you cannot scp the device directly.
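Something along these lines (a sketch; the VG/LV names and the target host are placeholders):

Code:
# read the logical volume into a plain file, then copy it over the network
dd if=/dev/pve/vm-100-disk-0 of=/root/vm-100-disk-0.raw bs=1M status=progress
scp /root/vm-100-disk-0.raw root@newserver:/root/

# or stream it directly, without needing local space for the image file
dd if=/dev/pve/vm-100-disk-0 bs=1M | ssh root@newserver "dd of=/root/vm-100-disk-0.raw bs=1M"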

In this case I could not use pmxcfs -l because when I boot that system, there is no network anymore; it doesn't see the network card at all.
So I am extracting the data from a Debian install before overwriting the system with the Proxmox installer.
You could also just chroot into the other system and try to fix the problems you have at the moment.
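Roughly like this (a sketch, assuming the old root filesystem is on /dev/pve/root):

Code:
# mount the old root and the pseudo filesystems, then enter it
mkdir -p /mnt/oldroot
mount /dev/pve/root /mnt/oldroot
for d in dev proc sys; do mount --bind /$d /mnt/oldroot/$d; done
chroot /mnt/oldroot /bin/bash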
 
Okay, do any of you have the step-by-step command line to do what you are describing? Supposedly I can just restore /etc/pve/nodes from the config.db, but there are no steps provided above for doing that.
 
Please describe first what your issue is... config.db is "just" an SQLite database with the file tree for /etc/pve. If the DB is not corrupt, it should be possible to restore file contents, but it might be important to know why you want to do that, to prevent causing more problems ;)
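To check whether the DB is corrupt, something like this should do (assuming the sqlite3 CLI is available):

Code:
# should print "ok" if the database file is intact
sqlite3 /var/lib/pve-cluster/config.db "PRAGMA integrity_check;"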
 
@croso

I did not make a script but I ended up doing it manually.

Well, I only saved the logical volumes


Here is roughly how I did it:

I created a new logical volume in the destination volume group; this LV had the exact same number of bytes as the source.

Then I ran dd to copy the old LV onto the new one.

Then I used cmp to compare them bit by bit.

And then I deleted the old LV.


The following script isn't tested, but it's close to what I did.

Code:
# Define variables
src_vg="vg_source"
src_lv="lv_name"
dest_vg="vg_dest"
dest_lv="new_lv_name"
thin_pool="thin_pool_name"  # Name of the thin pool in the destination VG

# Get the size of the original logical volume in bytes
orig_lv_size=$(lvs --noheadings --units b -o lv_size --nosuffix /dev/${src_vg}/${src_lv} | tr -d ' ') && \

# Create the new thin logical volume with the exact same size in the destination thin pool
lvcreate --thinpool ${thin_pool} --virtualsize ${orig_lv_size}B -n ${dest_lv} ${dest_vg} && \

# Copy the contents of the original logical volume to the new thin logical volume
dd if=/dev/${src_vg}/${src_lv} of=/dev/${dest_vg}/${dest_lv} bs=1M status=progress conv=notrunc && \

# Compare the two logical volumes
cmp /dev/${src_vg}/${src_lv} /dev/${dest_vg}/${dest_lv} && \

# Delete the original logical volume if the comparison is successful
lvremove -y /dev/${src_vg}/${src_lv}


The LVM devices look something like this:

Code:
root@proxmox:~# vgscan
  Found volume group "pve" using metadata type lvm2
root@proxmox:~# pvscan
  PV /dev/nvme0n1p3   VG pve             lvm2 [<446.13 GiB / 16.00 GiB free]
  Total: 1 [<446.13 GiB] / in use: 1 [<446.13 GiB] / in no VG: 0 [0   ]
root@proxmox:~# lvscan
  ACTIVE            '/dev/pve/data' [<319.61 GiB] inherit
  ACTIVE            '/dev/pve/swap' [8.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [96.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-100-disk-0' [32.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-101-disk-0' [4.00 MiB] inherit
  ACTIVE            '/dev/pve/vm-101-disk-1' [32.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-102-disk-0' [64.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-103-disk-0' [32.00 GiB] inherit
  ACTIVE            '/dev/pve/iso' [200.00 GiB] inherit
  ACTIVE            '/dev/pve/ct-template' [200.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-104-disk-0' [4.00 MiB] inherit
  ACTIVE            '/dev/pve/vm-104-disk-1' [32.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-105-disk-0' [4.00 MiB] inherit
  ACTIVE            '/dev/pve/vm-105-disk-1' [4.00 MiB] inherit
  ACTIVE            '/dev/pve/vm-105-disk-2' [64.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-107-disk-0' [64.00 GiB] inherit
  ACTIVE            '/dev/pve/stable-diffusion-models' [200.00 GiB] inherit
  ACTIVE            '/dev/pve/stable-diffusion-extensions' [200.00 GiB] inherit
  ACTIVE            '/dev/pve/stable-diffusion-webui' [200.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-115-disk-0' [128.00 GiB] inherit
  ACTIVE            '/dev/pve/large-language-models' [200.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-116-disk-0' [128.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-125-disk-0' [128.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-130-disk-0' [32.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-131-disk-0' [32.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-999-disk-0' [2.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-1001-disk-0' [2.00 GiB] inherit

I'm surprised LVM2 does not come with simple copylv / comparelv / movelv commands that do all of this without having to think about it as much!

Then you need to recreate the VMs and LXC containers on the new server.

Instead of using the SQLite file, which maybe is more like a cache than a settings store?

I think you can get away with using the .conf files

they are in

/etc/pve/nodes/proxmox/
/etc/pve/nodes/proxmox/lxc/

I'm not sure if these can be copied directly into the new Proxmox server, but a workaround could be to ask ChatGPT to give you command lines to recreate them on the new server. Here is an example.

It's probably going to work unless you had some really weird settings.


Code:
Please give me the command line to re-create this VM on proxmox

cat /etc/pve/nodes/proxmox/qemu-server/104.conf
agent: 1
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 8
cpu: x86-64-v2-AES
efidisk0: local-lvm:vm-104-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ide2: none,media=cdrom
machine: q35
memory: 12000
meta: creation-qemu=9.0.2,ctime=1725882871
name: ubuntutest
net0: virtio=BC:24:11:29:DD:4A,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-104-disk-1,cache=writeback,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=18f20ed7-4c67-43e2-84bc-69935ccc3338
sockets: 1
vmgenid: 9d09e21f-f29e-4361-9623-96852245b81e

To re-create the VM with the configuration from the 104.conf file on Proxmox, you can use the qm command in Proxmox's CLI. Below is the corresponding command line to replicate the VM configuration:

bash

qm create 104 \
  --name ubuntutest \
  --bios ovmf \
  --boot 'order=scsi0;ide2;net0' \
  --cores 8 \
  --cpu x86-64-v2-AES \
  --efidisk0 local-lvm:vm-104-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M \
  --memory 12000 \
  --machine q35 \
  --net0 virtio=BC:24:11:29:DD:4A,bridge=vmbr0,firewall=1 \
  --numa 0 \
  --ostype l26 \
  --scsi0 local-lvm:vm-104-disk-1,cache=writeback,iothread=1,size=32G \
  --scsihw virtio-scsi-single \
  --smbios1 uuid=18f20ed7-4c67-43e2-84bc-69935ccc3338 \
  --sockets 1 \
  --agent 1 \
  --vmgenid 9d09e21f-f29e-4361-9623-96852245b81e

Explanation:

    104: VM ID (qm create 104)
    --name: VM name (ubuntutest)
    --bios: BIOS type (ovmf)
    --boot: Boot order (order=scsi0;ide2;net0)
    --cores: Number of CPU cores (8)
    --cpu: CPU type (x86-64-v2-AES)
    --efidisk0: EFI disk parameters
    --memory: RAM size (12000 MB)
    --machine: Machine type (q35)
    --net0: Network configuration (virtio network interface, MAC address, bridge, and firewall)
    --numa: NUMA setting (disabled, set to 0)
    --ostype: OS type (l26 for Linux)
    --scsi0: Main disk parameters (32GB disk with writeback cache and iothread enabled)
    --scsihw: SCSI hardware type (virtio-scsi-single)
    --smbios1: SMBIOS UUID (18f20ed7-4c67-43e2-84bc-69935ccc3338)
    --sockets: Number of CPU sockets (1)
    --agent: QEMU guest agent enabled (1)
    --vmgenid: VM Generation ID (9d09e21f-f29e-4361-9623-96852245b81e)

Make sure the VM ID (104) is not in use, and ensure that the local-lvm storage pool and other resources are available in your Proxmox setup.
 
Instead of using the SQLite file, which maybe is more like a cache than a settings store?

It is exactly the same as what is mounted at runtime into /etc/pve; there are some extra dotfiles there that you are not after, but that's it.

I think you can get away with using the .conf files

they are in

/etc/pve/nodes/proxmox/
/etc/pve/nodes/proxmox/lxc/

You have the configs in /etc/pve/local (for the current node it is symlinked) and then lxc or qemu-server (for VMs). Yes, you can just copy them out; you need to have it mounted though, for which you need to have some PVE install working.

If you have a dump from a .db file, e.g. see here: https://forum.proxmox.com/threads/backup-cluster-config-pmxcfs-etc-pve.154569/

Then you can just insert selected files (table rows) into your new blank database; literally trim that .dump file and keep only what you need in it. But you would need the key numbers to refer to the right directory. I would just dump the files into an extra directory from which you can copy them around as needed. If you want to peek at the content of the files for some reason before inserting them (you can always delete them once mounted), you can do so with:

SQL:
# This is a line in the .dump file that is of interest
INSERT INTO tree VALUES(599700,12,599709,1,1727345714,8,'lrm_status',X'7b22726573756c7473223a7b7d2c227374617465223a22776169745f666f725f6167656e745f6c6f636b222c2274696d657374616d70223a313732373334353731342c226d6f6465223a22616374697665227d');

Bash:
# Let's see what is inside the file
xxd -r -p <<< X'7b22726573756c7473223a7b7d2c227374617465223a22776169745f666f725f6167656e745f6c6f636b222c2274696d657374616d70223a313732373334353731342c226d6f6465223a22616374697665227d'; echo

{"results":{},"state":"wait_for_agent_lock","timestamp":1727345714,"mode":"active"}

I'm not sure if these can be copied directly into the new Proxmox server, but a workaround could be to ask ChatGPT

They can. You literally asked for a command to create something you already had at hand.
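For example, with the new node up and /etc/pve mounted, dropping the old config into the matching directory registers the guest (a sketch; the file names and staging path are placeholders, and the referenced disks/storage must of course exist on the new node):

Code:
# /etc/pve/qemu-server and /etc/pve/lxc link to the local node's config directories
cp /root/pve-config-backup/qemu-server/104.conf /etc/pve/qemu-server/
cp /root/pve-config-backup/lxc/105.conf /etc/pve/lxc/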
 
Ok, so is there anything other than the ####.conf files that needs to be copied over to the new server when doing a manual copy?

I have two scenarios that I'm most concerned with.


One is: I am on the new Proxmox server, I have attached the storage drive from the broken server, and the question is how to migrate the old VMs and CTs to the new server.

The solution to that seems to be: look in the ###.conf file for the LVM (or other) volumes, copy and compare them to the new storage, and then copy the ###.conf file to the right place.

Would that have been all that it takes to do it?


And the second scenario would be to have a "single file intermediary"

so create a file that contains each of the partitions and includes the ###.conf file

and then zip all that up into a single file. That single file I could then copy to any Proxmox server, do the reverse process, and have my old VM running.


(And optionally, how to copy a live snapshot with this method, for instance for software with a cloud restriction that would be snapshotted with the restriction pre-fulfilled, for long-term preservation.)

From that I would like to make a script for each possible scenario, as a single-action command like:


CopyMachineFrom /mnt/old-pve-root/ /mnt/new-pve-root/ 101 102 103 104
This would copy the given machine IDs (whether they are VMs or CTs, including live snapshots) to the new Proxmox server, reliably and in a single command.

CopyMachineToFile /mnt/old-pve-root/ /root/ 101 102 103 104
This would create zip files 101.zip, 102.zip, 103.zip, 104.zip, one per CT/VM, and each zip file would contain everything needed to re-create the machine on any other Proxmox server.

Then you would use
CopyMachineFromFile /mnt/new-pve-root/ /root/ 101.zip 102.zip 103.zip 104.zip
And that would, in a single action, create the VM/CT exactly as it was when CopyMachineToFile was run on the source Proxmox server.


I suspect the Proxmox cluster tooling can already do something like this, so maybe I don't need to code that up from scratch?

I asked ChatGPT: https://chatgpt.com/share/66f7745f-4660-8005-9bc6-9f8e4982289e


It suggests using vzdump / qmrestore / pct restore
 
Ok, so is there anything other than the ####.conf files that needs to be copied over to the new server when doing a manual copy?
That depends on how you set up your system. Other potentially necessary files are (typical paths are sketched right after this list):
  • the firewall settings per VM and/or datacenter firewall settings
  • storage setup / layout e.g. nfs/cifs mounts
  • users and domain logon options (e.g. OpenID, LDAP)
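These typically live at the following paths inside the mounted /etc/pve (a sketch, not exhaustive):

Code:
/etc/pve/firewall/cluster.fw   # datacenter firewall settings
/etc/pve/firewall/<vmid>.fw    # per-VM/CT firewall settings
/etc/pve/storage.cfg           # storage setup / layout (nfs/cifs/lvm/...)
/etc/pve/user.cfg              # users, groups, ACLs
/etc/pve/domains.cfg           # authentication realms (OpenID, LDAP, ...)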

One is: I am on the new Proxmox server, I have attached the storage drive from the broken server, and the question is how to migrate the old VMs and CTs to the new server.

The solution to that seems to be: look in the ###.conf file for the LVM (or other) volumes, copy and compare them to the new storage, and then copy the ###.conf file to the right place.

Would that have been all that it takes to do it?
Yes. That simple.


And the second scenario would be to have a "single file intermediary"

so create a file that contains each of the partitions and includes the ###.conf file

and then zip all that up into a single file. That single file I could then copy to any Proxmox server, do the reverse process, and have my old VM running.

That would be the built-in backup & restore, yet manually using vzdump without a working PVE may not work.


(And optionally, how to copy a live snapshot with this method, for instance for software with a cloud restriction that would be snapshotted with the restriction pre-fulfilled, for long-term preservation.)

From that I would like to make a script for each possible scenario, as a single-action command like:

CopyMachineFrom /mnt/old-pve-root/ /mnt/new-pve-root/ 101 102 103 104
Use the built-in backup & restore via vzdump, or use PBS.
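A minimal sketch of that built-in path (the guest ID, paths, and archive names are just examples):

Code:
# back up guest 101 into a single archive file
vzdump 101 --dumpdir /mnt/backup --mode stop --compress zstd

# restore it on the new server
qmrestore /mnt/backup/vzdump-qemu-101-<timestamp>.vma.zst 101      # for a VM
pct restore 101 /mnt/backup/vzdump-lxc-101-<timestamp>.tar.zst     # for a container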


I suspect the Proxmox cluster tooling can already do something like this, so maybe I don't need to code that up from scratch?
I asked ChatGPT: https://chatgpt.com/share/66f7745f-4660-8005-9bc6-9f8e4982289e
It suggests using vzdump / qmrestore / pct restore
Yes, as do I.
 
