Where are the VMs?

Airw0lf

Member
Hi *,

See the attached screenshot: I can see that the disk images are still there.
And the VMs are still running, since the applications are working as expected.

But still - where are the VM configs residing? Any suggestions to get them back?

This situation happened while playing around with a cluster config of two identical nodes.
Something went wrong and I followed the recommended steps to remove everything.
I'm referring to the steps between the sections "Separate a Node Without Reinstalling" and "Quorum".
Most likely due to: "After making absolutely sure that you have the correct node name, you can simply remove the entire directory recursively from /etc/pve/nodes/NODENAME."
I did this on both nodes. But given the results I guess I should only have done that on the node-to-be-added, not on the existing one that is running all the VMs.

Any suggestions for getting these back?


With warm regards - Will

=====

Screenshot.png
 
VM configuration is found in /etc/pve/ - https://pve.proxmox.com/pve-docs/pmxcfs-plain.html
Specifically - /etc/pve/qemu-server

Make sure you are very careful about your next step, so as not to wipe away what may remain of your configuration on your system.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox

Mmm - but /etc/pve/qemu-server points to nodes/<node>/qemu-server - see also the results below:

root@tank:~# ls -l /etc/pve/qemu-server
lrwxr-xr-x 1 root www-data 0 Jan 1 1970 /etc/pve/qemu-server -> nodes/tank/qemu-server

Perhaps you can give me a few more hints, so that I don't make matters worse?
 
Most likely due to: "After making absolutely sure that you have the correct node name, you can simply remove the entire directory recursively from /etc/pve/nodes/NODENAME."
I did this on both nodes.
Re-reading your original post - I think you are SOL.
Hint for the future: always make a copy of the thing you are removing to a safe place.
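In that spirit, a minimal sketch of the copy-first habit. The paths here are throwaway temp directories so the sketch runs anywhere; on a real node, TARGET would be something like /etc/pve/nodes/NODENAME:

```shell
#!/bin/sh
# "Copy before you remove" pattern. TARGET stands in for the directory
# you are about to delete; BACKUP is a timestamped safe copy.
set -eu

TARGET=$(mktemp -d)                      # stand-in for /etc/pve/nodes/NODENAME
echo "vmid: 100" > "$TARGET/100.conf"    # pretend a VM config lives here

BACKUP=/tmp/pve-backup-$(date +%Y%m%d%H%M%S)
cp -a "$TARGET" "$BACKUP"                # keep a copy first...
rm -r "$TARGET"                          # ...then remove

ls "$BACKUP"                             # the config survives in the backup
```

Even when you are "absolutely sure", the copy costs a few kilobytes and saves threads like this one.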

Is there anything anywhere on your systems that resembles the configuration file?
find / -name 100.conf

You can always try to recreate the file: grab the output of "ps -efwww" into a file. Create a sample VM (don't reuse the ID you had before). Look at what's in its config file, and change the lines to reflect the proper names. Analyze the "ps" output for any advanced options.


 
In the linked documentation it is written:
nodes/<NAME>/qemu-server/<VMID>.conf

I think you deleted them as they were at /etc/pve/nodes/<NAME>/qemu-server/<VMID>.conf
 
Yes - I know these config files were at nodes/<name>/...

However, I deleted (with rm -r) ./nodes/<name>.
Meaning the folder qemu-server and everything under it doesn't exist anymore.

Hence my post: is there a way to recover these from a different file/database/location?
And if yes - how would I do that?
 
Everything in /etc/pve is mapped from the cluster DB, and that gets instantly synced between all nodes.

I think that if you didn't create snapshots or backups yourself, these configs are deleted now.
As far as I know, PVE won't keep a copy of them somewhere else.
 
Hence my post: is there a way to recover these from a different file/database/location?
If you didn't back them up, and you ran the "find" command on both nodes and it came up empty - they are gone.
And if yes - how would I do that?
I've mentioned one way above - it's not point-and-click and requires time and patience. You may find it easier to rebuild your environment if it contains no valuable data.


 

Thanks everybody for the quick and to-the-point response - really appreciated!!!

The find command came up empty on both nodes.

The output of "ps -efwww" helps me to the extent that I recognize the VMs and their configs (there are only 5 or 6).
This would allow me to re-create the config files of the VMs - right?
Also because the LVM volumes of the VMs are still available?

What does such a config file look like? And once I have it, what do I do with it?
Ideally to the extent that the VMs are revived?

With warm regards - Will
 
I've covered it already in #5; below are some expanded high-level pseudo steps. You will need to figure out the in-betweens and finer details yourself. If the data is critical to you - build another box and try the steps there.

- create a new VM that has a unique, never-used-before ID, e.g. 500
- create a disk or two for this VM on the storage you normally use
- view the configuration file
- copy it somewhere, e.g. /tmp, and rename it to the old VMID, e.g. 100
- edit it and change everything that looks VM specific
- put it into the correct location in /etc/pve
- shut down and restart VM 100, fix bugs and errors you made in the config file
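The copy-and-edit steps can be sketched like this, run against a throwaway mock of /etc/pve/qemu-server so nothing real is touched. The file contents, VMIDs and names are illustrative, not your actual config:

```shell
#!/bin/sh
# Mock walk-through of "clone a fresh VM's config, rename it to the old VMID".
set -eu

PVE=$(mktemp -d)/qemu-server   # stand-in for /etc/pve/qemu-server
mkdir -p "$PVE"

# 1) pretend we created template VM 500 via the GUI; PVE wrote 500.conf
cat > "$PVE/500.conf" <<'EOF'
name: template500
memory: 4096
scsi0: local-lvm:vm-500-disk-0,size=32G
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
EOF

# 2) copy it to the old VMID and edit everything VM specific
cp "$PVE/500.conf" "$PVE/100.conf"
sed -i -e 's/template500/restored100/' \
       -e 's/vm-500-disk-0/vm-100-disk-0/' "$PVE/100.conf"

cat "$PVE/100.conf"
```

On the real system you would also point scsi0 at the existing disk instead of the freshly created one, and delete the throwaway disk of VM 500 afterwards.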

A lot will depend on the VM OS; e.g. Linux is not as particular as Windows. PCI passthrough may also throw a wrench into things.

Good luck


 
And while your VMs are running - make a backup. They may never come back up again, and you are one accidental reboot away from potentially never seeing them again.



I already feel pretty stupid - so the next one doesn't really matter... :-)

How do I do that? Making a backup of running VMs via the CLI?
Because the web UI is not available.
 
Your options are limited to logging in to the VM and saving data with whatever is available inside the OS to some external storage.
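For example, from inside the guest, something along these lines. The source and destination are mock temp directories here so the sketch runs anywhere; on a real VM they would be your data directory and some external mount (NFS share, USB disk, etc.):

```shell
#!/bin/sh
# In-guest data backup: archive a directory to "external" storage.
set -eu

SRC=$(mktemp -d)                 # stand-in for e.g. /var/lib/mysql
echo "precious" > "$SRC/data.txt"
DEST=$(mktemp -d)                # stand-in for an external mount

# dated archive of everything under SRC
tar -czf "$DEST/vm-data-$(date +%F).tar.gz" -C "$SRC" .
ls "$DEST"
```

It's not a VM-level backup, but it gets the data off a machine that may not survive the next reboot.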



Ok - let's imagine I have a SATA SSD available for a fresh install (alongside the existing disk).
Imagine I would install Debian and Proxmox on top of that.
Would that improve my chances of recovering the LVM disks of the existing VMs?
 
Your virtual disks shouldn't be the problem; they should still be there on your VM storage. What you are missing are the configs, so there is no VM that could make use of the existing virtual disks. Accessing the data on your virtual disks shouldn't be a problem either - you could even mount them locally on your PVE host (when the VM isn't running, so you don't corrupt your data) and access the files stored on those filesystems. But your new VM configs should match your old configs as closely as possible, or you won't be able to boot from those virtual disks.
 
Below is the output of "ps -efwww" for one VM called sdn with VMID 101. It looks like all VMs are there and in the same format.

/usr/bin/kvm -id 101 -name sdn -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/101.qmp,server=on,wait=off -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/101.pid -daemonize -smbios type=1,uuid=d4a19c6d-6a0a-41be-bf7b-7d119c8d85fa -smp 2,sockets=1,cores=2,maxcpus=2 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vnc unix:/var/run/qemu-server/101.vnc,password=on -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt -m 4096 -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device vmgenid,guid=ebe10070-c0e9-4892-bccd-ff383315df67 -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device VGA,id=vga,bus=pci.0,addr=0x2 -chardev socket,path=/var/run/qemu-server/101.qga,server=on,wait=off,id=qga0 -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on -iscsi initiator-name=iqn.1993-08.org.debian:01:624bbd8fbdf -drive if=none,id=drive-ide2,media=cdrom,aio=io_uring -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=100 -device virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5 -drive file=/dev/pve/vm-101-disk-0,if=none,id=drive-scsi0,cache=unsafe,format=raw,aio=io_uring,detect-zeroes=on -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=101 -netdev type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=4E:B6:5A:A4:DD:FE,netdev=net0,bus=pci.0,addr=0x12,id=net0 -rtc base=localtime -machine type=pc+pve0
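The config-relevant flags can be pulled out of such a saved ps line mechanically. A small sketch, where KVM_LINE is a shortened copy of the output above:

```shell
#!/bin/sh
# Extract the flags that map to config entries from a saved kvm command line.
KVM_LINE='/usr/bin/kvm -id 101 -name sdn -smp 2,sockets=1,cores=2,maxcpus=2 -m 4096 -drive file=/dev/pve/vm-101-disk-0,if=none,id=drive-scsi0,cache=unsafe,format=raw'

# one flag+value per line makes the settings easy to read off
echo "$KVM_LINE" | grep -oE -- '-(id|name|smp|m|drive) [^ ]+'
```

Running this over the full line for each VM gives a checklist of what the recreated configs must contain.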

Based on this I would say the VM config for this sdn/ID 101 is like:
  • 1 CPU socket, 2 cores
  • SCSI drive for boot that points to /dev/pve/vm-101-disk-0
  • IDE port with nothing
  • IDE port with CD-ROM
  • network card
  • no passthrough or anything advanced(?)
The only thing I don't recognize is the memory part. But I guess that is like "put in something" and is not critical for booting as such - just for the application started later.

To me this looks like enough for a manual re-creation of the original config. At least to the extent that the VMs can boot.
My doubts are more along the lines of pointing to "/dev/pve/vm-101-disk-0":

I don't recall having this in the fields when recreating a config => where do I put this?
Assuming a new Proxmox install on a separate disk from where these VM volumes are residing.
Meaning I would have access to the web UI as well as the CLI (currently there is only CLI).

Correct? Anything else to add?
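For reference, a hypothetical reconstruction of /etc/pve/qemu-server/101.conf from the flags in the ps output might look like the following. The storage name (local-lvm) and bridge (vmbr0) are guesses that need checking against the actual setup; the disk reference answers the "where do I put /dev/pve/vm-101-disk-0" question, since PVE resolves storage:volume names to device paths itself. The memory line comes from the -m 4096 flag.

```
# hypothetical reconstruction of 101.conf -- verify every line before use
name: sdn
memory: 4096
sockets: 1
cores: 2
cpu: host
agent: 1
scsihw: virtio-scsi-pci
scsi0: local-lvm:vm-101-disk-0,cache=unsafe
ide2: none,media=cdrom
boot: order=ide2;scsi0
net0: virtio=4E:B6:5A:A4:DD:FE,bridge=vmbr0
```

Keeping the original MAC address (as above) matters if the guest OS pins network interface names or DHCP leases to it.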
 
