New installation can't see VM

arky_
Sep 25, 2020
Two of the disks that held my Proxmox installation have failed. The VMs were stored on two other disks configured as a ZFS mirror (one of those disks failed as well).

After reinstalling the system and re-importing the ZFS pool (oldpool) from the surviving drive, I can't see my VMs in the GUI. In the console I can see that they still physically exist.

Question: how do I make all the VMs visible in the GUI again?
And, if possible: how do I remove a VM I no longer need?

Code:
root@proxmox1:~# zfs list
NAME                            USED  AVAIL     REFER  MOUNTPOINT
newpool                        62.3G  1.70T      104K  /newpool
newpool/VMBackup               21.8G  1.70T     21.8G  /newpool/VMBackup
newpool/VMData                 40.6G  1.70T     40.6G  /newpool/VMData
oldpool                         768G   131G      104K  /oldpool
oldpool/ROOT                    114G   131G       96K  /oldpool/ROOT
oldpool/ROOT/pve-1              114G   131G      114G  /
oldpool/data                    447G   131G       96K  /oldpool/data
oldpool/data/vm-101-disk-0      135G   131G      135G  -
oldpool/data/vm-106-disk-0     59.2G   131G     37.1G  -
oldpool/data/vm-106-disk-1       60K   131G       60K  -
oldpool/data/vm-106-state-s1    786M   131G      786M  -
oldpool/data/vm-107-disk-0     1.88G   131G     1.50G  -
oldpool/data/vm-107-disk-1      295M   131G      214M  -
oldpool/data/vm-107-state-d1    312M   131G      312M  -
oldpool/data/vm-107-state-s2    259M   131G      259M  -
oldpool/data/vm-109-disk-1     36.9M   131G     36.2M  -
oldpool/data/vm-109-state-s1   79.8M   131G     79.8M  -
oldpool/data/vm-110-disk-0     13.5G   131G     13.5G  -
oldpool/data/vm-115-disk-0     18.2G   131G     18.2G  -
oldpool/data/vm-115-disk-1     18.1G   131G     18.1G  -
oldpool/data/vm-116-disk-0     12.1G   131G     8.74G  -
oldpool/data/vm-116-state-s1   1.14G   131G     1.14G  -
oldpool/data/vm-117-disk-0     93.1G   131G     48.9G  -
oldpool/data/vm-117-state-s15  5.37G   131G     5.37G  -
oldpool/data/vm-117-state-s16  1.98G   131G     1.98G  -
oldpool/data/vm-117-state-s17  5.93G   131G     5.93G  -
oldpool/data/vm-120-disk-0     15.3G   131G     15.3G  -
oldpool/data/vm-121-disk-0     28.7M   131G     28.4M  -
oldpool/data/vm-121-disk-1     2.22G   131G     1.56G  -
oldpool/data/vm-124-disk-0     15.9G   131G     10.5G  -
oldpool/data/vm-124-state-s1   2.88G   131G     2.88G  -
oldpool/data/vm-124-state-s2   2.70G   131G     2.70G  -
oldpool/data/vm-124-state-s3   2.58G   131G     2.58G  -
oldpool/data/vm-124-state-s4   3.42G   131G     3.42G  -
oldpool/data/vm-127-disk-0     27.3G   131G     27.3G  -
oldpool/data/vm-127-disk-1       76K   131G       76K  -
oldpool/data/vm-128-disk-0     4.19G   131G     3.72G  -
oldpool/data/vm-128-state-S2   1.91G   131G     1.91G  -
oldpool/data/vm-128-state-s1   1.38G   131G     1.38G  -
oldpool/vm-102-disk-0           206G   284G     52.8G  -
rpool                          1.28G   227G      104K  /rpool
rpool/ROOT                     1.24G   227G       96K  /rpool/ROOT
rpool/ROOT/pve-1               1.24G   227G     1.24G  /
rpool/data                       96K   227G       96K  /rpool/data
 
There are two things your VMs consist of:
1.) the zvols your VMs use as virtual disks (these you still have on your "oldpool")
2.) the config files defining what architecture, drives, protocols, devices, and so on your VM should use. These were stored in the "/etc/pve/qemu-server" and "/etc/pve/lxc" folders on your lost system disks.

So basically you only have the virtual disks but lost all the VM configurations. I hope you made backups of your guests (these contain configs + virtual disks), or at least backed up the host's "/etc" folder regularly, because otherwise you can only recreate the config files from scratch, from memory. And if you remember something wrong, your VMs might not be able to boot.
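If the configs really are gone, recreating one by hand looks roughly like this. This is only a sketch: the storage ID "oldpool-data" is made up, and the memory/cores/NIC settings are placeholders that have to match whatever the VM originally used:

```shell
# Register the surviving pool as a storage backend so PVE can use the zvols
# (the storage ID "oldpool-data" is an arbitrary name)
pvesm add zfspool oldpool-data --pool oldpool/data --content images

# Recreate a minimal config for VM 101 from memory; adjust memory, cores,
# ostype, NIC model, etc. to whatever the VM had before
qm create 101 --name restored-vm --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --ostype l26

# Attach the existing zvol as the boot disk and boot from it
qm set 101 --scsi0 oldpool-data:vm-101-disk-0
qm set 101 --boot order=scsi0
```

As soon as a config file exists under /etc/pve/qemu-server, the VM shows up in the GUI; whether it boots depends on how closely the recreated settings match the original ones.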

Btw: How do you damage 3 out of 4 disks at the same time?
 
Maybe you misunderstood me. One day my Proxmox simply stopped working. I had 2 x 256 GB disks in RAID 1 and 2 x 1 TB disks, also in RAID 1. The system disk and one of the data disks (which held, among other things, VMs and ISOs) just died.
I installed the system on two new disks and added two new 2 TB disks, onto which I restored VM backups, etc.
I connected the surviving 1 TB disk to the server, but I don't see any VMs in the GUI, and as I wrote earlier, I can see those VMs in the console and don't know how to get them out of there.
To answer your question: yes, this 1 TB disk is indeed the disk where all the VMs were stored.
Proxmox shows the same amount of used space as before the crash.
I hope my problem is clearer now.
 
You won't see any VMs in the GUI and won't be able to start any LXC/VM, no matter whether you copy over the virtual disks or not, as long as you don't have the corresponding config files in the "/etc/pve/qemu-server" and "/etc/pve/lxc" folders. And you only have access to the old "/etc/pve" folder when booting your PVE with a working pve-cluster service, because the contents of /etc/pve aren't normal files and folders. It is just the mount point of a pmxcfs filesystem, and what is mounted there is actually stored in an SQLite DB. See here: https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)
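You can see this for yourself on any running PVE host. The database path below is the default location used by the pve-cluster service:

```shell
# /etc/pve is not a normal directory but the pmxcfs FUSE mount point
findmnt /etc/pve

# The actual data behind it lives in an SQLite database on the root filesystem
ls -l /var/lib/pve-cluster/config.db
```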
 
Do I understand correctly that if I had the system on one disk (256 GB) and the VMs on the other (1 TB), and the system disk (256 GB) is damaged, there is no way for me to start the VMs?

If I recovered the contents of the system disk (256 GB), what would I have to do to be able to restore the VMs from the 1 TB disk?
 
Do I understand correctly that if I had the system on one disk (256 GB) and the VMs on the other (1 TB), and the system disk (256 GB) is damaged, there is no way for me to start the VMs?
Yup. If you have no backups of the config files stored in "/etc/pve" (or vzdump/PBS backups of your entire guests), you can only try to create new config files based on what you remember the originals looked like. All you have on your 1 TB disk are virtual disks. Everything else a virtual computer consists of (CPU, RAM, disk controller, BIOS/UEFI, chipset, GPU, NIC, protocols, instruction sets, ...) was defined in the config files on your lost 256 GB disks.
You can use the search function to see how other people handled this. Every month someone panics and asks here how to rescue their VMs after the system disk died, because they didn't care about creating backups regularly.
If I recovered the contents of the system disk (256 GB), what would I have to do to be able to restore the VMs from the 1 TB disk?
You can try to recover the contents of your /etc/pve folder. But if you boot from a live Linux, or put the disks in another machine, all you will find there is an empty folder. See here: https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)
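That said, the backing database itself lives on the old root filesystem, so if that disk is still readable you can pull configs straight out of it. A rough sketch, where the mount point /mnt/oldroot and the VMID 101 are assumptions; the table name "tree" matches the pmxcfs schema, but verify it against your PVE version:

```shell
# Mount the old root filesystem read-only, e.g. at /mnt/oldroot, then:

# List every file pmxcfs was tracking on the old system
sqlite3 /mnt/oldroot/var/lib/pve-cluster/config.db "SELECT name FROM tree;"

# Dump one VM config out of the database and into place on the new host
sqlite3 /mnt/oldroot/var/lib/pve-cluster/config.db \
  "SELECT data FROM tree WHERE name = '101.conf';" \
  > /etc/pve/qemu-server/101.conf
```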
 
Thank you for your response. You pointed me in the right direction and I found copies of the VM config files (/etc/pve/nodes/proxmox1/qemu-server) on the second server (because both were in a cluster), and now I have a question.
For example, if I want to restore VM 101 from its copied 101.conf on the new server, do I just copy this file to the same path on the new server?
What do I do if a file with the same name already exists in the new location?
Is there anything else I need to do?
 