Problems with ZFS - urgent help

indianetwork
New Member
Apr 5, 2021
Hi, we have a Proxmox server configured with two 1 TB hard drives. It had been running great with 5 VMs on it until I restarted. Now all the VMs have lost their hard drives. I can still log in to the server, but none of the VMs is working. The server seems to have been configured with ZFS.
ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 9 Apr 5 15:41 ata-Patriot_P200_1TB_AA000000000000000359 -> ../../sdb
lrwxrwxrwx 1 root root 10 Apr 5 15:41 ata-Patriot_P200_1TB_AA000000000000000359-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Apr 5 15:41 ata-Patriot_P200_1TB_AA000000000000000359-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Apr 5 15:41 ata-Patriot_P200_1TB_AA000000000000000359-part3 -> ../../sdb3
lrwxrwxrwx 1 root root 9 Apr 5 15:41 ata-Patriot_P200_1TB_AA000000000000000511 -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 5 15:41 ata-Patriot_P200_1TB_AA000000000000000511-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 5 15:41 ata-Patriot_P200_1TB_AA000000000000000511-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 5 15:41 ata-Patriot_P200_1TB_AA000000000000000511-part3 -> ../../sda3
lrwxrwxrwx 1 root root 12 Apr 5 15:41 lvm-pv-uuid-bDwkZ8-4stS-Y9YK-t6f9-MBS1-3P0n-aC288S -> ../../zd80p2
lrwxrwxrwx 1 root root 12 Apr 5 15:41 lvm-pv-uuid-cTK3Fy-0IgA-rffA-3tip-iY1w-6zle-eZ96Jo -> ../../zd16p2
lrwxrwxrwx 1 root root 12 Apr 5 15:41 lvm-pv-uuid-vVgfDD-YXQ0-bVQm-MDss-teas-2Wrh-Gwcp9j -> ../../zd96p2

zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
rpool                            379G   544G    96K  /rpool
rpool/ROOT                       216G   544G    96K  /rpool/ROOT
rpool/ROOT/pve-1                 216G   544G  61.2G  /
rpool/ROOT/pve-1/vm-100-disk-0   155G   698G    56K  -
rpool/ROOT/pve-1/vm-104-disk-0    56K   544G    56K  -
rpool/ROOT/pve-1/vm-105-disk-0    56K   544G    56K  -
rpool/ROOT/pve-1/vm-106-disk-0    56K   544G    56K  -
rpool/data                       162G   544G    96K  /rpool/data
rpool/data/vm-100-disk-0        61.0G   544G  61.0G  -
rpool/data/vm-101-disk-0        54.7G   544G  54.7G  -
rpool/data/vm-103-disk-0        16.1G   544G  16.1G  -
rpool/data/vm-104-disk-0          56K   544G    56K  -
rpool/data/vm-104-disk-1        2.63G   544G  2.63G  -
rpool/data/vm-105-disk-0        13.9G   544G  13.9G  -
rpool/data/vm-106-disk-0        14.2G   544G  14.2G  -
--------------------------------------------------------------------------
How can I attach the hard drives back to each of the VMs? I'd appreciate any pointers in the right direction. Sam
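(For mapping the entries above to device nodes: every vm-*-disk-* dataset is a zvol, and ZFS's udev rules create symlinks under /dev/zvol that point at the matching zdNN block device, so the device-to-VM mapping can be read off directly. A minimal check, assuming the default udev links are present:)

Bash:
ls -l /dev/zvol/rpool/data/    # each symlink points at the zdNN node backing that VM disk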
 
Not sure if it is related, but did you do an update?

I updated yesterday, then rebooted, and now my ZFS setup is dead.

:(
Damon
 
* What's the output of `zpool status` (please use code-tags for command-line output)?
* What's the precise error message you get when trying to start a VM?
* Is there anything in the system journal that might point to the issue? (The commands below should gather all three.)
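A minimal sketch to collect all of the above in one go (VM ID 100 is just an example; substitute one that fails to start):

Bash:
zpool status -v                          # pool health and any per-file errors
qm start 100                             # reproduce the failure and note the exact error printed
journalctl -b --no-pager | tail -n 50    # recent journal entries from the current boot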

I hope this helps!
 
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:16:02 with 0 errors on Sun Mar 14 00:40:03 2021
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            ata-Patriot_P200_1TB_AA000000000000000511-part3  ONLINE       0     0     0
            ata-Patriot_P200_1TB_AA000000000000000359-part3  ONLINE       0     0     0

errors: No known data errors

I had 6 VMs running before the upgrade, and the hard drive is missing for all of them. They fail to start. The Proxmox server itself runs fine, but none of the VMs are running.
The GUI shows a boot disk size of 0.
 
Run these in a terminal:
Bash:
pvesm status
qm list
qm config 100
qm config 101
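For comparison, a VM whose disk is still attached would have a drive line in its config, something like the following (`local-zfs` is the default name Proxmox gives a ZFS-backed storage and is an assumption here; `pvesm status` lists the real names, and the size value is only illustrative):

Code:
scsi1: local-zfs:vm-100-disk-0,size=100G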
 
root@ns:/# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 Blueonx              stopped    16384              0.00 0
       101 Win10                stopped    4096               0.00 0
       102 BYX2                 running    4096             100.00 32706
       103 Listserv             stopped    4096               0.00 0
       104 BitnamiNodeJs        stopped    2048               0.00 0
       105 mysqlapi             stopped    2094               0.00 0
       106 CyberPanel           stopped    4096             150.00 0

root@ns:/# qm config 100
agent: 1
boot: ncd
bootdisk: scsi1
cores: 2
ide2: none,media=cdrom
memory: 16384
name: Blueonx
net0: virtio=A6:96:EE:09:AB:2D,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=d09091f2-6abf-40f8-998e-9b7dec7d2475
sockets: 1
vmgenid: f1477a07-60f8-485b-bb68-361feadfe2eb
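Worth noting: the config above sets bootdisk: scsi1, yet there is no scsi1: line at all, which matches the 0.00 boot disk size in qm list. In other words, the disk references were lost from the configs, not the data. A sketch of how a volume could be re-attached, assuming the ZFS storage is named local-zfs (check pvesm status for the actual name) and that rpool/data/vm-100-disk-0 is the right volume for this VM:

Bash:
qm rescan --vmid 100                          # re-scan storages; orphaned volumes show up as unusedN entries
qm set 100 --scsi1 local-zfs:vm-100-disk-0    # re-attach the volume as scsi1 (bootdisk already points there)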
 
lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE
sda           8:0    0 953.9G  0 disk
├─sda1        8:1    0  1007K  0 part
├─sda2        8:2    0   512M  0 part
└─sda3        8:3    0 953.4G  0 part
sdb           8:16   0 953.9G  0 disk
├─sdb1        8:17   0  1007K  0 part
├─sdb2        8:18   0   512M  0 part
└─sdb3        8:19   0 953.4G  0 part
zd0         230:0    0    32G  0 disk
zd16        230:16   0   100G  0 disk
zd32        230:32   0   200G  0 disk
├─zd32p1    230:33   0    50M  0 part
├─zd32p2    230:34   0 199.5G  0 part
└─zd32p3    230:35   0   505M  0 part
zd48        230:48   0   100G  0 disk
├─zd48p1    230:49   0     1G  0 part
└─zd48p2    230:50   0    99G  0 part
zd64        230:64   0  14.9G  0 disk
└─zd64p1    230:65   0  14.9G  0 part
zd80        230:80   0    50G  0 disk
├─zd80p1    230:81   0     1M  0 part
└─zd80p2    230:82   0    50G  0 part
zd96        230:96   0    32G  0 disk
zd112       230:112  0   100G  0 disk
├─zd112p1   230:113  0     1G  0 part
└─zd112p2   230:114  0    99G  0 part
zd128       230:128  0   250G  0 disk
├─zd128p1   230:129  0   800M  0 part
└─zd128p2   230:130  0 249.2G  0 part
zd144       230:144  0   100G  0 disk
├─zd144p1   230:145  0   800M  0 part
└─zd144p2   230:146  0  99.2G  0 part
zd160       230:160  0    50G  0 disk
zd176       230:176  0   150G  0 disk
zd192       230:192  0   150G  0 disk
zd208       230:208  0   150G  0 disk
 
Has anyone else had the same problem after the upgrade? It seems all the data is still there, but I do not know how to link the zd drives back to the VMs. If you have done this before, please help.
 
Did you get anywhere? What about error messages when attempting to start?

In the GUI, does the Hardware panel for one of the VMs show anything for hard disks?

qm rescan

and check /etc/pve/qemu-server/
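As a sketch of that suggestion (qm rescan without --vmid walks every VM; the grep pattern is just a quick, approximate way to see which configs still reference any drive):

Bash:
qm rescan                                                      # re-scan all storages; found volumes are added as unusedN entries
grep -H 'scsi\|virtio\|ide\|sata' /etc/pve/qemu-server/*.conf  # list drive entries per VM config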
 
