Big mess-up - lost NVMe drive where Proxmox is installed

dazed&confused

New Member
Sep 8, 2023
Hi all,

I had a perfectly working Proxmox setup with three drives:

- a 500GB NVMe drive, onto which Proxmox was installed (nvme0)
- a 128GB SATA SSD (sda)
- a 2TB USB external HDD for storage (sdb)

I wanted to pass the 128GB SSD through to a Windows VM.

However, I accidentally passed through the wrong drive, picking the NVMe drive from the dropdown menu. That obviously didn't work, so I removed it again, but now it seems PVE can't see the NVMe drive at all. Strangely, I can still log in to the PVE web UI, but cannot access the shell there. I can SSH into it, though.
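For context, I did this through the GUI, but as I understand it the CLI equivalent would be something like the following (VM ID 100 and the by-id path are examples, not my exact values):

root@proxmox:~# ls /dev/disk/by-id/                                  # find the stable ID of the target disk
root@proxmox:~# qm set 100 -scsi1 /dev/disk/by-id/nvme-EXAMPLE_DISK  # attach the whole disk to VM 100
root@proxmox:~# qm set 100 --delete scsi1                            # detach it again (what I did to undo the mistake)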

On trying to access the shell, I get: Connection failed (Error 500: unable to open file '/var/tmp/pve-reserved-ports.tmp.239070' - Read-only file system).

How can I fix this?
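In case it helps with diagnosis: since even /usr/bin binaries fail with Input/output error, I'm assuming only shell builtins still run over SSH. A builtins-only check of whether the kernel still exposes the NVMe device would, I think, look like this:

root@proxmox:~# echo /dev/nvme*                                          # 'echo' is a bash builtin; prints the pattern unchanged if no device node matches
root@proxmox:~# while read -r l; do echo "$l"; done < /proc/partitions   # 'read' is a builtin too, and /proc lives in RAM, not on the failed disk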


output of lsblk:

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 119.2G  0 disk
|-sda1   8:1    0  1007K  0 part
|-sda2   8:2    0   512M  0 part
`-sda3   8:3    0 118.7G  0 part
sdb      8:16   0   1.8T  0 disk
`-sdb1   8:17   0   1.8T  0 part

output of df -h:
-bash: /usr/bin/df: Input/output error

output of lspci:
-bash: /usr/bin/lspci: Input/output error

output of lsusb:
-bash: /usr/bin/lsusb: Input/output error

output of mount:

sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=8080692k,nr_inodes=2020173,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=1623340k,mode=755,inode64)
/dev/mapper/pve-root on / type ext4 (ro,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=22845)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
/dev/nvme0n1p2 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=1623340k,nr_inodes=405835,mode=700,inode64)
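If I'm reading that right, / (pve-root, ext4) got remounted read-only by the kernel after an I/O error (the errors=remount-ro option), while /boot/efi on nvme0n1p2 is only still listed because it was mounted at boot. I assume a remount attempt would look like this, though I'd expect it to fail if the device has really dropped off the bus:

root@proxmox:~# mount -o remount,rw /    # will fail, or flip back to ro, if the underlying NVMe is gone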

Output from vgcfgrestore pve:

root@proxmox:~# vgcfgrestore pve
Volume group pve has active volume: data.
Volume group pve has active volume: data.
Volume group pve has active volume: data_tdata.
Volume group pve has active volume: data_tmeta.
Volume group pve has active volume: root.
Volume group pve has active volume: swap.
Volume group pve has active volume: vm-100-disk-0.
Volume group pve has active volume: vm-100-disk-1.
Volume group pve has active volume: vm-100-state-Snapshot1.
Volume group pve has active volume: vm-100-state-Snapshot_16-5-23.
Volume group pve has active volume: vm-101-disk-0.
Volume group pve has active volume: vm-101-disk-1.
Volume group pve has active volume: vm-101-state-Snapshot1.
Volume group pve has active volume: vm-103-disk-0.
WARNING: Found 14 active volume(s) in volume group "pve".
Restoring VG with active LVs, may cause mismatch with its metadata.
Do you really want to proceed with restore of volume group "pve", while 14 volume(s) are active? [y/n]: y
WARNING: Couldn't find device with uuid j4QiP4-p6Lz-AJ7K-2HYH-7cVm-Rnyy-0wUja0.
Consider using option --force to restore Volume Group pve with thin volumes.
Restore failed.
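As I understand that output, vgcfgrestore refuses because the physical volume with that UUID (the NVMe) is missing, and the --force variant it mentions would only rewrite LVM metadata, not recover data. For the record, I believe it would be invoked like this (/etc/lvm/backup/pve is the default backup location, so that's an assumption):

root@proxmox:~# vgcfgrestore --force -f /etc/lvm/backup/pve pve    # restores metadata only; does nothing for data on a missing PV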

other commands tried:

root@proxmox:~# lvscan
root@proxmox:~# pvscan
No matching physical volumes found
root@proxmox:~# vgscan
root@proxmox:~#
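Since pvscan finds no physical volumes at all, the kernel has apparently lost the NVMe device itself. Two things I'm considering to make it reappear without a reboot (just a sketch, I don't know whether my controller supports this; echo is a builtin, so these should run despite the I/O errors):

root@proxmox:~# echo 1 > /sys/bus/pci/rescan                       # re-scan the PCI bus for the NVMe controller
root@proxmox:~# echo 1 > /sys/class/nvme/nvme0/reset_controller    # reset the controller, if the nvme0 node still exists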

Any help greatly appreciated!
 
root@proxmox:~# fsck /dev/sad1
fsck from util-linux 2.36.1
fsck: /usr/sbin/fsck.ext2: execute failed: Input/output error

I also tried sda1 (instead of sad1)...

root@proxmox:~# fsck /dev/sda1
fsck from util-linux 2.36.1
fsck: /usr/sbin/fsck.ext2: execute failed: Input/output error

:(
 
The boot partition probably still works, but I think you've lost the rest of your disk.
You can try a live distribution and attempt to mount the file system on the drive from there, but beyond that I don't know a solution. Sorry.
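If the disk is detected again in the live environment, the LVM part would roughly be this (untested sketch; mounting read-only to be safe):

vgscan                            # look for the pve volume group
vgchange -ay pve                  # activate its logical volumes
mount -o ro /dev/pve/root /mnt    # mount the root LV read-only and copy your data off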
 
