Mounting drives after a reload

RCNunya

Renowned Member
Nov 17, 2011
I set this up long ago and have since forgotten everything.
My Proxmox machine started failing to boot. I tried a few repair options I found, but had no luck.
I re-installed the latest 8.4 over the old one and it is booting up now, but I am not able to get the other drives in the system to mount.
Proxmox runs off of an NVME, but I also have an SSD and an HD installed. The SSD ran some containers and the HD contains backups, so I was hoping I would be able to get them back up and going in short order after the reload.
The SSD and HD are XFS drives, and when I try to mount them, I get this error:
mount: /mnt/Storage: wrong fs type, bad option, bad superblock on /dev/sda, missing codepage or helper program, or other error.
dmesg(1) may have more information after failed mount system call.
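
Since the error itself points at dmesg, the usual next step is to look at the kernel log and check what is actually on the disk. A rough sketch, using the device names from this thread; note that the filesystem normally lives on the partition (e.g. /dev/sda1), not on /dev/sda itself:

Bash:
# show recent kernel messages from the failed mount attempt
dmesg | tail -n 20
# report the filesystem signature and UUID on the partition
blkid /dev/sda1
# list all block devices with filesystems, labels and UUIDs
lsblk -f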

Is this just an issue with the file system needing to be added to fstab or something else?

The file system for the NVMe root, as listed in fstab, is ext4.
 
I think I got them added; I have not tried to restore the containers yet. I got the drives mounted and added to Storage as a directory, so they show up in the tree at least.
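
For anyone following along, this is roughly what that step looks like from the CLI; the storage IDs and content types below are made-up examples, not necessarily the ones used here:

Bash:
# register the mounted HD as directory storage for backups ("hd-backup" is an example ID)
pvesm add dir hd-backup --path /mnt/Storage --content backup
# register the mounted SSD for container and VM disks ("ssd-dir" is an example ID)
pvesm add dir ssd-dir --path /mnt/SSD --content rootdir,images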
The result of your request shows -

lsblk -o+FSTYPE,LABEL and cat /etc/fstab
lsblk: and: not a block device
lsblk: cat: not a block device
lsblk: /etc/fstab: not a block device
 
Sorry, got in a cut-and-paste hurry.

lsblk -o+FSTYPE,LABEL
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS FSTYPE LABEL
sda 8:0 0 3.6T 0 disk
└─sda1 8:1 0 3.6T 0 part /mnt/Storage xfs
sdb 8:16 0 465.8G 0 disk
└─sdb1 8:17 0 465.8G 0 part /mnt/SSD xfs
nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:1 0 1007K 0 part
├─nvme0n1p2 259:2 0 1G 0 part /boot/efi vfat
└─nvme0n1p3 259:3 0 930.5G 0 part LVM2_member
├─pve-swap 252:0 0 8G 0 lvm [SWAP] swap
├─pve-root 252:1 0 96G 0 lvm / ext4
├─pve-data_tmeta 252:2 0 8.1G 0 lvm
│ └─pve-data-tpool 252:4 0 794.3G 0 lvm
│ ├─pve-data 252:5 0 794.3G 1 lvm
│ └─pve-vm--100--disk--0 252:6 0 100G 0 lvm
└─pve-data_tdata 252:3 0 794.3G 0 lvm
└─pve-data-tpool 252:4 0 794.3G 0 lvm
├─pve-data 252:5 0 794.3G 1 lvm
└─pve-vm--100--disk--0 252:6 0 100G 0 lvm


cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=6A59-98CA /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
UUID=90db94ef-5f47-409e-8392-77539c04170f /mnt/Storage xfs defaults 0 0
UUID=af90fd83-9494-41b9-a5c3-354a026c794d /mnt/SSD xfs defaults 0 0

I think these parts are probably good now?
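
One way to sanity-check the new fstab entries without rebooting, assuming the mount points already exist:

Bash:
# make systemd pick up the edited fstab
systemctl daemon-reload
# mount everything in fstab that is not already mounted
mount -a
# confirm both filesystems ended up where expected
findmnt /mnt/Storage
findmnt /mnt/SSD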

I got the main docker machine restored and running, and it shows the docker containers up, but I can't connect. Probably a network address issue. I thought I had it set up with a persistent (reserved) address from DHCP, but I could be wrong.
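
A hedged checklist for the connectivity side, run inside the restored Docker host; nothing here is specific to this setup:

Bash:
# what address did the host actually get, and what is its default route?
ip -4 addr show
ip route
# which containers are up and which ports they publish
docker ps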
 
Code blocks would help immensely to make this more readable, as formatting will be preserved.
I actually forgot to include the UUID column in the lsblk command, but seeing that you already use UUIDs in fstab, this looks good to me.
I'd add the nofail option so that if a drive goes missing again you can still boot easily. Changing the last field from 0 to 2 might also make sense so the file system will be checked at boot. For example
Bash:
UUID=90db94ef-5f47-409e-8392-77539c04170f /mnt/Storage xfs defaults,nofail 0 2
For the other issue I'd need a bit more information. Perhaps creating another topic for this would be better.
I like to use dhclient -v to debug DHCP and ss -lntp to check listening ports. Maybe that helps.
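
For reference, a minimal version of that; eth0 is only an assumed interface name, so adjust it to whatever ip link shows:

Bash:
# re-run the DHCP client verbosely on the suspect interface
dhclient -v eth0
# list listening TCP sockets with the owning process
ss -lntp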
 