Moving PBS from VM to LXC - Mounting issues

May 2, 2024
Hi there,

I've been using PBS as a VM in one of my nodes for a while, and I would like to move it to an LXC for better resource management.

For the VM, I managed the passthrough without much of an issue, and then set up the drive in the PBS GUI itself. I am now a bit baffled as to how I should proceed with mounting / passthrough for the LXC.

Specifically, I intended to mount the same drive as the one I used for the PBS VM directly on the Proxmox host, and then pass it through to the LXC using mount points. However, I am stumbling at the first step, as I can't seem to be able to mount the drive on the host.
I added a rather standard fstab entry to mount it, but I get the following error message:

Code:
EXT4-fs (nvme0n1p1): VFS: Can't find ext4 filesystem
I understand this to mean that the partition is not recognized as an ext4 one.
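For reference, the entry I added was essentially of this form (the mount point path is just what I picked):
Code:
# /etc/fstab on the PVE host
/dev/nvme0n1p1  /mnt/pbs-datastore  ext4  defaults  0  2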

Code:
lsblk -f
doesn't show me any fstype (or UUID for that matter), however,

Code:
fsck -N /dev/nvme0n1p1
gives me
Code:
fsck.ext4
as output.

With this, what are my options to passthrough this drive to the LXC?
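For the container itself, what I have in mind is a bind mount point along these lines once the host mount works (container ID and paths here are just placeholders):
Code:
# on the PVE host: bind the host directory into the container
pct set 200 -mp0 /mnt/pbs-datastore,mp=/mnt/datastore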

Thanks!
 
Hi,
how was the disk formatted? Do you maybe have an LVM? Check the output of pvs and lvs.
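That is, something along these lines on the host:
Code:
# list LVM physical volumes and logical volumes, if any
pvs
lvs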
 
Hi,
Thanks, the drive was set up in the PBS VM itself, with the "Initialize Disk with GPT" button. Neither pvs nor lvs shows the drive.

I also note that in the PBS VM the drives are indicated as xfs, but neither ext4 nor xfs works in the fstab configuration. Both give an error message, either
Code:
EXT4-fs (nvme0n1p1): VFS: Can't find ext4 filesystem
or
Code:
XFS (nvme0n1p1): Invalid superblock magic number
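For what it's worth, the xfs indication comes from inside the PBS VM; a quick check along these lines in the guest should confirm it (device names in the guest differ from the host, of course):
Code:
# inside the PBS VM: filesystem type as the guest sees it
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT
findmnt -t xfs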
 
Hi,
Please post the output of lsblk -o +FSTYPE
 
Here is the output on the pve host (where I'm trying to mount the drive):
Code:
sda                            8:0    0 931.5G  0 disk             
├─sda1                         8:1    0  1007K  0 part             
├─sda2                         8:2    0     1G  0 part /boot/efi   vfat
└─sda3                         8:3    0 930.5G  0 part             LVM2_member
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]      swap
  ├─pve-root                 252:1    0    96G  0 lvm  /           ext4
  ├─pve-data_tmeta           252:2    0   8.1G  0 lvm             
  │ └─pve-data-tpool         252:4    0 794.3G  0 lvm             
  │   ├─pve-data             252:5    0 794.3G  1 lvm             
  │   ├─pve-vm--107--disk--0 252:6    0     4G  0 lvm              ext4
  │   ├─pve-vm--106--disk--0 252:7    0     4G  0 lvm              ext4
  │   ├─pve-vm--104--disk--0 252:8    0   100G  0 lvm             
  │   ├─pve-vm--102--disk--0 252:10   0    32G  0 lvm             
  │   ├─pve-vm--110--disk--0 252:11   0    10G  0 lvm              ext4
  │   └─pve-vm--103--disk--0 252:15   0    32G  0 lvm             
  └─pve-data_tdata           252:3    0 794.3G  0 lvm             
    └─pve-data-tpool         252:4    0 794.3G  0 lvm             
      ├─pve-data             252:5    0 794.3G  1 lvm             
      ├─pve-vm--107--disk--0 252:6    0     4G  0 lvm              ext4
      ├─pve-vm--106--disk--0 252:7    0     4G  0 lvm              ext4
      ├─pve-vm--104--disk--0 252:8    0   100G  0 lvm             
      ├─pve-vm--102--disk--0 252:10   0    32G  0 lvm             
      ├─pve-vm--110--disk--0 252:11   0    10G  0 lvm              ext4
      └─pve-vm--103--disk--0 252:15   0    32G  0 lvm             
sdb                            8:16   0 953.9G  0 disk             
└─sdb1                         8:17   0 953.9G  0 part             xfs
mmcblk0                      179:0    0  58.2G  0 disk             
mmcblk0boot0                 179:8    0     4M  1 disk             
mmcblk0boot1                 179:16   0     4M  1 disk             
nvme0n1                      259:0    0   1.8T  0 disk             
├─nvme0n1p1                  259:1    0 931.5G  0 part             
└─nvme0n1p2                  259:2    0 931.5G  0 part

The relevant disk is the last one (nvme0n1); both partitions (nvme0n1p1 and nvme0n1p2) are separately mounted in my PBS VM. I'm now trying to mount each of them on the Proxmox host (I'm aware that I shouldn't try to mount nvme0n1 itself; that's not what I'm trying to do in the fstab).
 
I've been using PBS as a VM in one of my nodes for a while, and I would like to move it to an LXC for better resource management.
Not sure if it is officially supported and internally tested now, as Chris isn't complaining, but previously it wasn't recommended to run PBS in an LXC. It will work, but it wasn't intended to. If there is still no internal testing and no one checks whether an upgrade will break a PBS LXC, that wouldn't be a great option if you want to rely on your backups.
 
The relevant disk is the last one (nvme0n1); both partitions (nvme0n1p1 and nvme0n1p2) are separately mounted in my PBS VM.
Did you pass through the whole disk to the VM, or the partitions separately? Maybe you created another partition table?
 
I passed each partition through separately using
Code:
qm set 102 -scsi2 /dev/disk/by-id/*-part1
qm set 102 -scsi2 /dev/disk/by-id/*-part2

Did I create a separate partition table within each partition? I'm not sure how I could make it visible to the host in that case.
 
Not sure if it is officially supported and internally tested now, as Chris isn't complaining, but previously it wasn't recommended to run PBS in an LXC. It will work, but it wasn't intended to. If there is still no internal testing and no one checks whether an upgrade will break a PBS LXC, that wouldn't be a great option if you want to rely on your backups.
Running PBS inside a VM/CT on the same host always comes at the cost of missing redundancy, if that is not covered by some other means, e.g. a remote sync job... When running PBS inside a container, it shares the kernel with the host, so this could potentially cause issues. But yes, such setups are not consistently tested on our side.

But in general, the recommended way is to have a dedicated host to use for PBS!
 
I passed each partition through separately using
Code:
qm set 102 -scsi2 /dev/disk/by-id/*-part1
qm set 102 -scsi2 /dev/disk/by-id/*-part2

Did I create a separate partition table within each partition? I'm not sure how I could make it visible to the host in that case.
You could check whether that is the case by running gdisk -l /dev/nvme0n1p1; that should show you the partition if one is nested in there. You could use a loop device to access the filesystem on it, but in that case I would recommend syncing your data to some other storage and redoing the partitioning correctly.
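For example, something along these lines should work to check and, if needed, to copy the data off (device names and mount path are just examples, and this assumes the filesystem really sits on a nested partition):
Code:
# on the PVE host: look for a nested partition table inside the partition
gdisk -l /dev/nvme0n1p1

# if one shows up, expose it via a loop device with partition scanning
losetup --find --show --partscan /dev/nvme0n1p1   # prints e.g. /dev/loop0
mkdir -p /mnt/pbs-old
mount -o ro /dev/loop0p1 /mnt/pbs-old             # nested partition appears as loop0p1

# detach again once the data has been copied somewhere else
umount /mnt/pbs-old
losetup -d /dev/loop0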
 
