Unable to start any LXC containers after reboot

Not sure if it has anything to do with the rootdelay I added to the boot parameters because the pools are encrypted.
ok - that's probably the source of the problem....

* how do you unlock the datasets?
* do you use native zfs encryption or is the pool on top of dm-crypt?
 
I've tried increasing the rootdelay to 30 and added these to /etc/default/zfs:

Code:
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='5'
ZFS_INITRD_POST_MODPROBE_SLEEP='5'

Still no luck and no errors on reboot

Code:
# journalctl -b |grep -Ei 'zfs|container'
Sep 30 10:17:46 proxmox kernel: Command line: BOOT_IMAGE=/ROOT/pve-1@/boot/vmlinuz-5.4.60-1-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs rootdelay=30 quiet
Sep 30 10:17:46 proxmox kernel: Kernel command line: BOOT_IMAGE=/ROOT/pve-1@/boot/vmlinuz-5.4.60-1-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs rootdelay=30 quiet
Sep 30 10:17:46 proxmox kernel: ZFS: Loaded module v0.8.4-pve1, ZFS pool version 5000, ZFS filesystem version 5
Sep 30 10:17:50 proxmox systemd[1]: Starting Import ZFS pools by cache file...
Sep 30 10:17:53 proxmox systemd[1]: Started Import ZFS pools by cache file.
Sep 30 10:17:53 proxmox systemd[1]: Reached target ZFS pool import target.
Sep 30 10:17:53 proxmox systemd[1]: Starting Wait for ZFS Volume (zvol) links in /dev...
Sep 30 10:17:53 proxmox systemd[1]: Starting Mount ZFS filesystems...
Sep 30 10:17:53 proxmox systemd[1]: Started Wait for ZFS Volume (zvol) links in /dev.
Sep 30 10:17:53 proxmox systemd[1]: Reached target ZFS volumes are ready.
Sep 30 10:17:53 proxmox systemd[1]: Started Mount ZFS filesystems.
Sep 30 10:17:53 proxmox audit[3170]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default" pid=3170 comm="apparmor_parser"
Sep 30 10:17:53 proxmox audit[3170]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-cgns" pid=3170 comm="apparmor_parser"
Sep 30 10:17:53 proxmox audit[3170]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-mounting" pid=3170 comm="apparmor_parser"
Sep 30 10:17:53 proxmox audit[3170]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-nesting" pid=3170 comm="apparmor_parser"
Sep 30 10:17:53 proxmox kernel: audit: type=1400 audit(1601432273.736:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default" pid=3170 comm="apparmor_parser"
Sep 30 10:17:53 proxmox kernel: audit: type=1400 audit(1601432273.736:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-cgns" pid=3170 comm="apparmor_parser"
Sep 30 10:17:53 proxmox kernel: audit: type=1400 audit(1601432273.736:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-mounting" pid=3170 comm="apparmor_parser"
Sep 30 10:17:53 proxmox kernel: audit: type=1400 audit(1601432273.736:11): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-nesting" pid=3170 comm="apparmor_parser"
Sep 30 10:17:53 proxmox systemd[1]: Started ZFS Event Daemon (zed).
Sep 30 10:17:53 proxmox systemd[1]: Starting ZFS file system shares...
Sep 30 10:17:53 proxmox systemd[1]: Started ZFS file system shares.
Sep 30 10:17:53 proxmox systemd[1]: Reached target ZFS startup target.
Sep 30 10:17:54 proxmox zed[3212]: ZFS Event Daemon 0.8.4-pve1 (PID 3212)
Sep 30 10:17:54 proxmox systemd[1]: Started LXC Container Monitoring Daemon.
Sep 30 10:17:54 proxmox systemd[1]: Starting LXC Container Initialization and Autoboot Code...
Sep 30 10:17:54 proxmox audit[3431]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default" pid=3431 comm="apparmor_parser"
Sep 30 10:17:54 proxmox audit[3431]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-cgns" pid=3431 comm="apparmor_parser"
Sep 30 10:17:54 proxmox audit[3431]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-mounting" pid=3431 comm="apparmor_parser"
Sep 30 10:17:54 proxmox audit[3431]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-nesting" pid=3431 comm="apparmor_parser"
Sep 30 10:17:54 proxmox systemd[1]: Started LXC Container Initialization and Autoboot Code.
Sep 30 10:18:00 proxmox systemd[1]: Created slice PVE LXC Container Slice.
Sep 30 10:18:00 proxmox systemd[1]: Started PVE LXC Container: 101.
Sep 30 10:18:01 proxmox pve-guests[3974]: startup for container '101' failed
Sep 30 10:18:01 proxmox pvesh[3925]: Starting CT 101 failed: startup for container '101' failed
Sep 30 10:18:01 proxmox systemd[1]: Started PVE LXC Container: 102.
Sep 30 10:18:02 proxmox systemd[1]: pve-container@101.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 10:18:02 proxmox systemd[1]: pve-container@101.service: Failed with result 'exit-code'.
Sep 30 10:18:03 proxmox pve-guests[5420]: startup for container '102' failed
Sep 30 10:18:03 proxmox pvesh[3925]: Starting CT 102 failed: startup for container '102' failed
Sep 30 10:18:03 proxmox systemd[1]: Started PVE LXC Container: 103.
Sep 30 10:18:04 proxmox systemd[1]: pve-container@102.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 10:18:04 proxmox systemd[1]: pve-container@102.service: Failed with result 'exit-code'.
Sep 30 10:18:05 proxmox pve-guests[5528]: startup for container '103' failed
Sep 30 10:18:05 proxmox pvesh[3925]: Starting CT 103 failed: startup for container '103' failed
Sep 30 10:18:05 proxmox systemd[1]: Started PVE LXC Container: 104.
Sep 30 10:18:06 proxmox systemd[1]: pve-container@103.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 10:18:06 proxmox systemd[1]: pve-container@103.service: Failed with result 'exit-code'.
Sep 30 10:18:07 proxmox pve-guests[5629]: startup for container '104' failed
Sep 30 10:18:07 proxmox pvesh[3925]: Starting CT 104 failed: startup for container '104' failed
Sep 30 10:18:07 proxmox systemd[1]: Started PVE LXC Container: 105.
Sep 30 10:18:08 proxmox systemd[1]: pve-container@104.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 10:18:08 proxmox systemd[1]: pve-container@104.service: Failed with result 'exit-code'.
Sep 30 10:18:09 proxmox pve-guests[5920]: startup for container '105' failed
Sep 30 10:18:09 proxmox pvesh[3925]: Starting CT 105 failed: startup for container '105' failed
Sep 30 10:18:09 proxmox systemd[1]: Started PVE LXC Container: 106.
Sep 30 10:18:10 proxmox systemd[1]: pve-container@105.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 10:18:10 proxmox systemd[1]: pve-container@105.service: Failed with result 'exit-code'.
Sep 30 10:18:11 proxmox pve-guests[5995]: startup for container '106' failed
Sep 30 10:18:11 proxmox pvesh[3925]: Starting CT 106 failed: startup for container '106' failed
Sep 30 10:18:12 proxmox systemd[1]: pve-container@106.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 10:18:12 proxmox systemd[1]: pve-container@106.service: Failed with result 'exit-code'.
 
I believe it's native ZFS encryption with the key on an external drive.
The zfs get output of the container's root volume did not have any encryption properties set...

Could you please post:
* the container's config ('/etc/pve/lxc/VMID.conf')
* your storage.cfg ('/etc/pve/storage.cfg')
* the properties of all datasets/mountpoints of the container (`zfs get all <POOLNAME>/<SUBVOL>`)
* a description of how you unlock the encrypted datasets (systemd service, the zfs mount generator, ...)

Then I can try to reproduce the issue here locally.
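For example, something like the following would collect all of that (using a hypothetical container 101 with its root disk on a pool called SSD - substitute your own VMID and dataset names):

Code:
# cat /etc/pve/lxc/101.conf
# cat /etc/pve/storage.cfg
# zfs get all SSD/subvol-101-disk-0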
 
I don't encrypt the root volume, only the HDD pool, which is mounted into the CTs via mount points. I don't remember how it's unlocked; it's not systemd, as I checked the directory. What are the options recommended on this forum?
 
I've just looked again at the HDD pool, and I notice that zfs get all lists encryption as off there as well. However, I am very certain I generated a keyfile when creating the pool and took some steps to mount it on boot.
 
Regarding the encryption - a few places where the encryption could happen:
* if it's ZFS native encryption - check all datasets on the path to the container's mountpoints (although the encryption settings should also be listed on the container dataset itself)
* before ZFS 0.8 (i.e. before PVE 6.0) there was no native ZFS encryption - the most common approach for disk encryption with ZFS on Linux was to put dm-crypt on top of the disks and then create the zpool on top of the dm-crypt devices

-> check and paste the output of:
* zpool status
* lsblk
* dmsetup ls --tree

Also, please provide the container config, the ZFS properties of the container's datasets, and your storage.cfg - otherwise it's not really possible to help.
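As a minimal sketch of the native-encryption check (the pool name HDD is just an example here):

Code:
# zfs get -r encryption,keyformat,keylocation,encryptionroot,keystatus HDD
# zpool status
# lsblk
# dmsetup ls --tree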
 
It's definitely native ZFS encryption. It was 0.8.2 when I set it up. I've checked the datasets of the main HDD zpool and you are right, the encryption is set at the dataset level and not on the pool itself.

Code:
# zfs get all HDD/backups
...
HDD/backups  encryption            aes-256-ccm                   -
HDD/backups  keylocation           file:///<key path>  local
HDD/backups  keyformat             raw                           -
HDD/backups  pbkdf2iters           0                             default
HDD/backups  encryptionroot        HDD/backups                   -
...

Code:
# zpool status
  pool: HDD
 state: ONLINE
  scan: scrub repaired 0B in 0 days 08:11:41 with 0 errors on Sun Sep 13 08:35:42 2020
config:

        NAME                        STATE     READ WRITE CKSUM
        HDD                         ONLINE       0     0     0
          raidz2-0                  ONLINE       0     0     0
            <HDD>  ONLINE       0     0     0
            <HDD>  ONLINE       0     0     0
            <HDD>  ONLINE       0     0     0
            <HDD>  ONLINE       0     0     0
            <HDD>  ONLINE       0     0     0
            <HDD>  ONLINE       0     0     0

errors: No known data errors

  pool: SSD
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:00:21 with 0 errors on Sun Sep 13 00:24:26 2020
config:

        NAME                        STATE     READ WRITE CKSUM
        SSD                         ONLINE       0     0     0
          raidz2-0                  ONLINE       0     0     0
            <SSD>  ONLINE       0     0     0
            <SSD>  ONLINE       0     0     0
            <SSD>  ONLINE       0     0     0
            <SSD>  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:00:16 with 0 errors on Sun Sep 13 00:24:22 2020
config:

        NAME                                                   STATE     READ WRITE CKSUM
        rpool                                                  ONLINE       0     0     0
          mirror-0                                             ONLINE       0     0     0
            <SSD>  ONLINE       0     0     0
            <SSD>  ONLINE       0     0     0

errors: No known data errors

Code:
# lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda       8:0    0  12.8T  0 disk
├─sda1    8:1    0  12.8T  0 part
└─sda9    8:9    0     8M  0 part
sdb       8:16   0  12.8T  0 disk
├─sdb1    8:17   0  12.8T  0 part
└─sdb9    8:25   0     8M  0 part
sdc       8:32   0  12.8T  0 disk
├─sdc1    8:33   0  12.8T  0 part
└─sdc9    8:41   0     8M  0 part
sdd       8:48   0  12.8T  0 disk
├─sdd1    8:49   0  12.8T  0 part
└─sdd9    8:57   0     8M  0 part
sde       8:64   0 372.6G  0 disk
├─sde1    8:65   0 372.6G  0 part
└─sde9    8:73   0     8M  0 part
sdf       8:80   0 372.6G  0 disk
├─sdf1    8:81   0 372.6G  0 part
└─sdf9    8:89   0     8M  0 part
sdg       8:96   0 372.6G  0 disk
├─sdg1    8:97   0 372.6G  0 part
└─sdg9    8:105  0     8M  0 part
sdh       8:112  0 372.6G  0 disk
├─sdh1    8:113  0 372.6G  0 part
└─sdh9    8:121  0     8M  0 part
sdi       8:128  0 111.8G  0 disk
├─sdi1    8:129  0  1007K  0 part
├─sdi2    8:130  0   512M  0 part
└─sdi3    8:131  0 111.3G  0 part
sdj       8:144  0 111.8G  0 disk
├─sdj1    8:145  0  1007K  0 part
├─sdj2    8:146  0   512M  0 part
└─sdj3    8:147  0 111.3G  0 part
sdk       8:160  0  12.8T  0 disk
├─sdk1    8:161  0  12.8T  0 part
└─sdk9    8:169  0     8M  0 part
sdl       8:176  0  12.8T  0 disk
├─sdl1    8:177  0  12.8T  0 part
└─sdl9    8:185  0     8M  0 part
nvme1n1 259:0    0 894.3G  0 disk
nvme0n1 259:1    0 894.3G  0 disk
nvme2n1 259:2    0 894.3G  0 disk
nvme3n1 259:3    0 894.3G  0 disk

Code:
# dmsetup ls --tree
No devices found

CT102 is one of the CTs that mount dataset /HDD/backups as /mnt/backups.

Code:
# cat /etc/pve/lxc/102.conf
arch: amd64
cores: 2
memory: 2048
mp0: /HDD/backups,mp=/mnt/backups
onboot: 1
ostype: debian
rootfs: SSD:subvol-102-disk-1,size=30G
swap: 2048
unprivileged: 1

Code:
# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl
        maxfiles 2
        shared 0

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

zfspool: HDD
        pool HDD
        content rootdir
        mountpoint /HDD
        nodes proxmox
        sparse 1

zfspool: SSD
        pool SSD
        content rootdir,images
        mountpoint /SSD
        nodes proxmox
 
* does the encryption key get loaded upon system boot? - if not, make sure it does - this is explained quite well (with steps) in `man zfs-mount-generator`
* does the pool HDD get imported upon boot? you need to have the pool imported so that the mount-generator units can unlock and mount HDD/backups - for this either set the cachefile property for HDD (`zpool set cachefile=/etc/zfs/zpool.cache HDD`; do the same for all pools in your system) - or remove the cachefile and enable `zfs-import-scan.service`

with all this in place the container should come up automatically on boot
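A rough sketch of the two variants, using the pool names from this thread (exact paths can differ between releases, so treat `man zfs-mount-generator` on your host as authoritative):

Code:
# either: pin all pools to the cachefile
zpool set cachefile=/etc/zfs/zpool.cache HDD
zpool set cachefile=/etc/zfs/zpool.cache SSD
zpool set cachefile=/etc/zfs/zpool.cache rpool

# or: drop the cachefile and import by scanning the devices
rm /etc/zfs/zpool.cache
systemctl enable zfs-import-scan.service

# for the mount generator: keep a per-pool list cache and enable the ZED helper
# (the zedlet path below is the usual Debian location - verify it on your system)
mkdir -p /etc/zfs/zfs-list.cache
touch /etc/zfs/zfs-list.cache/HDD
ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d/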
 
I don't really understand the questions. After entering the shell post-boot I can see all the data in /HDD/<dataset>, so it's definitely getting decrypted and mounted somewhere along the boot process.

I moved /etc/zfs/zpool.cache aside and enabled zfs-import-scan.service, but now I get some new log entries - the same ones I got before when I tried to mount the datasets manually after boot.

Code:
Oct 03 01:14:58 proxmox systemd[1]: Condition check resulted in Import ZFS pools by cache file being skipped.
Oct 03 01:14:58 proxmox systemd[1]: Starting Import ZFS pools by device scanning...
Oct 03 01:15:02 proxmox systemd[1]: Started Import ZFS pools by device scanning.
Oct 03 01:15:02 proxmox systemd[1]: Reached target ZFS pool import target.
Oct 03 01:15:02 proxmox systemd[1]: Starting Wait for ZFS Volume (zvol) links in /dev...
Oct 03 01:15:02 proxmox systemd[1]: Starting Mount ZFS filesystems...
Oct 03 01:15:02 proxmox zfs[3676]: cannot mount '/SSD': directory is not empty
Oct 03 01:15:02 proxmox systemd[1]: Started Wait for ZFS Volume (zvol) links in /dev.
Oct 03 01:15:02 proxmox systemd[1]: Reached target ZFS volumes are ready.
Oct 03 01:15:02 proxmox kernel: zfs[3691]: segfault at 0 ip 00007f056b730694 sp 00007f0569e59420 error 4 in libc-2.28.so[7f056b6d6000+148000]
Oct 03 01:15:02 proxmox kernel: zfs[3692]: segfault at 0 ip 00007f056b746554 sp 00007f0569658478 error 4 in libc-2.28.so[7f056b6d6000+148000]
Oct 03 01:15:02 proxmox systemd[1]: zfs-mount.service: Main process exited, code=killed, status=11/SEGV
Oct 03 01:15:02 proxmox systemd[1]: zfs-mount.service: Failed with result 'signal'.
Oct 03 01:15:02 proxmox systemd[1]: Failed to start Mount ZFS filesystems.

I think there are two problems here:
1. ZFS is mounting HDD before it gets decrypted, hence the empty /mnt/<dataset> directories in the subvol disks
2. ZFS is mounting SSD after HDD, so /mnt/<dataset> already has directories created?

With CT101, which has no mount points to /HDD/<dataset>, the entire subvol disk is empty:
Code:
# ls -l /SSD/subvol-101-disk-0/
total 0
With CT102, where /HDD/backups is mounted as mp0 at /mnt/backups, the subvol disk has an empty /mnt/backups directory:
Code:
# ls -l /SSD/subvol-102-disk-1/
total 1
drwxr-xr-x 5 100000 100000 5 Oct  3 01:15 mnt
# ls -l /SSD/subvol-102-disk-1/mnt/
total 1
drwxr-xr-x 2 100000 100000 2 Oct  3 01:15 backups
 
Seems there is still a problem with /SSD:
Oct 03 01:15:02 proxmox zfs[3676]: cannot mount '/SSD': directory is not empty

clean the directory and try rebooting - it needs to import and mount successfully first, before we can take a look at further steps
 
What do you mean by "clean the directory"? Right now all the subvols are mounted, as I have already used rm -rf on the empty directories and mounted them afterwards. Do you mean unmount them and reboot?
 
From the logs:
Oct 03 01:15:02 proxmox systemd[1]: Started Import ZFS pools by device scanning.
Oct 03 01:15:02 proxmox systemd[1]: Reached target ZFS pool import target.
Oct 03 01:15:02 proxmox systemd[1]: Starting Wait for ZFS Volume (zvol) links in /dev...
Oct 03 01:15:02 proxmox systemd[1]: Starting Mount ZFS filesystems...
Oct 03 01:15:02 proxmox zfs[3676]: cannot mount '/SSD': directory is not empty
it looks like /SSD contains some leftover directories - hence the pool gets imported, but no dataset gets mounted...

* Try exporting the pool and checking that /SSD is indeed empty
* then rebooting

if this does not help - please check what files/directories are below /SSD (which prevent the mount of the datasets)
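As a concrete sketch of the export-and-check step (SSD is the pool name from this thread; rmdir only removes empty directories, so it is safe to run if leftovers show up in the listing):

Code:
# zpool export SSD
# ls -la /SSD
# rmdir /SSD/*
# reboot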
 
I believe "leftover directories" are the mountpoints that are mounted under /mnt which are datasets from /HDD. On reboot I have to rm -rf /mnt/<dataset> before being able to mount /SSD.
 
I believe "leftover directories" are the mountpoints that are mounted under /mnt which are datasets from /HDD. On reboot I have to rm -rf /mnt/<dataset> before being able to mount /SSD.
Under the mountpoint of the container's dataset?
After you reboot (and have the 'cannot mount '/SSD': directory is not empty' message in the log), please post the output of `find /SSD`.
 
As you can see from above, CT102 mounts the dataset /HDD/backups at /mnt/backups. When I reboot and see the "directory is not empty" message, it's because an empty directory /SSD/subvol-102-disk-1/mnt/backups exists. After I rm -rf this directory, /SSD can mount properly.
Maybe it is not a boot problem but a shutdown problem. Is it possible that /mnt/backups is not being removed properly when rebooting?
Code:
# cat /etc/pve/lxc/102.conf
arch: amd64
cores: 2
memory: 2048
mp0: /HDD/backups,mp=/mnt/backups
onboot: 1
ostype: debian
rootfs: SSD:subvol-102-disk-1,size=30G
swap: 2048
unprivileged: 1
 
Does the system come up clean - meaning:
* do both pools get imported and mounted (no error messages in the logs)?

What happens if you:
* disable onboot for container 102 (and the others with the bind mount)?
* if this works - re-enable onboot, but disable the bind mount (mp0)?
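For reference, both can be toggled from the CLI with `pct set` (VMID 102 as the example; note down the current mp0 value first so you can add it back later):

Code:
# pct set 102 --onboot 0
# pct set 102 --delete mp0

Re-adding the mount point afterwards would then look like `pct set 102 --mp0 /HDD/backups,mp=/mnt/backups`, matching the config posted above.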
 
I finally got a chance to restart the host. I removed every single onboot=true flag. After the reboot, no VMs/CTs are started.

However, /SSD is already mounted with all the subvols, but they are completely empty (not even the empty /mnt directories are created, as happens with onboot=true).

I cannot do zfs mount SSD (directory is not empty). I can, however, mount every subvol by hand (zfs mount SSD/subvol-102-disk-1, etc.).
 
My guess is that this is a leftover from a previous boot (the subvol directories got created - thus the subvols cannot be mounted).

after reboot (without any containers starting):
* check whether the pool SSD is imported (does it show up in the output of `zpool status`?)
* if yes - is /SSD mounted (`zfs get all SSD |grep mount`)
** if yes - are any of the subvols mounted (`zfs get all SSD/subvol-102-disk-1 |grep mount`)
** if yes - is there any content inside?
* if no - are there any (empty) directories in /SSD?
** if yes - `rmdir` them and reboot
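The checks above map roughly onto these commands (names as used earlier in the thread):

Code:
# zpool status SSD
# zfs get mounted,mountpoint SSD
# zfs get mounted,mountpoint SSD/subvol-102-disk-1
# ls -la /SSD/subvol-102-disk-1
# find /SSD -maxdepth 2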

I hope this helps!
 
