Brand new install of Proxmox on an R720 PowerEdge, everything up to date.
2 x SSD as mirror:
- Proxmox
- TrueNAS VM
4 x SSD as RAIDZ2:
- containers
- VMs
I passed the 4 SSDs through to the TrueNAS VM as individual disks (I can't pass through the HBA/LSI controller itself, because I only have the one controller), then created a RAIDZ2 pool and some datasets in TrueNAS.
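For context, the disk passthrough was done roughly along these lines (the VM ID 100 and the disk serials below are placeholders, not my actual values):
Code:
# attach each of the 4 SSDs to the TrueNAS VM by its stable /dev/disk/by-id path
qm set 100 -scsi1 /dev/disk/by-id/ata-SSD_SERIAL_1
qm set 100 -scsi2 /dev/disk/by-id/ata-SSD_SERIAL_2
qm set 100 -scsi3 /dev/disk/by-id/ata-SSD_SERIAL_3
qm set 100 -scsi4 /dev/disk/by-id/ata-SSD_SERIAL_4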
Shared the datasets via SMB and mounted them in Proxmox (via Storage -> Add -> SMB/CIFS). Installed Ubuntu containers on the datasets, which boot up fine. However, if I shut the containers down and try to start them again, I get the following:
Code:
Aug 29 13:20:40 pve pvedaemon[25712]: starting CT 102: UPID:pve:00006470:0002106A:64ED6408:vzstart:102:root@pam:
Aug 29 13:20:40 pve pvedaemon[2472]: <root@pam> starting task UPID:pve:00006470:0002106A:64ED6408:vzstart:102:root@pam:
Aug 29 13:20:41 pve systemd[1]: Started pve-container@102.service - PVE LXC Container: 102.
Aug 29 13:20:41 pve kernel: loop1: detected capacity change from 0 to 25165824
Aug 29 13:20:41 pve kernel: EXT4-fs warning (device loop1): ext4_multi_mount_protect:326: MMP interval 42 higher than expected, please wait.
Aug 29 13:21:26 pve kernel: I/O error, dev loop1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 2
Aug 29 13:21:26 pve kernel: Buffer I/O error on dev loop1, logical block 0, lost sync page write
Aug 29 13:21:26 pve kernel: EXT4-fs (loop1): I/O error while writing superblock
Aug 29 13:21:26 pve kernel: EXT4-fs (loop1): mount failed
Aug 29 13:21:26 pve pvedaemon[25712]: startup for container '102' failed
Aug 29 13:21:26 pve pvestatd[2439]: unable to get PID for CT 102 (not running?)
Aug 29 13:21:26 pve pvedaemon[2472]: <root@pam> end task UPID:pve:00006470:0002106A:64ED6408:vzstart:102:root@pam: startup for container '102' failed
Aug 29 13:21:26 pve pvestatd[2439]: status update time (45.289 seconds)
Aug 29 13:21:27 pve pmxcfs[2302]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pve/local: -1
Aug 29 13:21:27 pve pmxcfs[2302]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pve/local-zfs: -1
Aug 29 13:21:27 pve pmxcfs[2302]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pve/media: -1
Aug 29 13:21:27 pve pmxcfs[2302]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pve/sucks: -1
Aug 29 13:21:27 pve systemd[1]: pve-container@102.service: Main process exited, code=exited, status=1/FAILURE
Aug 29 13:21:27 pve systemd[1]: pve-container@102.service: Failed with result 'exit-code'.
Aug 29 13:21:27 pve systemd[1]: pve-container@102.service: Consumed 1.463s CPU time.
Aug 29 13:21:31 pve kernel: CIFS: VFS: No writable handle in writepages rc=-9
If I run
Code:
pct fsck 102
they can boot up correctly. However, upon the next shutdown the problem returns and the above error appears again on startup. This is limited to containers; VMs start and shut down with no issues.
As a test I shared the same datasets via NFS and installed containers on them, and the problem does not appear (they start and shut down fine). The issue only appears when the datasets are shared via SMB.
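For reference, the two storage entries on the Proxmox side are roughly equivalent to these CLI calls (storage names, server address, share/export paths and credentials are placeholders):
Code:
# SMB/CIFS storage (the problematic one)
pvesm add cifs truenas-smb --server 192.168.1.10 --share containers --username pveuser --password 'secret' --content rootdir,images
# NFS storage (the one that works)
pvesm add nfs truenas-nfs --server 192.168.1.10 --export /mnt/tank/containers --content rootdir,images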
I would stick with NFS as a workaround; however, I'd like to connect my Windows 10 Home PC to the shares (which requires SMB). I have spent many days on this, so I'm reaching out for help. Thanks!