I've been having on-and-off issues with this, and there was one common denominator: I added additional RAM around December.
Proxmox is running on an Asus B450-F motherboard with non-ECC RAM, 4 x 16GB sticks.
Memtest errored within an hour when testing all four sticks together, but 24 hours of testing each...
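For the next round of testing, I'll probably stress the RAM from within Linux as well as from the MemTest86 boot disk. A minimal sketch using the memtester package (the size and loop count are placeholder values, not what I actually ran):

# Allocate a chunk of RAM and hammer it with test patterns (run as root).
apt install memtester
memtester 4G 3    # lock 4 GiB and run 3 passes of the test patterns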
I've been running my current Proxmox server for about six months now and it's been fine, but I've recently run into a couple of crashes.
I'm running several containers and a few VMs, including a Windows 10 VM. The crash only appears to happen when I log into Remote Desktop on the Windows VM, the...
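A minimal sketch of how I plan to pull logs after the next crash; the flags are standard journalctl, nothing Proxmox-specific, and the time window is just an example:

# Errors from the previous boot, i.e. the one that crashed.
journalctl -b -1 -p err
# Narrow to the minutes around the Remote Desktop login (example window).
journalctl -b -1 --since "12:35" --until "12:45"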
Thanks. So originally there were only two folders under those volumes when not mounted. Are they just 'rubbish' folders that I need to delete, after which it should all be fine once I remove the overlay command?
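In case it's useful to anyone else hitting this, here's the sequence I intend to run, assuming the datasets mount under /vm as the names suggest (a sketch, so verify the paths before the rm):

# With the datasets unmounted, anything left in the mountpoint
# directories is plain directory content, not ZFS data.
zfs unmount -a
ls -la /vm                       # inspect the leftover folders first
rm -rf /vm/<leftover-folder>     # only after confirming they're rubbish
zfs set overlay=off vm           # drop the overlay workaround
zfs mount -a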
Thank you for the help. I followed the steps from your post there, but it didn't work; the zpools still won't mount.
I get the following from journalctl -b:
Jul 12 12:40:28 pve systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 12:40:28 pve systemd[1]...
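To get past the generic status=1/FAILURE, running the mount by hand prints the real reason (e.g. "directory is not empty"). A minimal sketch, assuming the pool is named vm as in the output below:

# Mount everything manually so the actual error reaches the terminal.
zfs mount -a
# Confirm which datasets mounted and where.
zfs get -r mounted,mountpoint vm
# Then re-check the service once the underlying error is fixed.
systemctl status zfs-mount.service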
zfs get all 100:

NAME                  PROPERTY  VALUE                  SOURCE
vm/subvol-100-disk-0  type      filesystem             -
vm/subvol-100-disk-0  creation  Fri Jul 10 14:13 2020  -
vm/subvol-100-disk-0  used      363M...
Just FYI, it's all the containers except 104 that aren't working, so I've done a zfs get all for 100 and 104.
zfs get all 104:

NAME                  PROPERTY  VALUE       SOURCE
vm/subvol-104-disk-0  type      filesystem  -
vm/subvol-104-disk-0...
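Since 104 works and the others don't, diffing the properties of a broken and a working subvolume seemed like the quickest way to spot what changed (sketch; -H strips headers so the diff stays readable):

# Dump every property for the broken and working subvolumes, then compare.
zfs get -H -o property,value all vm/subvol-100-disk-0 > /tmp/props-100
zfs get -H -o property,value all vm/subvol-104-disk-0 > /tmp/props-104
diff /tmp/props-100 /tmp/props-104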
Hello,
I appear to have an issue where permissions on a ZFS volume change on reboot. The volume is an SSD running ZFS. All the container subvolumes below except 104 were created this morning; the permissions somehow changed from 100000 to root and 700. After this, the containers will no longer start. These...
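For context on the 100000: unprivileged containers map the container's root user to UID 100000 on the host, so each subvolume root should normally be owned by 100000:100000 with mode 755. A minimal repair sketch, assuming the datasets mount under /vm (and that the containers really are unprivileged):

# Check ownership and mode numerically so the UID shift is visible.
ls -ldn /vm/subvol-100-disk-0
# Restore the unprivileged mapping and a sane mode, then retry the container.
chown 100000:100000 /vm/subvol-100-disk-0
chmod 755 /vm/subvol-100-disk-0
pct start 100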