This is still a bug and not fixed. I have an entire node offline, since it cannot start any LXC containers. Can someone look into this?
It seems to be because I use ZFS and it's not mounted properly, I think?
I tried a few things and the results are below. Maybe the ZFS setup is bugged?
root@prox2:~# pct mount 102
mounting container failed
cannot open directory //rpool/data/subvol-102-disk-1: No such file or directory
root@prox2:~# pct mount 102^C
root@prox2:~# ^C
root@prox2:~# zfs list
NAME...
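For anyone else hitting this: the "No such file or directory" usually means the dataset exists but never got mounted, or the pool was imported without mounting its children. A few things worth checking, using the dataset name from the error above (adjust if yours differs):

zfs get mounted,mountpoint rpool/data/subvol-102-disk-1   (is it actually mounted, and where?)
zfs mount rpool/data/subvol-102-disk-1   (try mounting just that dataset)
zfs mount -a   (mount every dataset the pool knows about)

If the mount fails because the target directory already contains files, ZFS refuses to mount over a non-empty directory by default, so that would need to be emptied first.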
I get the error message below when trying to back up an LXC container. It's a tiny one. I have 2 other LXC containers that back up fine, and 3 other KVM VMs that back up fine. Some guidance is appreciated. The "permission denied" error makes no sense, as it's writing the log files to the same storage fine.
Task viewer...
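One way to narrow this down is to run the backup by hand and watch which step actually dies. A rough sketch, assuming container 102 and the stock "local" storage (swap in your real storage ID and dump path):

vzdump 102 --storage local --mode snapshot   (same job the scheduler would run, but with output on the console)
ls -ld /var/lib/vz/dump   (default dump directory for "local"; check its owner and permissions)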
Playing around, I was able to wipe and reinstall ZFS RAID1 fine on the SSDs, but no matter what I did to the 4x4TB drives, be it zpool labelclear etc., or even formatting them on Windows... they won't work.
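ZFS writes its labels at both the start and the end of each disk, and the GPT backup table at the end of the disk survives a quick format as well, so a fuller wipe would be something along these lines (sdX is a placeholder, triple-check the device name first):

zpool labelclear -f /dev/sdX   (clear the ZFS labels)
wipefs -a /dev/sdX   (remove filesystem and RAID signatures)
sgdisk --zap-all /dev/sdX   (destroy the GPT/MBR structures, including the backup table at the end)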
I was able to create a RAID10 ZFS pool manually once I booted off the RAID1 SSDs.
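Something along these lines, with placeholder pool and device names (using /dev/disk/by-id paths is safer than sdX names, since they survive device reordering):

zpool create -f -o ashift=12 tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

Two mirror vdevs striped together is the ZFS equivalent of RAID10; ashift=12 keeps the pool aligned for 4K-sector drives.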
I am...
I'm trying a ZFS RAID1 and it's working. I put it on two SSDs and am going to use the ZFS RAID10 as a datastore only.
As I've seen on this forum, it's something to do with RAID10 not wanting to install.
This server ran 4.4 fine. I killed it, and 10 minutes later tried to install 5.1 and hit these errors.
I'll try the 4.4 install tomorrow to verify it still works.
I set up a ZFS RAID-Z2 (raid6) on 4x1TB drives on Proxmox 4.x; 2 of those were later replaced with 2TB drives. I have since wiped this server and am now trying to install with 4x2TB drives (2 of them from the previous install).
I am having issues: the installer errors out at the end with "unable to unmount zfs". I talked on the IRC...
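One thing worth trying before the reinstall, since 2 of the drives carry labels from the old 4.x pool: boot any live environment with ZFS support and clear the leftovers first. A sketch, with the old pool name as a placeholder:

zpool import   (with no arguments, just lists any importable pools found on the disks)
zpool import -f oldpool
zpool destroy oldpool

Stale labels on reused disks can confuse the installer's ZFS setup.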