I've had some issues with my Proxmox 5.1-41 install.
I had issues with ZFS where datasets weren't mounting and the web GUI wasn't working, which turned out to be a badly configured /etc/hosts file.
Once this was fixed, things worked; however, now I can't get containers working.
During my fiddling to fix the issue, I think I must have stuffed up something with ZFS and how/where it mounts.
Now when I try to create a new container, the final 'Confirm' screen shows the following error:
Code:
mounting container failed
TASK ERROR: cannot open directory //rpool: No such file or directory
Here is the output of zfs list:
Code:
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                         83.3G  2.66T    96K  /
rpool/ROOT                    4.33G  2.66T    96K  /ROOT
rpool/ROOT/pve-1              4.33G  2.66T  4.33G  /
rpool/data                    70.5G  2.66T    96K  /data
rpool/data/subvol-102-disk-1  1.70G  48.3G  1.70G  /data/subvol-102-disk-1
rpool/data/vm-100-disk-1      1.14G  2.66T  1.14G  -
rpool/data/vm-101-disk-1      2.21G  2.66T  2.21G  -
rpool/data/vm-101-disk-2      65.4G  2.66T  65.4G  -
rpool/swap                    8.50G  2.67T    56K  -
pvesm status gives:
Code:
Name       Type     Status  Total       Used      Available   %
local      dir      active  2857919744  4542720   2853377024  0.16%
local-zfs  zfspool  active  2927294988  73917920  2853377068  2.53%
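In case it helps diagnose, these are the standard commands I know of for checking how the local-zfs storage is defined and where the mountpoint properties are actually coming from:

```shell
# Show the storage definitions, including which pool/dataset local-zfs points at
cat /etc/pve/storage.cfg

# Show the mountpoint property and whether it is set locally or inherited
zfs get -o name,value,source mountpoint rpool rpool/data
```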
If I manually edit my container's /etc/pve/lxc/102.conf file and change the rootfs to match the ZFS MOUNTPOINT, it works fine. For example, the following /etc/pve/lxc/102.conf now works:
Code:
arch: amd64
cores: 2
cpulimit: 2
hostname: plex
memory: 8192
nameserver: 8.8.8.8 1.1.1.1
net0: name=eth0,bridge=vmbr0,hwaddr=36:AC:BC:B6:0B:44,ip=dhcp,type=veth
onboot: 1
ostype: archlinux
rootfs: /data/subvol-102-disk-1,size=50G
searchdomain: seb
startup: order=3
swap: 4096
lxc.hook.autodev: /var/lib/lxc/102/tuntap
lxc.cgroup.devices.allow: c 10:200 rwm
But prior to this issue, rootfs was listed as:
Code:
rootfs: local-zfs:subvol-102-disk-1,size=50G
I have a feeling this issue is similar to what is outlined by Greg here: https://forum.proxmox.com/threads/cannot-restart-container.35869/
Is there some way I can reconfigure Proxmox to create containers with the current mountpoints?
Or otherwise get things back to how they should be?
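My best guess at a fix (please correct me if this is wrong) is that rpool's mountpoint got changed from /rpool to /, and the child datasets are inheriting that, which would explain the "//rpool" in the error. If so, I assume something like the following would put things back, since rpool/data would then inherit /rpool/data again:

```shell
# Assumption: rpool was originally mounted at /rpool (the Proxmox default
# for a ZFS root install) and the children inherit from it
zfs set mountpoint=/rpool rpool

# Verify: rpool/ROOT and rpool/data should now inherit /rpool/ROOT and
# /rpool/data, while rpool/ROOT/pve-1 keeps its explicit mountpoint=/
zfs get -o name,value,source mountpoint -r rpool
```

Is that the right approach, or is there more to it?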
As it stands, I can't create new containers from the web GUI and would need to do so by hand, explicitly stating the rootfs.
VMs are working just fine, including the creation of new VMs. This issue is only with containers.
Any assistance greatly appreciated.
Seb