Thanks @fiona, the second error was apparently that if any features are set, the migration will fail. A failed migration also leaves the source LXC in a locked state. I was able to get this working by unsetting the nesting/keyctl features, migrating, then re-setting them on the new system.
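For anyone hitting the same thing, the workaround looked roughly like this (container ID 100 and the feature string are just examples; check `pct config <vmid>` for your actual values first):

```shell
# On the source node: note the current feature string, then clear it
pct config 100 | grep features      # e.g. "features: nesting=1,keyctl=1"
pct set 100 --delete features

# If a previous failed migration left the container locked:
pct unlock 100

# Migrate, then re-apply the features on the target node
pct migrate 100 target-node
pct set 100 --features nesting=1,keyctl=1
```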
one other note...
I could replicate this in 0.1.9. When I tried to click on a graph that didn't seem to load, I got:
Application panicked!
Reason: panicked at /usr/share/cargo/registry/proxmox-yew-comp-0.3.6/src/rrd_graph_new.rs:637:41: index out of bounds: the len is 0 but the index is 0
IPv6AcceptRA seems to be automatically set to false when setting an LXC container's IPv6 network settings to DHCP. Is there a setting I can adjust to change this to true?
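For context, the state I'm trying to end up with inside the guest looks like the following (assuming a systemd-networkd based container with eth0; this is just illustrating what I want, not a confirmed fix, since Proxmox regenerates the container's network config):

```ini
# /etc/systemd/network/eth0.network inside the container (illustrative)
[Match]
Name=eth0

[Network]
DHCP=ipv6
IPv6AcceptRA=true
```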
So I've selected xterm.js as my default console option in the cluster and configured both the host and the guest system to use it, which works. Yet when I select Console via the sidebar, it still opens noVNC.
Is there a setting I'm missing?
I ran the upgrade from the web console on four identical systems with the same hardware, yet only one has this issue. Very puzzling.
I am able to log in via SSH without issue; only the web UI login is missing the realm.
root@pve-2:~# pveversion -v
proxmox-ve: 5.1-42 (running kernel: 4.13.16-2-pve)...
After my last update, I can't log in to the web console because the realm is blank, and a realm is required to log in. Oddly, three other servers with the exact same configuration did not have this issue post-update.
I am using a similar config based on https://pve.proxmox.com/wiki/Web_Interface_Via_Nginx_Proxy (close to warinthestars'), and am also unable to get noVNC working through a reverse proxy.
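For reference, the relevant part of my proxy config is essentially the wiki's example; a minimal sketch (assuming PVE listens on localhost:8006, and including the WebSocket upgrade headers that noVNC depends on):

```nginx
location / {
    proxy_pass https://localhost:8006;
    proxy_http_version 1.1;
    # noVNC uses WebSockets, so the upgrade headers are required
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_buffering off;
    proxy_read_timeout 3600s;
}
```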
Hello,
I have a QNAP SAN with a specific folder full of ISOs. I configured this on my Proxmox server; the NFS mount shows up, and the ISOs show up on disk on the PVE server.
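For reference, the storage entry in /etc/pve/storage.cfg looks roughly like this (the storage ID, server address, and export path are placeholders for my actual values):

```
nfs: qnap-iso
        server 192.168.1.50
        export /isos
        path /mnt/pve/qnap-iso
        content iso
```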
I can also see the ISOs when I browse:
Yet when I attempt to create a new VM and set the ISO:
I am unable to...