I have a Proxmox VE 7.2-7 setup. I have set up NFS storage and added a new backup job. I found retention settings in two places: on the Storage itself and on the backup job I created. What is the relationship between the two? Thanks.
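For context, the storage-level setting I mean is the prune-backups line in /etc/pve/storage.cfg; a rough sketch with placeholder values (the storage name, export path, and server address below are made up):

```
# /etc/pve/storage.cfg -- placeholder values, not my real setup
nfs: backup-nfs
        export /srv/backups
        server 192.0.2.10
        content backup
        prune-backups keep-last=3,keep-weekly=2
```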
@moonman I'm surprised no one else seems to be complaining about this issue; no one is even talking about it. Are we the only two LXC users on the whole internet? In my opinion this is a major issue, and I expected everyone to jump up and shout about it.
Not sure, but hopefully, and most likely, within the next few versions, if not the very next one. I have filed a bug with both the systemd GitHub project and against the systemd package on the Arch Linux bug tracker.
I'm using Arch in my containers and solved the problem by downgrading systemd from the pacman cache:
yay -U /var/cache/pacman/pkg/systemd-libs-244-1-x86_64.pkg.tar.xz /var/cache/pacman/pkg/systemd-244-1-x86_64.pkg.tar.xz
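To keep pacman from immediately upgrading systemd again on the next sync, I'm also pinning the two packages in /etc/pacman.conf (assumption: you want to hold them until the fix lands upstream):

```
# /etc/pacman.conf -- hold the downgraded packages
IgnorePkg = systemd systemd-libs
```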
I use pvesh get /nodes/us000/openvz and pvesh get /nodes/us000/qemu to get OpenVZ and KVM VM status. It seems 0 is always returned for the KVM CPU usage, while reasonable values are returned for OpenVZ. Do I need to do anything to enable KVM CPU usage reporting for the pvesh command? Thanks.
The Proxmox version in question is 3.4.11. I want to set up some IPv6-only OpenVZ containers, where no IPv4 is available to the CTs. I found it's not possible to enter an IPv6 address as a name server in the Proxmox web UI. Is there any way I can use IPv6 addresses as a CT's name servers?
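For now I'm considering working around the web UI by editing the CT's OpenVZ config file directly; a sketch, where the CTID (101) and the resolver addresses are placeholders I picked for illustration:

```
# /etc/vz/conf/101.conf -- CTID and addresses are placeholders
NAMESERVER="2001:4860:4860::8888 2606:4700:4700::1111"
```

I haven't confirmed whether the UI preserves this on the next edit, so treat it as an experiment rather than a supported path.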
I'm playing with Hurricane Electric's IPv6 tunnel broker and am running Proxmox with one hardware node and multiple VMs. I managed to allocate IPv6 addresses from my /64 pool to the VMs. Here is how I did it:
On Hardware Node:
auto he-ipv6
iface he-ipv6 inet6 v4tunnel...
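The full stanza looks roughly like the sketch below. The addresses use documentation prefixes and are placeholders; substitute the client/server IPv4 and IPv6 values from your tunnelbroker.net tunnel details page:

```
# /etc/network/interfaces on the hardware node -- placeholder addresses
auto he-ipv6
iface he-ipv6 inet6 v4tunnel
        address 2001:db8:1::2
        netmask 64
        endpoint 203.0.113.1
        local 198.51.100.2
        ttl 255
        gateway 2001:db8:1::1
```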