Thanks Stoiko. While I have not yet gone through the pve8 documentation in detail, I see that both Rocky 8 and CentOS 7 were supported in pve7 and that pve9 will not support the legacy cgroups controller, but so far I am a bit lost with pve8. Can you point me to any specific/already known issues or areas I...
Hi Victor, thanks for your suggestions.
I know I can upgrade from 6 to 7 and then to 8, but in my tests the upgrade process from 6 to 7 has proven unreliable in a small percentage of the tests, though not small enough to discard the risk for the production environment. I am running a 8 server...
Hi
I am going to migrate from Proxmox 6 to Proxmox 8.1 by installing a new cluster and re-provisioning all my LXC servers and KVMs in a kind of blue/green upgrade. I am using Ansible to create the LXC containers and KVM guests, so provisioning is simple and fast (well, not so fast, but it is automatic), but I am not finding...
Just in case it helps to discard something, I am running Proxmox 6 with CentOS 7 and Rocky 8 containers without a problem. I also have the setting in GRUB (systemd.unified_cgroup_hierarchy=0) in order to keep compatibility when upgrading to pve7 (maybe some day I will consider it stable enough!)
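For anyone looking for the exact change, a minimal sketch of how that GRUB flag is usually set on a Debian-based system (the existing contents of GRUB_CMDLINE_LINUX_DEFAULT here are an assumption; keep whatever options your file already has):

```
# /etc/default/grub -- append the flag to the existing kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
```

After editing, run `update-grub` and reboot; `cat /proc/cmdline` should then show the flag.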
@jpancoast would you mind sharing the packer configuration for creating the container? There are some examples with isos, but not so many with containers
Hi, just writing this in case it helps anyone else.
Suddenly all the LXC containers on one node of a cluster failed to start; the only clues were the following:
root@proxmox-2:/var/log# pct start 129
setup_resource_limits: 2517 Unknown resource
lxc_spawn: 1813 Failed to setup resource limits...
Long story short: stopping (yes, stopping, not restarting) the services on one node fixed the problem on all the nodes. I do not know how or why, but
systemctl stop pve-cluster && systemctl stop corosync && systemctl stop pvedaemon && systemctl stop pveproxy && systemctl stop pvestatd
and everything worked...
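If you try this, the services can presumably be brought back afterwards in the reverse order (this start sequence is my assumption, not something verified on that cluster):

```
# bring the Proxmox cluster services back up, reverse of the stop order
systemctl start pvestatd pveproxy pvedaemon corosync pve-cluster
# sanity check
systemctl status pve-cluster --no-pager
```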
I have a 9 node production cluster based on pve6 (pve-manager/6.4-13/9f411e79 (running kernel: 5.4.143-1-pve)). The oldest server in the cluster has been up & running for 563 days, while the newest is at 201 days so far. 2 days ago all servers in the cluster became grayed out, but I was able to access...
Hi
I am writing this here in case it can help someone else or, worst case scenario, a future me.
I have a 5 node cluster (pve6) with two corosync rings to avoid losing nodes or the whole cluster due to network issues, so it came as a total surprise that one of the nodes suddenly appeared as greyed...
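For context, a two-ring setup like the one described typically looks something like this in /etc/pve/corosync.conf (node names and addresses below are made up for illustration; each node gets its own entry with addresses on both networks):

```
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.0.1   # first corosync network
    ring1_addr: 10.1.0.1   # second, independent network
  }
  # ...one node { } block per cluster member, with its own addresses
}
```

With knet (corosync 3, as used by pve6) both links are monitored and traffic fails over between them, which is why losing a node despite two rings is surprising.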
Hi! I have the same situation; I see a lot of flapping on my corosync interfaces. What I have found is that running corosync over bonded interfaces, in my case double bonding (Linux HA bonding on top of two LACP bonds to different switches and racks), is actually causing this flapping; actually without...
The same happened here. I am doing some initial tests to migrate our environments, and I had to do exactly the same.
Regarding this
cat /etc/default/grub.d/proxmox-ve.cfg
This file does not exist on any of my servers; 4 clusters updated to pve 6.4-15, but the file is not there.
Should i create...
It seems I still need shared storage between the two nodes to be able to restore a backup stored on one server onto the other. I can manage that, thanks anyway.
Thanks Fiona, I have gone through it and it works perfectly. I was expecting this to happen automatically somehow, but it's quickly fixed. I'd suggest, though, adding some note or comment to the documentation about this for when adding a node to the cluster.
My problem now, however, is different: it seems I can not...
Hi
I have an existing Proxmox 6.4 node with containers and VMs which I have converted to a cluster in order to add a new node. This original node uses local-lvm thin for the guests (so basically I have the local and local-lvm storages). The new node has ZFS storage, one single raidz1, and I see on...
Hi there.
I remember reading in another thread in this forum that this was not a Proxmox issue but some kind of bug between lxcfs and cgroups2, or something like that, but I am now unable to find the thread. For me, as all my containers are monitored by custom agents inside the container for...
@mathx Thanks very much for the workaround. I am a little reluctant to do this on a production server, but to be honest, having to restart containers on a weekly basis is really a problem, so I will check this out.
Same problem here: all my containers are silently using more and more memory for cache until the oom-killer kicks in. This happens in all my containers, but on pve6. In my case, restarting systemd-journald did not help; however, reducing the journal log size increased the available memory by the same size I...
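A minimal sketch of how the journal size can be capped (the 100M value is just an example I picked; SystemMaxUse is the standard systemd-journald config key):

```
# /etc/systemd/journald.conf -- cap persistent journal size
#   SystemMaxUse=100M
# Then restart journald and shrink the existing logs down to the new cap:
systemctl restart systemd-journald
journalctl --vacuum-size=100M
```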