@waltar I wanted to avoid rebooting it, as when I do the upgrade in the production cluster I will not have the option to do so unless I migrate all the containers to other servers, but after rebooting, everything went back to normal.
Not really a good experience. Until now all the minor upgrades I...
This is a standalone server, no cluster, that I run as my home lab; all upgrades are done on this server first before going into the other environments.
I have just upgraded from PVE 8.1 to 8.3 and suddenly all the LXC containers appear like this. Virtual machines are OK and unaffected...
Thanks @t.lamprecht. I used this thread because I am using Tuxis for backups on their free tier. Apologies if this was not the place.
Yes, I know PVE6 went EOL; simply put, in our tests the migration process was not as easy or trouble-free as we expected, but we are on it. Thanks!
Hi! Quick question: is there a way to use namespaces if I am using PVE 6.x? Obviously in the PVE web UI there is no namespace field, but maybe there is some kind of trick through the datastore name, via the CLI, or something like that.
(and yes, I know I have to upgrade, and yes we are working on...
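For context, as far as I know namespaces only became selectable once PVE gained native support for them (around 7.2); on those versions the namespace can be set on the PBS storage definition itself. A sketch of such an /etc/pve/storage.cfg entry, with placeholder names (storage ID, datastore, server, user, and namespace are all hypothetical):
pbs: tuxis-backup
        datastore mydatastore
        server pbs.example.com
        username myuser@pbs
        namespace customer-a
        fingerprint <fingerprint of the PBS server>
        content backup
On a stock PVE 6.x there is no such option, so this is more a pointer for after the upgrade than a workaround.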
Hi
We are building a new production cluster with PVE 8.2; we have received the hardware and started building the cluster and doing some tests. The goal is to go live in October with this new cluster, so purchasing the subscription 3 months in advance sounds like wasting 3 months of the...
Thanks Stoiko. While I have not yet gone through the PVE 8 documentation in detail, I see that both Rocky 8 and CentOS 7 were supported in PVE 7 and that PVE 9 will not support the legacy cgroup controller, but I am a bit lost so far with PVE 8. Can you point me to any specific/already known issues or areas I...
Hi Victor, thanks for your suggestions.
I know I can upgrade from 6 to 7 and then 8, but in my tests the upgrade process from 6 to 7 has proven unreliable in a small percentage of the tests, but not so small as to discard the risk for the production environment. I am running an 8-server...
Hi
I am going to migrate from Proxmox 6 to Proxmox 8.1 by installing a new cluster and re-provisioning all my LXC containers and KVMs in a kind of blue/green upgrade. I am using Ansible to create the LXC containers and KVMs, so provisioning is simple and fast (well, not so fast, but it is automatic), but I am not finding...
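For reference, a minimal sketch of the kind of provisioning calls such automation ends up wrapping; the VMIDs, template, bridge, and storage names below are placeholders and not from the original post:
# create an unprivileged LXC container from a downloaded template
pct create 1101 local:vztmpl/rockylinux-8-default_20210929_amd64.tar.xz --hostname web01 --cores 2 --memory 2048 --rootfs local-lvm:8 --net0 name=eth0,bridge=vmbr0,ip=dhcp --unprivileged 1
# create an empty KVM guest, to be installed or cloned afterwards
qm create 1201 --name app01 --cores 2 --memory 4096 --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32 --ostype l26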
Just in case it helps to rule something out, I am running Proxmox 6 with CentOS 7 and Rocky 8 containers without a problem. I also have the setting in GRUB (systemd.unified_cgroup_hierarchy=0) in order to allow compatibility when upgrading to PVE 7 (maybe some day I will consider it stable enough!).
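For anyone wanting to replicate that, a minimal sketch of how the flag is usually added on a GRUB-booted PVE host; the existing options in GRUB_CMDLINE_LINUX_DEFAULT will differ per system, so append rather than replace:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
# then regenerate the GRUB config and reboot
update-grub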
@jpancoast would you mind sharing the Packer configuration for creating the container? There are some examples with ISOs, but not so many with containers.
Hi, just writing this in case it helps anyone else.
Suddenly all the LXC containers on one node of a cluster failed to start; the only clues were as follows:
root@proxmox-2:/var/log# pct start 129
setup_resource_limits: 2517 Unknown resource
lxc_spawn: 1813 Failed to setup resource limits...
Long story short, stopping (yes, stopping) the services on one node fixed the problem on all the nodes. I do not know how or why, but
systemctl stop pve-cluster && systemctl stop corosync && systemctl stop pvedaemon && systemctl stop pveproxy && systemctl stop pvestatd
and everything worked...
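The post does not spell it out, but presumably the services were started again on that node afterwards; a sketch, starting pve-cluster and corosync first since the other daemons depend on them:
systemctl start pve-cluster corosync
systemctl start pvedaemon pveproxy pvestatd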
I have a 9-node production cluster based on PVE 6 (pve-manager/6.4-13/9f411e79, running kernel 5.4.143-1-pve). The oldest server in the cluster has been up and running for 563 days while the newest is at 201 days so far. Two days ago all servers in the cluster became grayed out, but I was able to access...
Hi
I am writing this here in case it can help someone else or, worst case scenario, a future me.
I have a 5-node cluster (PVE 6) with two corosync rings to avoid losing nodes or the cluster due to network issues, so it came as a total surprise that one of the nodes suddenly appeared as greyed...
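For context, a two-ring setup like that is declared per node in /etc/pve/corosync.conf; a minimal sketch with hypothetical node names and addresses:
nodelist {
  node {
    name: proxmox-1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.0.1
    ring1_addr: 10.20.0.1
  }
  # one node { } block per cluster member, each with both ring addresses
}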
Hi! I have the same situation; I see a lot of flapping in my corosync interfaces. What I have found is that running corosync over bonded interfaces, in my case double bonding (Linux HA bonding over two LACP bonds to different switches and racks), is actually causing this flapping, actually without...
The same happened here. I am doing some initial tests to migrate our environments, and I had to do exactly the same.
Regarding this
cat /etc/default/grub.d/proxmox-ve.cfg
This file does not exist on any of my servers; 4 clusters updated to PVE 6.4-15, but the file is not there.
Should I create...
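One way to check whether any installed package is actually supposed to ship that file is a plain dpkg query (a sketch; it only inspects the local package database):
dpkg -S /etc/default/grub.d/proxmox-ve.cfg
# or search all installed packages' file lists
grep -r proxmox-ve.cfg /var/lib/dpkg/info/*.list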
It seems I still need shared storage between the two nodes to be able to restore a backup stored on one server onto another. I can manage that, thanks anyway.
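Without shared storage, one workaround (a sketch; hostnames, VMID, archive name, and target storage are placeholders) is to copy the vzdump archive to the target node and restore it from the local dump directory:
scp source-node:/var/lib/vz/dump/vzdump-lxc-201-2024_01_01-00_00_00.tar.zst /var/lib/vz/dump/
pct restore 201 /var/lib/vz/dump/vzdump-lxc-201-2024_01_01-00_00_00.tar.zst --storage local-lvm
For a VM backup, the equivalent would be qmrestore with the vzdump-qemu archive.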
Thanks Fiona, I have gone through it and it works perfectly. I was expecting this to happen automatically somehow, but it's quickly fixed. I would suggest, though, adding a note or comment to the documentation about adding a node to the cluster.
My problem now, however, is different; it seems I cannot...