I am reinstalling from 6.4 to 7.3 in production, so I can only do 2 nodes at a time. I am at the 4 remaining nodes of a 5 node cluster (removed 1 node already). Tomorrow I am planning to remove 2 of the 4 working nodes, leaving just two nodes, which will give me no quorum. What is the...
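For context on what I am hoping for: the only stopgap I know of if those two remaining nodes end up non-quorate (assuming each node has the default single vote) is to temporarily lower the expected votes while the reinstall is in progress, roughly like this:
# check current vote/quorum state
pvecm status
# tell corosync to expect only the 2 remaining votes so the pair stays quorate
pvecm expected 2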
As stated, the longer the VMs are running, the higher the swap usage on the host. I have swappiness set to 10, but with VMs running for 300-400 days the swap is getting full. I can see that when rebooting 5-10 of them the swap goes down 10-20%.
Is there a particular setting that manages the use of swap...
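For anyone checking the same thing, this is roughly how I verify and adjust swappiness on a node (the values are just examples):
# current swappiness and swap usage
cat /proc/sys/vm/swappiness
swapon --show
free -h
# lower swappiness at runtime (persist it via /etc/sysctl.conf or /etc/sysctl.d/)
sysctl vm.swappiness=10
# reclaim already-swapped pages back into RAM (only if there is enough free memory)
swapoff -a && swapon -a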
Similar message here, 5 node cluster with Dell servers on Intel CPUs, fully updated version 6 with community subscription. I had to reboot two servers for maintenance in the last two months and each had this message:
Failed deactivating swap /dev/pve/swap
A Stop job is running for /dev/dm-8 (8...
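In case it helps others, the workaround I have been trying before maintenance reboots (on the assumption that the hang comes from systemd waiting on swap deactivation) is to drop swap manually first:
# deactivate swap before the reboot so the shutdown does not wait on /dev/pve/swap
swapoff -a
systemctl reboot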
To answer your other questions, at least as far as 6.4-1 is concerned, I think that HA is a cluster setting and not a node setting. You set it up under Datacenter. Then you tell the datacenter which VMs are participating and which are not. Perhaps you set the state of the VM to something other than...
That has not been my experience, but I am on version 6.4-1. I have mixed VM participation on a cluster. The majority of VMs participate in HA, but a small number of them do not. At least on 6.4-1 it is not a problem and you can create and run VMs not participating in HA on that node.
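From the CLI it looks like this on my cluster (the VM ID is just a placeholder): only the VMs explicitly added to HA show up as resources, the rest are simply ignored by the HA stack.
# add a VM to HA management (Datacenter -> HA in the GUI does the same)
ha-manager add vm:100 --state started
# list HA-managed resources; VMs not listed here are left alone by HA
ha-manager status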
Thx
Are you planning to just run Proxmox itself on SSDs, which is not that big of a deal? If you are planning to run your VMs from SSDs, there are a few things to consider: what type of VMs, what is the storage type, etc.
As a general rule you can run Proxmox on regular SSDs; running VMs on SSDs in...
Thank you for the link. I wonder when this changed; the default on Proxmox is still kvm64 on 7.3-4, perhaps because it is the most compatible one. I started to use host as our CPUs in the cluster are identical, and I haven't had any issues. I am curious what's the best compromise between the...
I just live migrated one of the systems back and forth across 4 nodes several times. No issues with the same CPU model. The system is stable and operational after 6 live migrations with the host CPU type configured.
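For anyone wanting to repeat the test, this is roughly what I did (VM ID 100 and the node names are placeholders):
# switch the VM's CPU type from the kvm64 default to host
qm set 100 --cpu host
# live migrate it back and forth between nodes with identical CPUs
qm migrate 100 node2 --online
qm migrate 100 node1 --online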
I will test that today. LnxBil, I see that in the link VictorSTS provided and I was confused by this as well.
Proxmox documentation states:
"In short, if you care about live migration and moving VMs between nodes, leave the kvm64 default. If you don’t care about live migration or have a...
I have all nodes with exactly the same CPU model, core count, etc. In general, is there a significant increase in CPU performance with the host type vs. the default kvm64?
I have all VMs set to kvm64 but I was reading some Proxmox documentation and it says: "If you don’t care about live migration...
I am trying to get into the BIOS from the console so I can boot from another source like an ISO, and I need to do it that way for various reasons. I cannot send the "esc" key in time as the console is delayed and the system boots straight through.
Is this still the best way of doing it:
qm sendkey vmid esc
perhaps...
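If it is still the recommended way, something like this should work, firing the key right after starting the VM (VM ID 100 is a placeholder; repeating the sendkey is just to beat the delayed console):
qm start 100
# send ESC a few times during the firmware splash so one of them lands in time
for i in 1 2 3 4 5; do qm sendkey 100 esc; sleep 0.5; done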
I need it for testing, specifically to check what the Ceph client version is and whether it gets updated during upgrades from 6.1.x to 6.4.x, which is what my current cluster is on. It is regarding another post...
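On the node side I check the installed client bits like this (just standard version/package queries, nothing Proxmox-specific):
# version of the Ceph CLI / client libraries installed on the PVE node
ceph --version
ceph-fuse --version
dpkg -l | grep -E 'ceph|librbd|librados'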
Thank you for the prompt response.
In the 5 node cluster that was affected, none of the nodes have Ceph installed. They just connect to the existing/older Ceph cluster that consists of 4 nodes. They are using ceph-fuse client version 12.2.11+dfsg1-2.1+b1, and the new cluster on 16.2.7 says...
I have a 5 node PVE cluster (ver 6.4-1, running kernel 5.4.124-1-pve) with an existing 4 node Ceph storage cluster installed under Proxmox (Ceph ver 14.2.6) - working great for at least 700 days.
Today I added a secondary 4 node Ceph cluster running under Proxmox (Ceph ver 16.2.7). This cluster was working in the lab...
Empirically tested, working with 2 out of 4 nodes if they are the right ones :-)
Cannot change the number of OSDs as the servers have only 4 bays and need two for the system RAID - it is kind of a small cluster with limited RAM for non-demanding VMs. No issues with the OSDs; I have relatively large...
OK, that makes sense. I was just hopeful that I had missed something, based on aaron's post referencing 4 nodes with 2 nodes down.
Perhaps there is a way to rig it, just like we can do pvecm expected 1 to keep PVE working when it loses quorum - is there something similar that can be done for Ceph...
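As far as I can tell there is no single Ceph knob equivalent to pvecm expected: the monitors always need a majority of the monmap, and the only real option when too many are permanently gone seems to be the manual procedure of removing the dead mons from the monmap. What I do check in the meantime is which monitors are actually in quorum:
# show which monitors are currently in quorum
ceph quorum_status --format json-pretty
# overall health, including mon and OSD state
ceph -s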