I have a 4-node cluster composed of pve-01, pve-02, pve-03 and pve-04.
Now I need to upgrade pve-01, so I would like to decommission the old server and then add a more performant one.
So, my question is: can I remove pve-01 from the cluster and then add the new server also named pve-01? Could there be problems...
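For what it's worth, a hedged sketch of the remove-and-rejoin flow with the standard `pvecm` tooling; the node name comes from the post above, and the cleanup of the stale node directory is a commonly recommended step, not something Proxmox enforces:

```shell
# On any remaining cluster node, after pve-01 is powered off for good:
pvecm delnode pve-01

# Optionally remove the stale node entry from the cluster filesystem,
# so the old pve-01 no longer shows up in the GUI:
rm -rf /etc/pve/nodes/pve-01

# On the new server (freshly installed, hostname set to pve-01),
# join it to the cluster via any existing member, e.g. pve-02:
pvecm add pve-02
```

Reusing the old name generally works once the old node's state is gone, but stale SSH host keys on the other nodes may need clearing if the new pve-01 keeps the same IP.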
To settle the matter once and for all, I tried, on a second server whose RAM was also full, to create two new VMs, each with 4GB of RAM, and without ballooning.
The system lowered RAM consumption automatically and then raised it again (see graph at 11:11 and 11:21).
This second server was fuller...
Thank you @Joris L. for your kind reply. I was going to disable KSM as you suggested, but I noticed that memory consumption has now dropped to 27.62GB. That is absurd: yesterday morning it had risen to nearly 60GB, and in the meantime I did nothing at all to change the situation. I wasn't even...
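For reference, disabling KSM as suggested above would look roughly like this on a standard Proxmox VE install (a sketch, assuming the default `ksmtuned` service is what drives KSM on the host):

```shell
# Stop the KSM tuning daemon and keep it from starting at boot
systemctl disable --now ksmtuned

# Writing 2 stops KSM and unmerges all currently shared pages
echo 2 > /sys/kernel/mm/ksm/run
```

Note that unmerging pages will *increase* apparent RAM usage, since previously deduplicated pages become separate copies again.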
I'm having the same behaviour here. I have a bare-metal server with 2x 1TB NVMe set up as ZFS (RAID 1) and 64GB RAM.
I'm running 6 VMs with ballooning enabled, for a nominal total of 26GB of RAM.
The RAM graph shows 49GB of usage and no KSM sharing.
Is there a way to know what is consuming the other 23GB of...
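On a ZFS root install like the one described above, the unexplained RAM is very often the ZFS ARC (read cache), which by default may grow to roughly half of system RAM. A sketch of how to check and cap it (the 8GB limit below is an arbitrary example value, not a recommendation):

```shell
# Show current ARC size and target
arc_summary | head -n 40
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at 8GB (8 * 1024^3 bytes); takes effect at next boot
echo 'options zfs zfs_arc_max=8589934592' > /etc/modprobe.d/zfs.conf
update-initramfs -u
```

ARC memory is released under pressure, so it is cache rather than "lost" RAM, but it does show up in the host's usage graph.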
I had a similar problem today: I tried to import a VirtualBox VDI, but the VM wouldn't boot in any way. I suppose it was due to GPT or UEFI. I added a UEFI disk, set the BIOS accordingly and so on, but nothing worked.
In the end, I solved it by converting the VM to OVF on the VirtualBox server with:
vboxmanage export VM_Name -o...
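To round out the workflow: once the OVF exists, Proxmox can import it directly with `qm importovf`. A sketch, where the VMID `120`, the file name, and the storage `local-lvm` are all placeholders for illustration:

```shell
# Hypothetical full export (the original post's command is truncated above)
vboxmanage export VM_Name -o VM_Name.ovf

# On the Proxmox host: create VM 120 from the OVF, placing disks on local-lvm
qm importovf 120 ./VM_Name.ovf local-lvm
```

After the import you may still need to adjust BIOS type (SeaBIOS vs OVMF) and disk bus in the VM's hardware settings to match how the guest was installed.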
Same problem here with Proxmox 6.3-2: I created the cluster from the first node, pve-01, and joined the cluster from the second node, pve-02.
Everything worked fine on pve-01, including full management of pve-02. But I got two issues on pve-02:
pve-02 lost its SSL certificate and I got an invalid SSL certificate error from...
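A common fix for node certificates going missing after a cluster join is to regenerate them from the cluster CA; a sketch, to be run on the affected node (pve-02 here):

```shell
# Regenerate the node's SSL certificates from the cluster CA
pvecm updatecerts --force

# Restart the web proxy so it picks up the new certificate
systemctl restart pveproxy
```
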
Is this a typo?
post-up echo > /proc/sys/net/ipv4/ip_forward
Shouldn't it be:
post-up echo "1" > /proc/sys/net/ipv4/ip_forward
Or am I missing something?
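Assuming the line really is meant to enable forwarding, the more conventional way to make it persistent is via sysctl rather than a `post-up` echo (a sketch of the usual approach):

```shell
# One-off, takes effect immediately
sysctl -w net.ipv4.ip_forward=1

# Persistent across reboots
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p
```
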
I don't know if it is still relevant, but I did the same today on an OVH server, and it worked like a charm. Proxmox identifies two network cards on an OVH server with vRack:
eno1: the network card attached to public network
eno2: the network card attached to vRack
Go to the Proxmox web GUI. In "System"...
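The resulting `/etc/network/interfaces` typically ends up looking something like this: one bridge per NIC, with the public one carrying the host IP. Addresses below are placeholders; only the interface names eno1/eno2 come from the post:

```shell
# /etc/network/interfaces (sketch)
auto vmbr0
iface vmbr0 inet static
    address 203.0.113.10/24
    gateway 203.0.113.254
    bridge-ports eno1    # public network
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno2    # vRack
    bridge-stp off
    bridge-fd 0
```

VMs attached to vmbr1 then sit directly on the vRack, while vmbr0 carries public traffic.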
Yes, I've never used it, but I know it is possible using IPMI. So, your suggestion is to use the Proxmox 6.2 custom ISO and choose a 3-disk ZFS array during installation?
Hello,
I have an OVH bare-metal server:
Intel Xeon-E 2136, 6 cores @ 3.3GHz
128GB DDR4 ECC RAM
3x 1TB NVMe with no RAID controller
I would like to switch from Ubuntu 18.04 and VirtualBox to Proxmox. OVH provides 3 ready-made configurations for Proxmox:
Proxmox 5 VE (3x soft RAID 1)
Proxmox 5 VE ZFS...