Ceph really, really wants homogeneous hardware, meaning the same CPU, memory, networking, storage, storage controller, firmware, etc.
While it's true you can run a 3-node cluster, you can only tolerate a 1-node outage. With 5 nodes, you can have a 2-node...
This statement is a bit too general or dogmatic for my taste. I know of several reports in this forum where people use three-node clusters in production at SMBs.
Usually they didn't have the budget for five nodes, but can tolerate the risk of an...
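For reference, the 1-vs-2 node tolerance above is just majority-quorum arithmetic, nothing Proxmox-specific:
quorum(n) = floor(n/2) + 1
quorum(3) = 2   # a 3-node cluster survives 1 node down
quorum(5) = 3   # a 5-node cluster survives 2 nodes down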
I did intend to use the same CPU family.
Let's say (CPUs chosen to make the question clearer):
Node 1, 2 and 3: AMD EPYC™ Turin 9475F - 48C/96T - 3.65GHz - 4.8GHz boost - 256MB - 400W - SP5
Node 4 and 5: AMD EPYC™ Turin 9135 - 16C/32T -...
I am also a jilted VMware user and just discovered Proxmox. Still reading through best practices for installing and migration (we have a very small footprint: our lab is <10 Win & Linux RHEL nodes plus 4 VMs hosted via VMware 6.6 on one Dell R720 (40...
Just rebooted my Proxmox server and now it comes up with a kernel panic at block 0,0 - I've attached the screenshot.
A while ago I removed old unused kernels, though I thought I'd kept the latest plus one other. Maybe that is the issue, as the server has not...
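If it is a missing kernel, booting an older entry from the boot menu and checking what the boot loader actually knows about should confirm it (assuming the host uses proxmox-boot-tool; on plain-GRUB installs, update-grub is the rough equivalent of the refresh step):
proxmox-boot-tool kernel list   # kernels proxmox-boot-tool has registered
proxmox-boot-tool refresh       # re-sync boot entries with the installed kernels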
Shared to several places, let's hope the word gets out and we can support VMware ESXi users looking for a welcoming community to migrate towards. Thanks Tom and the rest of the Proxmox team for putting hard work into making this community...
Because of that odd qcow2 NBD module mount, where the module can't be unloaded again afterwards ...,
--> switched to .raw; it's simpler anyway and works for VM & LXC. On top of that, we do the mount directly on the file server; you can of course...
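For anyone else hitting this, the usual qemu-nbd dance looks roughly like the following (image path and mount point are placeholders); the stuck module is typically what you get when the disconnect step fails or is skipped:
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /path/to/disk.qcow2
mount /dev/nbd0p1 /mnt/img
# ... work on the image ...
umount /mnt/img
qemu-nbd --disconnect /dev/nbd0
rmmod nbd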
Set in ~/.bashrc:
alias mmon="ha-manager crm-command node-maintenance enable"
alias mmoff="ha-manager crm-command node-maintenance disable"
and then run it like:
mmon <pvenode>
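And the counterpart once the work is done, otherwise the node stays drained:
mmoff <pvenode>   # take the node out of maintenance again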
Eh, not sure that's working actually: https://forum.proxmox.com/threads/if-reboot-is-triggered-pve-node-goes-away-too-fast-before-ha-migration-is-finished.170268/. But yes, in theory that should work. :) I'd prefer to move VMs before installing...
By default we tried the "automatic move on reboot", but it was too hard on the cluster; we had some packet loss and some VMs hung.
We did not take the time to investigate whether it was possible to configure settings to make this 'live...
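If you ever revisit it, the first knobs I'd look at live in /etc/pve/datacenter.cfg: a dedicated migration network plus a bandwidth cap, so migration traffic can't starve guest traffic. A sketch with made-up values:
migration: type=secure,network=10.10.10.0/24
bwlimit: migration=102400   # KiB/s, i.e. ~100 MiB/s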
I have the same issues on NoVNC and Guacamole VNC connections.
Only on Windows VMs with spice-tools installed and Clipboard set to VNC.
Downgrading was the only way to resolve this for me as well.
Migrating Dell VMware clusters at work to Proxmox. I just make sure all the hardware is the same (CPU, memory, storage, storage controller, networking, firmware, etc).
Swapped out Dell PERCs for Dell HBA330s since ZFS & Ceph don't work with RAID...
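For anyone following along, a quick sanity check after the swap that the disks really are passed through raw (device name is an example; smartctl comes from the smartmontools package):
lsblk -o NAME,MODEL,SERIAL,SIZE   # should show real drive models/serials, not PERC virtual disks
smartctl -i /dev/sda              # SMART should talk to the physical drive directly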
This is exactly what I was looking for. Real world advice.
I have a couple of HPE ProLiant DL360 Gen 9 and one HPE ProLiant DL360 Gen 10. I am not sure if I should try to use these or buy new servers.
Again, thank you for...
100 - 22 = 78?
Yes, that's the one I meant. But you don't necessarily have to take that exact model. E.g. this one or this one, which are cheaper, would surely be fine too.
Why a PLP/DC drive? Have a look here:
-...
Thanks, I reworded my ramblings. I wanted to point out that even maintenance mode is not needed if you configure the shutdown policy correctly. Then, in case of a reboot, the ha-manager will take care of everything for you ;)
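For reference, that policy is the ha option in /etc/pve/datacenter.cfg; setting it to migrate is what makes the ha-manager live-migrate guests away before a shutdown/reboot:
ha: shutdown_policy=migrate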