Deciding between Proxmox and VMWare

MrTK

Nov 12, 2023
Hey guys,

Hope everyone is doing well!

I had a few questions about ProxMox, as I've never used it before, versus the VMWare suite of products, to compare the capabilities of both. I've watched a few videos and read a few articles, but although they did show specific use cases, I didn't get the full answers I was looking for:

1 - Am I able to set up multiple VLANs using ProxMox as I normally do with the VMWare set of tools like ESXi, VCSA, etc.?

2 - Can I control RAID levels and filesystem types (i.e. XFS vs EXT3/4) for individual drives used as datastores, or does ProxMox fully control all drives on the system? For example, if I wanted to roll my own RAID 6 or set up a disk with XFS or ZFS for use with ProxMox via the shell, would ProxMox recognize such a disk and be able to use it as is? As another example, can I use utilities such as HP's ssacli to create such storage via HW RAID controllers?

3 - KVM: can I access KVM VM-defined variables (i.e. parameters) from within each VM I create with ProxMox? For example, can I access the VM specifics as defined in the libvirt / KVM XML, such as custom parameters defined in the domain XML file, which would be accessible / retrievable from within the running KVM guest for use on the guest? The purpose is to use them to customize apps or the VM.

4 - Guest disk expansion and shrinking. KVM has had the rather annoying requirement in the past of having to copy the entire drive to resize it. For resizing, one recommendation is to make a backup copy of the disk, which is then used as input to qemu-img convert to resize or shrink, etc. Is there an easier way with ProxMox tools, perhaps in the UI, or is this something one would do by dropping to the shell and working with KVM / libvirt / qemu tools?

5 - Does ProxMox offer a snapshot feature equivalent to the one in VMware solutions? It's instantaneous there, and I'm wondering if the same exists in ProxMox.

6 - Does ProxMox offer live migration of guest VMs between ProxMox hosts?

7 - Does ProxMox support adding and assigning physical devices to guest VMs, such as USB devices (i.e. thumb drives), or assigning add-on PCIe cards like HBAs to individual guests?

8 - How have folks found ProxMox stability on random host reboots? In other words, if power outages occur, how resilient is ProxMox in handling these when the entire host crashes with 12+ guest VMs on it? Any bad experiences? :)

9 - Are there any hardware restrictions or limitations on using older hosts, like early HP machines or off-the-shelf 10-15 year old physical boxes? In other words, how old a physical host can I install ProxMox on?

10 - If I have mixed storage, such as SAN, DAS and NAS mounts, can ProxMox support all of these, and can I use them simultaneously on the host? Some virtualization technologies, for example, can't support DAS storage while also using NAS storage, hence the question.

11 - ProxMox guest VM and host metrics and monitoring. What tools, if any, does ProxMox offer for this?

12 - Any gotchas or things to keep in mind when using and moving to ProxMox from VMware?

13 - Converting a VMDK to KVM qemu drives. How easy is this, and is there documentation?

14 - Migrating Windows VMware guests to ProxMox / KVM? Are there documented instructions? How easy is this, if possible?

15 - What are the license restrictions with ProxMox, if any, for basic home / lab use, i.e. non-corporate use?

Cheers,
 
First, the correct spellings of the products are Proxmox VE (or PVE) and VMware.

Am I able to setup multiple VLAN's using ProxMox as I normally do with VMWare set of tools like ESXi, VCSA etc?
I find it much easier to do in PVE than in VMware.
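For reference, a minimal sketch of what a VLAN-aware bridge looks like in /etc/network/interfaces on a PVE host (eno1 and the addresses are placeholders for your own NIC and network):

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

Each VM NIC then simply gets its VLAN tag set in the GUI, or e.g. qm set <vmid> -net0 virtio,bridge=vmbr0,tag=20 on the CLI.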

Can I control RAID levels, filesystem types (ie XFS vs EXT3-4) for individual drives used as datastores or does ProxMox fully control all drives on the system? For example, if I wanted to roll my own RAID 6 or setup a disk with XFS or ZFS for use with ProxMox via the shell, will ProxMox recognize such a disk and be able to use it as is? Another example, is can I use utilities such as HP's ssacli to create such storages via HW RAID controllers?
Yes, and again it's much easier (or only possible at all) in PVE, not in VMware.

KVM, can I access KVM VM defined variables (ie parameters) from within each VM I create with ProxMox? For example, can I access the VM specific's as were defined by the libvirt / KVM XML such as custom parameters defined on the domain XML file which would be accessible / retrievable from within the running KVM VM guest for use on the guest? Purpose is to use them to customize apps or the VM using these.
There are some DMI-related (SMBIOS) variables exposed in the GUI so that you can change them.
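A rough sketch of how that can be used, assuming VM ID 101 and a Linux guest (the serial string is just an example value):

# on the PVE host: store a custom string in the VM's SMBIOS serial field
qm set 101 -smbios1 serial=role-webserver
# inside the guest: read it back and use it to customize apps or the VM
dmidecode -s system-serial-number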

Guest disk expansions and shrinking. KVM has a rather annoying requirements of having to copy the entire drive to resize it in the past. For resizing, one recommendation is to make a backup copy of the disk which is then used for input to the qemu-img convert to resize or shrink etc. Is there an easier way with ProxMox tools, perhaps in the UI or would this be something one would do by dropping to the shell and working with KVM / libvirt / qemu tools with?
Growing is easy, shrinking is as hard as it would be on any hypervisor, because "it depends" on the guest.
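Growing can be done from the UI (resize the disk on the VM's Hardware tab); the CLI equivalent is a one-liner (VM ID 101 and scsi0 are placeholders):

# grow the virtual disk by 10 GiB; the partition/filesystem inside the guest
# still has to be grown afterwards (e.g. growpart plus resize2fs or xfs_growfs)
qm resize 101 scsi0 +10G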

Does ProxMox offer an equivalent snapshot feature to the one on VMware solutions? It's instantaneous there and wondering if the same exists via ProxMox?
Yes

Does ProxMox offer live migration between ProxMox hosts of guest VM's?
Yes, and both dedicated and distributed shared storage options are available to support it.
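A rough example, assuming VM ID 101 and a cluster node named pve2:

# live-migrate the running VM to another node; with shared storage only the
# RAM state moves, with local disks you can add --with-local-disks
qm migrate 101 pve2 --online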

Does ProxMox support adding and assigning physical add on PCIe cards to guest VM's, such as USB devices (ie thumb drives) or assigning PCIe HBA cards to individual guests?
Yes.
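Rough CLI sketches (the PCI address and the USB vendor:product ID are placeholders, and IOMMU has to be enabled on the host for PCIe passthrough):

# pass a PCIe device (e.g. an HBA) through to VM 101
qm set 101 -hostpci0 01:00.0
# pass a USB device through by vendor:product ID
qm set 101 -usb0 host=0951:1666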

How did folks find ProxMox stability on random host reboots? In other words, if power outages occur, how resilient is ProxMox to handling these when the entire host crashes with 12+ guest VM's on it? Any bad experiences? :)
Running any server load without a proper UPS is a bad experience. Buy proper hardware and you'll have no problems.

Any hardware restrictions or limitations on usage of older hosts like early HP machines or off the shelves 10-15 year old physical boxes, off the shelf type of hardware? In other words, how old of a physical host can I use to install ProxMox on?
Much better than what VMware offers. It's Linux, so it'll run on almost everything.

If I have mixed storages, such as SAN, DAS, NAS mounts, can ProxMox support all of these, and can I use these simultaneously on the host? Some virtualization technologies for example, can't support DAS storage while also using NAS storage hence the ask here.
Yes, that works.

ProxMox guest VM and host metrics and monitoring. What tools if any does ProxMox offer for this?
There are InfluxDB and Graphite connectors to directly export all built-in metrics.
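As a rough sketch, the metric-server configuration ends up in /etc/pve/status.cfg and can also be set up under Datacenter -> Metric Server in the GUI; the name, server address and port below are placeholders (8089 being the usual InfluxDB UDP port):

influxdb: monitoring
        server 192.168.1.50
        port 8089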

Any gotcha's or things to keep in mind when using and moving to ProxMox from VMware?
Enjoy the happiness. I don't get how people can work with VMware. We have SO MANY problems with it ... and every extra feature has to be paid for with piles of money.

Converting a VMDK to KVM qemu drives. How easy is this and is there documentation?
It depends on what you find easy, but there is qemu-img, the Swiss Army knife of VM disk conversion tools.
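A typical invocation (file names are placeholders):

# convert a VMware disk to qcow2, with a progress bar
qemu-img convert -p -f vmdk -O qcow2 olddisk.vmdk newdisk.qcow2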

Migrating Windows VMware guests to ProxMox / KVM? Are there documented instructions? How easy is this if possible.
It depends, of course, on what you did inside your guest, but there is a general path.

What are the license restrictions with ProxMox, if any for basic home / lab use? ie non corporate use.
This is not VMware ... PVE is open source, so there are no restrictions. You can use it completely for free, including getting updates from a potentially less stable repository; the enterprise repository with the longest-tested, most mature packages is only included with the various subscription levels.
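For reference, the no-subscription repository is a single apt line, e.g. on a current PVE 8 / Debian bookworm install (adjust the codename for other releases):

deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription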
 

1 - Am I able to setup multiple VLAN's using ProxMox as I normally do with VMWare set of tools like ESXi, VCSA etc?
Yes, you can use VLANs.
2 - Can I control RAID levels, filesystem types (ie XFS vs EXT3-4) for individual drives used as datastores or does ProxMox fully control all drives on the system? For example, if I wanted to roll my own RAID 6 or setup a disk with XFS or ZFS for use with ProxMox via the shell, will ProxMox recognize such a disk and be able to use it as is? Another example, is can I use utilities such as HP's ssacli to create such storages via HW RAID controllers?
You can use anything Linux can use; however, I strongly recommend ZFS (don't use HW RAID with ZFS).
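As a rough sketch of the roll-your-own path (disk paths and names are placeholders; a RAIDZ2 pool is roughly the ZFS counterpart of RAID 6, and the GUI under the node's Disks -> ZFS panel can do much the same):

# create the pool from the shell ...
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 /dev/disk/by-id/DISK3 \
  /dev/disk/by-id/DISK4 /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6
# ... then register it as a storage for VM disks and containers
pvesm add zfspool tank-vm --pool tank --content images,rootdir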
3 - KVM, can I access KVM VM defined variables (ie parameters) from within each VM I create with ProxMox? For example, can I access the VM specific's as were defined by the libvirt / KVM XML such as custom parameters defined on the domain XML file which would be accessible / retrievable from within the running KVM VM guest for use on the guest? Purpose is to use them to customize apps or the VM using these.
Proxmox does not use libvirt; the KVM config is in /etc/pve/qemu-server/<ID>.conf in a key: value format. You can get the info on the host with qm config <ID>, or via the Proxmox API.
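For example (VM ID 101 and node name pve1 are placeholders):

# dump the VM's config as key: value pairs on the host
qm config 101
# the same via the REST API
pvesh get /nodes/pve1/qemu/101/config

Note that this is host-side; for values you want to read from inside the guest, the SMBIOS/DMI fields mentioned above (or cloud-init) are the usual route.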
4 - Guest disk expansions and shrinking. KVM has a rather annoying requirements of having to copy the entire drive to resize it in the past. For resizing, one recommendation is to make a backup copy of the disk which is then used for input to the qemu-img convert to resize or shrink etc. Is there an easier way with ProxMox tools, perhaps in the UI or would this be something one would do by dropping to the shell and working with KVM / libvirt / qemu tools with?
You can grow a disk easily; shrinking is more complicated. Imagine having XFS inside your VM: XFS does not support shrinking the filesystem at all, and ext4 only allows shrinking offline. I can't imagine VMware can shrink VM disks that way either.
5 - Does ProxMox offer an equivalent snapshot feature to the one on VMware solutions? It's instantaneous there and wondering if the same exists via ProxMox?
Yes, depending on the storage backend (a minimal command example follows the list):
ZFS snapshots are used on ZFS
thin snapshots on LVM-thin
qcow2-internal snapshots on qcow2 images
Ceph/RBD snapshots on Ceph, you get the idea
no snapshots on plain raw image files (e.g. on directory storage), obviously
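A minimal command example (VM ID and snapshot name are placeholders):

# take a snapshot of VM 101 (add --vmstate 1 to also include RAM state)
qm snapshot 101 before-upgrade
# roll back to it or remove it later
qm rollback 101 before-upgrade
qm delsnapshot 101 before-upgrade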

6 - Does ProxMox offer live migration between ProxMox hosts of guest VM's?
Yes
7 - Does ProxMox support adding and assigning physical add on PCIe cards to guest VM's, such as USB devices (ie thumb drives) or assigning PCIe HBA cards to individual guests?
Yes
8 - How did folks find ProxMox stability on random host reboots? In other words, if power outages occur, how resilient is ProxMox to handling these when the entire host crashes with 12+ guest VM's on it? Any bad experiences?
No problems so far, but no guarantees; I recommend a UPS against power outages ;)
9 - Any hardware restrictions or limitations on usage of older hosts like early HP machines or off the shelves 10-15 year old physical boxes, off the shelf type of hardware? In other words, how old of a physical host can I use to install ProxMox on?
Anything that runs 64-bit Linux with virtualisation extensions; LXC containers you can run even without these.
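A quick way to check a candidate box (a count greater than 0 means the CPU advertises Intel VT-x or AMD-V):

grep -c -E '(vmx|svm)' /proc/cpuinfo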
10 - If I have mixed storages, such as SAN, DAS, NAS mounts, can ProxMox support all of these, and can I use these simultaneously on the host? Some virtualization technologies for example, can't support DAS storage while also using NAS storage hence the ask here.
Yes, you can have multiple storages in Proxmox.
11 - ProxMox guest VM and host metrics and monitoring. What tools if any does ProxMox offer for this?
The Proxmox GUI provides CPU / MEM / IO graphs, and you can add external monitoring solutions on top.
12 - Any gotcha's or things to keep in mind when using and moving to ProxMox from VMware?
I find it much easier to work with, but having a 25-year Linux/Debian background might make me biased ;)
13 - Converting a VMDK to KVM qemu drives. How easy is this and is there documentation?
qemu-img can convert between multiple formats.
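PVE also ships a small wrapper around it that converts and attaches a disk image to an existing VM in one step; a rough sketch, assuming VM 101 exists and a storage named local-zfs:

# import the VMware disk into the PVE storage, converting on the fly
qm importdisk 101 olddisk.vmdk local-zfs
# the imported disk then shows up as an unused disk on the VM and can be attached in the GUI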
14 - Migrating Windows VMware guests to ProxMox / KVM? Are there documented instructions? How easy is this if possible.
Probably; I'm not familiar with it, but I imagine you have to uninstall the VMware drivers and install the QEMU VirtIO drivers.
15 - What are the license restrictions with ProxMox, if any for basic home / lab use? ie non corporate use.

Proxmox is open-source software; you can use it with the no-subscription repo for any purpose, for free.
I highly recommend getting a subscription for commercial purposes to get commercial support if needed.
The subscription gives you access to the subscription repo.
 
A few more questions:

16 - AD / LDAP / Kerberos / OAuth2 / SAML / Radius integration?

17 - RBAC? Is it extensive, with many roles, or does the capability at least exist to fine-tune access for users?

18 - Does Proxmox VE have the concept of ESXi plus VCSA to manage all hosts, or is Proxmox VE installed on each individual node? I watched a clustering video, and it appears individual Proxmox VE hosts talk together by connecting one to the other. The presenter had to change the ID of each VM to then try and reconnect them later. However, the Proxmox VE hosts somehow talked via what looked like a floating IP, so I'm guessing Proxmox VE made some use of keepalived / haproxy, perhaps, to present the user with a whole-cluster view regardless of which physical Proxmox VE host the connection was made to? I did not understand it fully. Hoping to get some insight here.

19 - I have six 2TB SATA SSDs presented via a P440AR on this HP unit, 4 more 2TB Crucial NVMe drives, plus Kingston(?) 2x250GB SATA SSDs in a RAID 1 configuration via the P240AR. The last will hold Proxmox VE, and the first two sets of drives will be independent ZFS storages. At least that's the plan for maximum redundancy in case of failure, which I prefer. The alternative is to just present the drives individually, but that would leave redundancy to the VM guests. I wanted to get folks' feedback on how everyone here sets up their hosts, for some best practices. When I tried ZFS on Linux nearly a decade ago it wasn't particularly great at the time, but I'm hearing great things now.
 
16 - AD / LDAP / Kerberos / OAuth2 / SAML / Radius integration?
Yes, but I'm unsure about Kerberos and Radius integration; maybe SAML via Keycloak ... but additionally also OpenID Connect (via Keycloak).


17 - RBAC? Is it extensive with many roles or at least capability exists to fine tune access for users?
Yes, but the roles are managed inside PVE, not externally via Keycloak (or at least not that I know of).

18 - Does Proxmox VA have the concept of ESXi then VCSA to manage all hosts or is Proxmox VA installed on each individual node? Watched a clustering video and it appears individual Proxmox VA hosts talk together by connecting one to the other. The host explaining had to change the ID of each VM to then try and reconnect them later. However, Proxmox VA hosts somehow talked via a what looked like a floating IP so I'm guessing Proxmox VA made some use of keepalived / haproxy perhaps to present the user with a whole cluster view regardless of which physical Proxmox VA host the connection was made to? I did not undertand it fully. Hoping to get some insight here.
You can connect to each node and manage the whole cluster. There is no master and no managing node, every node is on the same level and all options are possible from each node.


19 - Have 6 2TB SATA SSD's presented via a P440AR on this HP unit and 4 more 2TB Crucial NVMe drives plus Kingston(?) 2x250GB SATA SSD's in RAID 1 configuration via the P240AR. The last will hold Proxmox VE, and first two set's of drives will be independent ZFS storages. At least that's the plan for maximum redundancy in case of failure, which I prefer. The alternate being just present drives individually but that would have to leave redundancy to the VM guests then. Wanted to get folks feedback how everyone here set their hosts up for some best practices? When I tried ZFS on Linux nearly a decade ago it wasn't particularly great at the time but hearing great things now.
ZFS is not a clustered filesystem, so if you want a cluster with HA, failover and shared storage, maybe just go with Ceph.
 
4 more 2TB Crucial NVMe drives
ZFS (and Ceph) will eat these, because they have a big overhead when writing VM data.
ZFS (and Ceph) require datacenter-grade SSDs with a high TBW rating and power-loss protection (= capacitors, as on 22110 or U.2 form factors) to allow fast fsync I/O.
 
16 - AD / LDAP / Kerberos / OAuth2 / SAML / Radius integration?
Yes, but I'm unsure about Kerberos and Radius integration; maybe SAML via Keycloak ... but additionally also OpenID Connect (via Keycloak).
Noted, tyvm! Will try then.

17 - RBAC? Is it extensive with many roles or at least capability exists to fine tune access for users?
Yes, but the roles are managed inside PVE, not externally via Keycloak (or at least not that I know of).
Could you please elaborate a bit more? Not at all familiar with keycloak. If within PVE, then that's fine as long as it can pair up with one of the solutions listed in #16.

18 - Does Proxmox VA have the concept of ESXi then VCSA to manage all hosts or is Proxmox VA installed on each individual node? Watched a clustering video and it appears individual Proxmox VA hosts talk together by connecting one to the other. The host explaining had to change the ID of each VM to then try and reconnect them later. However, Proxmox VA hosts somehow talked via a what looked like a floating IP so I'm guessing Proxmox VA made some use of keepalived / haproxy perhaps to present the user with a whole cluster view regardless of which physical Proxmox VA host the connection was made to? I did not undertand it fully. Hoping to get some insight here.
You can connect to each node and manage the whole cluster. There is no master and no managing node, every node is on the same level and all options are possible from each node.
This is a huge plus and a smarter way of doing things IMO! Thank you!

19 - Have 6 2TB SATA SSD's presented via a P440AR on this HP unit and 4 more 2TB Crucial NVMe drives plus Kingston(?) 2x250GB SATA SSD's in RAID 1 configuration via the P240AR. The last will hold Proxmox VE, and first two set's of drives will be independent ZFS storages. At least that's the plan for maximum redundancy in case of failure, which I prefer. The alternate being just present drives individually but that would have to leave redundancy to the VM guests then. Wanted to get folks feedback how everyone here set their hosts up for some best practices? When I tried ZFS on Linux nearly a decade ago it wasn't particularly great at the time but hearing great things now.
ZFS is not a clustered filesystem, so if you want a cluster with HA, failover and shared storage, maybe just go with Ceph.
Noted, tyvm. Yep, ZFS is not a DFS, just local. I've been using GlusterFS for some time, but for nothing major like a datastore in my current setup, for good reason. GlusterFS has been relatively stable, though I only use it on VMs; I do have my own one-time horror story, which is another reason. I never really got into Ceph due to some horror stories I read a long time ago, but GlusterFS at least has significant performance problems, as its IO is bound to a single core via a single thread, which vastly limits its potential and usability. So if you hit it with too much IO, it pops because that core can't keep up.

4 more 2TB Crucial NVMe drives
ZFS (and Ceph) will eat these, because they have a big overhead when writing VM data.
ZFS (and Ceph) require datacenter-grade SSDs with a high TBW rating and power-loss protection (= capacitors, as on 22110 or U.2 form factors) to allow fast fsync I/O.
Interesting. I'm actually looking for the most direct path data can take to the drives, to minimize excessive write operations. I already strained the system when one NVMe failed and crashed the entire host in one of the most spectacular ways I've seen.

Any thoughts on RAID 6, software or HW, and XFS? Wondering what folks' experience is in terms of drive lifetime impact and fault tolerance when used with Proxmox VE? I've had mixed results in testing. It tends to write quickly initially due to the cache, e.g. when copying an ISO over and over to the same folder, but then takes 15-30 minutes to fully sync the data to the drives, and it was blocking things like hdparm (it timed out).

For the SATA SSDs, in various configurations, I did reach 10GiB/s and went as low as 5MB/s, depending on the type of write happening via fio over a 3-minute interval per thread. I'm not too concerned with read operations, as these drives are so fast that measuring reads is almost beside the point.
 
Could you please elaborate a bit more? Not at all familiar with keycloak. If within PVE, then that's fine as long as it can pair up with one of the solutions listed in #16.
Authentication works with almost all of them, yet the roles have to be set inside PVE. A new user can't do anything by default; you need to add e.g. user1 to an admin group and so on.
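A rough sketch of what that looks like on the CLI (group and user names are placeholders, the realm depends on your setup, and the same can be done in the GUI under Datacenter -> Permissions):

# create a group and give it the built-in Administrator role on the whole tree
pveum group add admins
pveum acl modify / -group admins -role Administrator
# put an existing user into that group
pveum user modify user1@pve -group admins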
 
It's a bad idea to use a non-datacenter SSD (as opposed to a vendor SSD, which is datacenter grade) behind a HW controller, especially in production.
The disk's own cache is disabled by the HW controller; this allows hotplug and forces the use of the HW controller's cache.
The cache of a consumer SSD is mandatory for it not to wear out too quickly.

So either HW RAID 1 with ext4/LVM-thin
(DC SSDs recommended, but wearout will be slower than with ZFS)
or
ZFS RAID 1 without a HW controller, but with a real HBA controller, DC SSDs and plenty of RAM.

(sorry for my wording)
 
12 - Any gotcha's or things to keep in mind when using and moving to ProxMox from VMware?

You may also want to ask on VMware-centric (or other hypervisor-centric) forums, because people who jumped over and back, or gave it a pass, won't be reading here. And do not expect constructive criticism on a forum focused on one solution only. :)
 
Any thoughts on RAID 6, Software or HW and XFS? Wondering what's folks experience in terms of drive lifetime impact and fault tolerance when used with Proxmox VE? Had mixed results in testing. Tends to write quickly initially due to the cache, via something like copying an iso over and over to the same folder, but takes 15-30 minutes thereafter to fully sync data to the drives and was blocking things like hdparm (it timed out).
The answer to this question is, as with everything, "it depends" - mostly on use case and budget.

For a single host, you can use HW RAID, mdadm, etc., but ZFS is by far the preferable method. It is more resilient and works in conjunction with the host to provide the best virtualization-related features: it has subvolumes, snapshots, inline compression, and more. The downside is that ZFS is RAM hungry, so ensure you're accounting for that in your deployment plan. As for RAID 6: generally speaking, parity (or EC) RAID performs poorly for virtualization. Use striped mirrors for your VM storage. You can use RAID 6 (or RAIDZ2) for payload data such as videos, documents, etc.

As for performance... you need to honestly assess your use case and requirements. For starters, understand that 1M sequential reads will yield huge MB/s numbers on most storage options, but that is NOT WHAT YOU WANT. Virtualization thrives on low-latency storage. Having said that, getting from 0 to 80% of your total realizable performance is relatively easy; getting to 90% will cost double, getting to 95% quadruple, etc. You will have the best overall experience if you plan out what you need before committing to any purchases.

If the question is with regard to cluster storage, that's a whole other conversation.
 
You can connect to each node and manage the whole cluster. There is no master and no managing node, every node is on the same level and all options are possible from each node.

This is a huge plus and a smarter way of doing things IMO! Thank you!

Until the proxying breaks. There is a JS SPA that talks to the node via the API, which hits a proxy that runs equally on each node. SSH keys are thrown around in a bizarre way; to this moment I cannot put a new node in and give it the same name/IP as a dead one with peace of mind that everything will keep working. Having said that, it is all based on static IPs, so I suppose everyone had better run this behind NAT for the management of the nodes; if you can't influence the range of IPs you get, it's not that great. Forget DHCP (or SLAAC) in a cluster, and it's not fun if your IPv6 prefix changes either. No mass deployment via unattended PXE install out of the box. You cannot use DNS; the nodes essentially have a mind of their own for name "resolution".
 
8 - How did folks find ProxMox stability on random host reboots? In other words, if power outages occur, how resilient is ProxMox to handling these when the entire host crashes with 12+ guest VM's on it? Any bad experiences? :)

Just earlier today there was a post: https://forum.proxmox.com/threads/unexpected-fencing.136345/

Clusters are a very different experience from running it on a single node; a lot depends on corosync plus PVE's own HA resource manager, and it requires a low-latency environment. I do not have enough experience with this myself yet, and of course there might be a million people running it all fine and it just seems this way, but there are lots of questions on the forum about cluster nodes rebooting in different situations (besides cases where hardware was clearly to blame).
 
Until the proxy-ing breaks. There is the JS SPA that talks to the node via API, that hits a proxy which runs equally on each node. SSH keys are thrown around in bizzare way,
I don't understand what this means. Care to elaborate?

To this moment I cannot put a new node in and give it the same name/IP after a dead one with a peace of mind everything will keep working.
So don't do that. It's actually explained in the documentation. Why is this important?

Having said that, it is all based on static IPs, so I suppose everyone better runs this behind NAT for the management of nodes,
If you wish to have a single endpoint for the management API, that is correct, but I would restate it as: "Getting a single API endpoint is as simple as creating a one-to-many NAT. You can even load-balance API requests across multiple nodes."

if you can't influence the range of IPs you get not that great.
I don't understand what that means. Please elaborate.

No mass deployment via unattended PXE install out of the box.
Says who? All it means is that you have to pre-plan your IP mappings. It would be nice if there were a Proxmox-blessed way to do this, but it's not like there aren't other means.
Cannot use DNS, the nodes have essentially mind of their own for names "resolution".
For clusters GENERALLY it's a good idea to use fixed names/IPs and keep the DNS mapping in hosts files. You don't want a DNS outage to crash your entire infrastructure.
 
I dont understand what this means. care to elaborate?

I did not mean to go bashing tonight, just to give examples. There are other threads. :)

So dont do that. its actually explained in the documentation. Why is this important?

No, it's not explained; what is explained is that one should never turn on a node that was removed from the cluster again. That is not what I meant. I meant a node that actually dies (you can't plan for that), so you delete it from the (rest of the) cluster successfully, then you install a new node but give it the same name/IP, and everything gets messed up. There's another thread for that, and older bug reports are still open. It is important to be able to reuse IP addresses, if for nothing else than because one does not want to keep track of what was once lost and maintain a blacklist.

If you wish to have a singular endpoint to the management api, that is correct, but I would restate it "Getting a single api endpoint is as simple as creating a one to many NAT. you can even load balance multiple API requests across multiple nodes."

I did not mean this; if I wanted a single endpoint I could easily round-robin the cluster, which all sounds great. What I meant was that if you get a public range of IPs, or even a non-public range not assigned arbitrarily by you, for your set of nodes, it's really horrible if you later have to move it, because everything is so IP-dependent.

I dont understand what that means. Please elaborate.

You put your cluster in a datacentre where, for that network segment, you are allowed to use 172.16.x.y, and then you move it somewhere where you get 172.20.x.y ... you will have fun. Got a new IPv6 prefix? Same situation.

Says who? all it means you have to preplan your IP mappings. It would be nice if there was a proxmox blessed way to do this, but its not like there arent other means.

Everyone has to run their own Debian PXE install for that. It is literally not supported, as anything other than the ISO installation is not. I'm not saying it's impossible, but you are on your own.

For clusters GENERALLY its a good idea to use fixed names/ips and keep the dns mapping in hosts files. you dont want a dns outage to crash your entire infrastructure.

Fixed names, yes, although even that should not be important; node names should not become unique IDs. Fixed IPs: obviously I'm not of the same opinion, due to the above. DNS mapping in hosts file-S: I definitely do not agree. There's currently literally one /etc/hosts per node and it is not synced across nodes, yet it has to contain a reference to the node itself for no good reason. Duplicating that, the same information is in corosync.conf, but that is not a resolver config nor a hosts file, obviously. Meanwhile the SSH connections rely on fixed IP addresses but at the same time on aliased keys; some traffic goes through SSH, some through SSL API calls. This is not about wanting the nodes to rely on DNS that can have an outage; this is about having them use an autonomous resolver at any given time, which will keep working if they become an island, but which can be updated centrally via authoritative DNS if need be. If that DNS is down, no problem, the last known config keeps running until the next update.

But let's not hijack the thread. :)
 
But let's not hijack the thread. :)
fair enough.

I do want to just GENERALLY note that Proxmox (like other clustered software) does not look at individual nodes as important. The principle of "cattle, not pets" applies here: individualistic elements such as names and IPs are not really important. The "proper" way to deal with any changes is to get rid of a node and add a new one, and I agree that means a lot of customization is left to user-provided automation. In that respect, other clustered options (Xen, VMware) that maintain a dedicated management instance which doesn't need changing have a benefit over managing your API heads separately; I also agree with you that using names/IPs as unique identifiers is not ideal.

But having said all that, once you understand those design criteria, the system is completely workable and doesn't pose any ACTUAL limits. There isn't any NEED to have replacement nodes named the same as before; they're cattle, not pets.

You put your cluster in a datacentre and for that network segment you are allowed to use 172.16.x.y and then you are moving it somewhere where you get 172.20.x.y ... you will have fun. Got a new IPv6 prefix? Same situation.

This is true FOR ANY CLUSTER. Why would you ever do that?! Don't move clusters; stand them up anew.
 
