Proxmox or VMware, would like to get some more arguments

Ovidiu

Apr 27, 2014
I am looking to set up a new project at work: a single server hosted in the cloud, a few HDs, nothing too complicated. I personally have experience with Proxmox but also work with VMware daily. We have enough spare VMware licenses, so price isn't an immediate argument.

I have seen the basic comparison: https://www.proxmox.com/en/proxmox-ve/comparison but that doesn't really help much.

So far, both projects have the features I need.

The differences I found so far are:
- VMware seems to only do per-VM encryption
- Proxmox is able to do full disk encryption if set up on top of, say, a Debian install
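For reference, the full-disk-encryption route mentioned above usually means encrypting the storage on the Debian install with LUKS and layering Proxmox VE on top. A rough sketch, where the device name and volume group names are placeholders:

```shell
# Assumption: a plain Debian install with Proxmox VE packages, where
# /dev/sdb is a spare disk intended for encrypted VM storage.
cryptsetup luksFormat /dev/sdb            # initialise the LUKS container
cryptsetup open /dev/sdb crypt_vmstore    # unlock as /dev/mapper/crypt_vmstore
pvcreate /dev/mapper/crypt_vmstore        # use the unlocked device for LVM
vgcreate vmdata /dev/mapper/crypt_vmstore
lvcreate -l 100%FREE -T vmdata/data       # thin pool for VM disks
# Then register vmdata/data as LVM-thin storage via the GUI
# or in /etc/pve/storage.cfg.
```

The unlock step has to happen at boot (interactively or via crypttab) before the VM storage becomes available.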

Proxmox can use ZFS or LVM-thin, but VMware has similar features. Or am I missing something important here?

For backup I can use Veeam Agent for Linux to back up VMware VMs, while on the other side I can use the built-in backup of Proxmox. Both can use a similar number of backup targets, so no big difference.

Obviously this is a Proxmox forum but I'm still looking to get some more criteria for me to compare.
Anyone wants to chime in with some pros and cons to help me decide?
 
You are of course right, and I am a supporter; it's just that I need to present management with my reasoning for NOT using our existing VMware licenses, hence I am looking for interesting features that set Proxmox apart from VMware.
 
Better hardware support, plus built-in ZFS and Ceph distributed storage, are the winners here.

VMware also has distributed storage (for extra money, of course), but I do not see any ZFS-like filesystem on VMware.
 
We have clustering and do not require a management node. We also support LXC Containers ...
 
I don't know if ESX + Veeam offers the "by the minute" snapshotting abilities of pve-zsync. Depending on your scenario, this can also be handy.
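For context, a pve-zsync job is just a cron entry, so the interval can be tightened down to minutes. A sketch, with the VM ID, target host, and pool as placeholders:

```shell
# Assumption: VM 100 lives on local ZFS, and a second host at 192.0.2.10
# has a pool named "tank". Create a replication job that keeps the last
# 10 snapshots on the target:
pve-zsync create --source 100 --dest 192.0.2.10:tank/backup \
    --name vm100job --maxsnap 10
# The job is written to /etc/cron.d/pve-zsync; edit the schedule there,
# e.g. "*/1 * * * *" for per-minute replication.
```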

In general I think anything VMware can do, Proxmox can do too, and with a bit more flexibility, while VMware needs 3rd-party apps like Veeam to add functionality. It is nice to know that the Proxmox team has thought about everything and keeps an open mind, while for over a decade VMware seems to have had a tacit agreement with Veeam not to step on its turf.

The development cycles of Proxmox seem to be faster (adding new features?) than VMware's, which by the nature of commercial software is a sluggish beast.

I love not having to run the bulky vCenter manager, and being able to manage from any device with a browser.

Education-wise, Proxmox feels like it has an easier learning curve vs VMware; anyone educated on VMware will probably have a fairly easy time grasping Proxmox.

Probably the best route is to make a test bed for your associates to assess how your usage scenario responds and make an informed decision based on that.
 
Hi, just a few thoughts,

- it is hard to make a compelling argument / key points if we don't know what your core requirements are for your deployment config (i.e., we can't address key points if we don't know what they are).

- end of the day, if you are doing a 'very simple virtualization' setup (i.e., a single physical host, subdivided into a few VMs, basically nailed up and then not touched except for patching), then many commodity hypervisors can clearly 'meet that requirement'.

Possible compelling reasons for proxmox specifically over vmware in such a scenario ?

-- VMware, even when 'you have free licenses floating around', is inherently not a free product, and it is not priced with 'all features in' but on a sliding scale of features based on your price point: a free loss-leader version that is crippled on many features; then a 'cheap' version which is functional but still missing many features; all the way up to the full-meal-deal versions, which are very much less cheap.

-- VMware licenses may not always be freely floating around, in which case a migration or an expense is in this project's future. Proxmox, on the other hand, won't be changing its price structure: you can subscribe for support, or not, if you really prefer to DIY support.

-- proxmox is likely simpler to patch and manage in the long term; so simpler patching == more frequent patching == less risk exposure since you track recent patches more tightly.
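To illustrate the patching point: keeping a Proxmox node current is ordinary Debian package management, along these lines:

```shell
# Standard update cycle on a Proxmox VE node (plain Debian underneath):
apt update          # refresh the configured pve repository
apt dist-upgrade    # pull kernel, pve-manager, qemu, etc. in one step
# Reboot only when a new kernel was installed.
```

Compare that to scheduling a vSphere update run, and the "simpler patching == more frequent patching" argument mostly makes itself.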

-- ultimately I suspect it is the ease of management; and thus longer term 'reduced effort' which could be your more compelling arguments in a 'simple use case side-by-side compare'.

Sometimes these arguments fall on deaf ears, of course, because some people are well trained to think "Virtualization === VMware" and that is the end of the discussion. In such cases, the argument is best conceded gently, after pointing out the obvious win scenario points; and not worrying if 'decision is not made on merit or logic'. Since often these sorts of decisions have nothing to do with either.


Tim
 
If it were an earlier version of Proxmox, no contest: Proxmox wins. However, after upgrading to Proxmox v4 I am seriously considering VMware.

I have had multiple stability issues since the upgrade: VMs losing connection to their disk images, and pveproxy and spiceproxy completely crashing on me. Fsyncs to my network storage dropped from the thousands to 28-40 per second.

In short, I would not recommend Proxmox in its current state to anyone.
 
Looks like your upgrade process failed? Proxmox VE 4.x is very stable. If you see slow fsyncs, there is for sure a reason for it; this is not a Proxmox VE issue.
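A quick way to quantify the fsync numbers on either side of an upgrade is the bundled pveperf tool (the path here is just an example):

```shell
# pveperf reports CPU, buffered reads, and FSYNCS/SECOND for a given path:
pveperf /var/lib/vz
# Compare the FSYNCS/SECOND line against what the storage backend should
# deliver; values around 28-40 on network storage usually point at
# sync-write latency (e.g. an NFS sync export or a missing write cache)
# rather than at Proxmox itself.
```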
 
Proxmox was the only thing that changed.
I did a full backup, a clean install of 4, recreated the cluster, and restored the backups of the VMs.

Currently pveproxy is down on most of my cluster. It was running fine after the upgrade until the servers started operating under heavy load; then everything went pear-shaped.

systemctl status pveproxy.service
● pveproxy.service - PVE API Proxy Server
Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled)
Active: failed (Result: timeout) since Tue 2017-06-13 06:49:08 SAST; 1h 55min ago
Main PID: 73644 (code=exited, status=0/SUCCESS)

Jun 13 06:46:08 vwk-prox06.namaquawines.local systemd[1]: pveproxy.service start operation timed out. Terminating.
Jun 13 06:47:38 vwk-prox06.namaquawines.local systemd[1]: pveproxy.service stop-final-sigterm timed out. Killing.
Jun 13 06:49:08 vwk-prox06.namaquawines.local systemd[1]: pveproxy.service still around after final SIGKILL. Entering failed mode.
Jun 13 06:49:08 vwk-prox06.namaquawines.local systemd[1]: Failed to start PVE API Proxy Server.
Jun 13 06:49:08 vwk-prox06.namaquawines.local systemd[1]: Unit pveproxy.service entered failed state.
Jun 13 08:42:48 vwk-prox06.namaquawines.local systemd[1]: Stopped PVE API Proxy Server.

ps aux |grep pveproxy
root 21778 0.0 0.2 239608 65896 ? Ds 06:25 0:00 /usr/bin/perl -T /usr/bin/pveproxy stop
root 22367 0.0 0.2 239604 66224 ? Ds 06:32 0:00 /usr/bin/perl -T /usr/bin/pveproxy start
root 23668 0.0 0.2 239592 66100 ? Ds 06:44 0:00 /usr/bin/perl -T /usr/bin/pveproxy start
root 27516 0.0 0.2 239588 66328 pts/0 D+ 07:27 0:00 /usr/bin/perl -T /usr/bin/pveproxy start
root 35291 0.0 0.0 12732 2136 pts/1 S+ 08:52 0:00 grep pveproxy
root@vwk-prox06:~# kill -9 27516
root@vwk-prox06:~# ps aux |grep pveproxy
root 21778 0.0 0.2 239608 65896 ? Ds 06:25 0:00 /usr/bin/perl -T /usr/bin/pveproxy stop
root 22367 0.0 0.2 239604 66224 ? Ds 06:32 0:00 /usr/bin/perl -T /usr/bin/pveproxy start
root 23668 0.0 0.2 239592 66100 ? Ds 06:44 0:00 /usr/bin/perl -T /usr/bin/pveproxy start
root 27516 0.0 0.2 239588 66328 pts/0 D+ 07:27 0:00 /usr/bin/perl -T /usr/bin/pveproxy start
root 35372 0.0 0.0 12732 2216 pts/1 S+ 08:53 0:00 grep pveproxy


Creating backups of the VMs also seems to cause issues.

[Sat May 27 00:18:34 2017] INFO: task vzdump:143808 blocked for more than 120 seconds.
[Sat May 27 00:18:34 2017] Tainted: P O 4.4.49-1-pve #1
[Sat May 27 00:18:34 2017] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Sat May 27 00:18:34 2017] vzdump D ffff880002e67df8 0 143808 143807 0x00000000
[Sat May 27 00:18:34 2017] ffff880002e67df8 ffff88046e7b75c0 ffff88086ca42a00 ffff880074880e00
[Sat May 27 00:18:34 2017] ffff880002e68000 ffff880466d983ac ffff880074880e00 00000000ffffffff
[Sat May 27 00:18:34 2017] ffff880466d983b0 ffff880002e67e10 ffffffff8185c215 ffff880466d983a8
[Sat May 27 00:18:34 2017] Call Trace:
[Sat May 27 00:18:34 2017] [<ffffffff8185c215>] schedule+0x35/0x80
[Sat May 27 00:18:34 2017] [<ffffffff8185c4ce>] schedule_preempt_disabled+0xe/0x10
[Sat May 27 00:18:34 2017] [<ffffffff8185e1c9>] __mutex_lock_slowpath+0xb9/0x130
[Sat May 27 00:18:34 2017] [<ffffffff8185e25f>] mutex_lock+0x1f/0x30
[Sat May 27 00:18:34 2017] [<ffffffff8121f9ea>] filename_create+0x7a/0x160
[Sat May 27 00:18:34 2017] [<ffffffff81220983>] SyS_mkdir+0x53/0x100
[Sat May 27 00:18:34 2017] [<ffffffff81860336>] entry_SYSCALL_64_fastpath+0x16/0x75
 
I'd say that VMware beats Proxmox with regard to backup. With VMware plus other software you can restore single files for a specific point in time (or at least from a specific backup). This even works with incremental backups, which speeds up the backup process a lot.
Apart from that I'm very happy with Proxmox and would recommend it fully.

Regards,
Jonas
 
vSphere/ESXi:
+ lightweight type-1 hypervisor
+ very low memory/disk footprint
+ docker
+ imho better web-management for single host (embedded client)
+ runs from memory (can be installed on sd/usb)
+ better pass-through functionality
- steep learning curve
- not so stable (I had quite many PSODs, sometimes due to broken patches!)
- limited HW-support (even the one on VMware HCL might not be fully supported)
- difficult customisation (local software/configuration management)
- its firewall is really a bad joke
- no disk caching (either good NAS or local hw-raid with big cache)
- backup-api locked out for free version

Proxmox:
+ good hw-support
+ easy learning (if you know linux already)
+ easy customisation
+ full set of standard linux tools
+ LXC (imho losing to Docker)
- quite "fat" type-2 hypervisor
- imho the web-interface changed for the worse (I'd go back to horizontal tabs immediately if I could)
- runs from disk (installing on sd/usb is not recommended at all, and if ssd, only a good one)
- services fixed to certain ports
- pass-through still experimental
- missing docker badly!

This is my very subjective comparison of ESXi (free) and PVE (non-subscription), being used on solo hosts with local storage, for non-critical application. Your opinion might vary...
 
Yes it is, and I'm running it, but it is not integrated into PVE. You have one interface for KVM+LXC but a different one for Docker. This makes management sub-optimal at best.

BTW, it is at least questionable whether running Docker inside a VM is "the best" option. When running directly on the host, it can benefit from the same advantages as LXC running on the host (not in a KVM VM). Ultimately, LXC containers on PVE run on the host, right?
 
