Does Proxmox degrade Windows performance?

Perrone

New Member
Mar 30, 2022
Is Proxmox able to run a Windows VM with close-to-native performance?

I have searched exhaustively for a benchmark of a native installation vs. a VM installation, but no luck there. Strangely, I couldn't even find any clue to the answer.

Any external links or user reports are very welcome. Thanks.
 
You always get overhead when using virtualization, and a virtualized Windows will never be as fast as a bare-metal Windows. How big the overhead will be really depends on a lot of factors. CPU overhead isn't that bad if you choose "host" as the CPU type. If you use something else for better compatibility, like "kvm64" as the CPU type, your virtual CPU won't make use of all the instruction sets the physical CPU could offer, so your virtualized CPU might be way slower. For RAM, the overhead I've seen ranges from very high to very low; things like overprovisioning, ballooning and KSM might reduce RAM performance. Where the overhead is really terrible is storage, because each additional layer of virtualization, nested filesystems, CoW, thin provisioning, storage abstraction, mixed block sizes, encryption, ... won't just add up the overhead, it will multiply, resulting in exponentially growing read/write amplification.
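As a concrete illustration of the CPU-type point, here is a minimal sketch of checking and changing it from the PVE shell (the VM ID 100 is just an example; check your own IDs with qm list):

Bash:
# list VMs and their IDs (100 below is only an example)
qm list
# show whether a CPU type is set explicitly for this VM
qm config 100 | grep -i cpu
# expose the host CPU model so the guest sees all available instruction sets
# (note: this ties the VM to this CPU model and hinders live migration to hosts with different CPUs)
qm set 100 --cpu host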

Best you use the forum's search function and search for some user benchmarks. And you don't need to pay to test PVE with all its features, so if you already have a machine and a spare SSD you could easily do some benchmarks yourself on your own hardware.
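As a rough sketch of such a do-it-yourself storage benchmark (assuming a Linux environment with fio installed; the file size, runtime and job count are arbitrary examples, and on Windows you would use fio's windowsaio ioengine instead of libaio), you could run the same command on bare metal and inside a VM on the same disk and compare the reported IOPS:

Bash:
# 4k random writes, direct I/O so the page cache doesn't hide the virtualization overhead
fio --name=randwrite --ioengine=libaio --rw=randwrite --bs=4k --size=4G \
    --iodepth=32 --numjobs=1 --direct=1 --runtime=60 --time_based
# repeat with --rw=randread and --rw=read to cover the read side as well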
 
Last edited:
  • Like
Reactions: Perrone and LnxBil
Yeah, I don't expect a VM to get the SAME performance as a native install. But I would like to know the performance loss in the fastest possible VM setting, I mean the setting with the least number of abstractions. In such a case, would it be possible for a Windows VM running alone to reach 95% of native performance? Or should I expect no more than 50%?

Things like the system startup time, system reboot time, video processing benchmark, 7-Zip compression benchmark, etc.

IMHO, for people who don't know Proxmox, this is the top-of-mind question when deciding whether to try Proxmox or not.

But all I can find are disk benchmarks and "VM vs VM" comparisons. :confused:
 
Nothing on userbenchmark, nor on this forum. Only the openbenchmarking site has this Windows 2019 Proxmox benchmark (see the PDF version too). I'm trying to cross-compare it to native installs. It does not specify the actual hardware behind it, but says:
  • CPU: 4x Common KVM @ 3.59 GHz (that seems similar to an AMD Ryzen 3 3100 that has 4 cores)
  • Storage: 60 GB VirtIO Disk (no idea what to compare that to)

  • John The Ripper 1.9.0-jumbo-1 (a password cracking tool; more is better). Proxmox 7: MD5 174,570, Blowfish 6,614. Native install: MD5 437,922, Blowfish 5,551. Selected compare: Ryzen 3 3100 for MD5, Ryzen 3 3200G for Blowfish (John The Ripper 1.7.x).
  • BlogBench 1.1 (a filesystem benchmark tool that simulates a realistic load; more is better). Proxmox 7: Read 21,438, Write 616. Native install: Read 480,592, Write 4,262. Selected compare: Median (BlogBench 1.0).
  • 7-Zip Compression 16.02 (file compression speed; more is better). Proxmox 7: 19,572. Native install: 32,636. Selected compare: Ryzen 3 3100 (7-Zip Compression 16.02).
  • OpenSSL 1.0.1g, RSA 4096-bit Performance (data encryption test; more is better). Proxmox 7: 176.7. Native install: 141. Selected compare: AMD Ryzen 5 2600 (OpenSSL 1.0.1g).
  • PHPBench 0.8.1, PHP Benchmark Suite (code execution; more is better). Proxmox 7: 383,072. Native install: 691,894. Selected compare: AMD Ryzen 3 3300X 4-Core (PHPBench 1.1.x).
  • t-test1 2017-01-13 (basic memory allocation; less is better). Proxmox 7: 1 Thread 21.88, 2 Threads 343.97. Native install: 1 Thread 27, 2 Threads 10. Selected compare: Median (t-test1 1.0.x).
  • iPerf 3.7, TCP - parallel 20 (a networking test; less is better). Proxmox 7: 947. Native install: 19. Selected compare: Average (iPerf 3.7).

What I can conclude here is that the overall results varied from poor to terrible, except for the encryption and MD5 tests, where it somehow did very well.

There is a good cross-benchmark from 2014, but it is for Ubuntu on QEMU/KVM. It's not Proxmox, but it is the same QEMU/KVM that Proxmox uses for its VMs. The QEMU/KVM performance losses versus a native install (bare metal) were all in the 5%-10% range, except for video encoding, where QEMU/KVM lost 27% of performance, and the file system, where it lost 40%-50% of performance.

And this user gives some tips on how to set up a Windows QEMU/KVM VM to perform "almost as native", but does not show any performance numbers.
 
  • Like
Reactions: Aubs
In addition to the excellent post from @Dunuin:

Hugepages also make a big difference in performance.

I don't get why you want those numbers. It does not make sense to compare a native Windows with PVE running only one Windows VM. This is stupid and pointless with respect to virtualization. Normally you will have at least two VMs, or 20, or 100, running at the same time on the same hardware, whereas native Windows is just one install, so 20 VMs is 20x more than before. If you want one VM with the best possible performance, just drop the virtualization layer.

You're also comparing apples and oranges. Just take "snapshots" as one feature of any virtualization software: you cannot compare real hardware with that, because real hardware cannot be snapshotted like a VM. Also, with perfect caching and virtual hardware in place, you can exceed the performance of a single machine. E.g. you have a PVE host with two VMs: you can copy files between those two VMs faster than you would be able to copy via your real network card, so in that specific benchmark, two VMs will easily outperform your hardware. The same is true for host caching of VM disk data. You will get huge MB/s and IOPS numbers, because your VM thinks everything has been written, which is not the case; you just wrote to memory.
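To make that VM-to-VM point measurable, here is a minimal iperf3 sketch between two VMs attached to the same host bridge (the IP address is just a placeholder):

Bash:
# on VM 1: start the iperf3 server
iperf3 -s
# on VM 2: run the client against VM 1 for 30 seconds (replace the IP with VM 1's address)
iperf3 -c 192.168.1.10 -t 30
# traffic between two VMs on the same Linux bridge never touches the physical NIC,
# so this can report far more than the real network card could ever deliver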

Also: the fastest possible solution is the worst for virtualization. E.g. if you pass through a GPU, you cannot migrate your VM. If you pass through your CPU via the host setting, you cannot migrate your VM. Having non-migratable machines is a no-go in a virtualized environment, at least in my datacenters.

But to have also some useable information (at least with Linux):
I benchmarked iperf3 recently in this post and the difference was measurable, but not that huge. The openssl benchmark was also in the same ballpark (all Debian Bullseye):

PVE host

Code:
root@proxmox7 ~ > openssl speed
Doing md4 for 3s on 16 size blocks: 15082305 md4's in 2.99s
Doing md4 for 3s on 64 size blocks: 11818654 md4's in 3.00s
Doing md4 for 3s on 256 size blocks: 7060570 md4's in 3.00s
Doing md4 for 3s on 1024 size blocks: 2650970 md4's in 3.00s
Doing md4 for 3s on 8192 size blocks: 389010 md4's in 3.00s
Doing md4 for 3s on 16384 size blocks: 197045 md4's in 3.00s
Doing md5 for 3s on 16 size blocks: 22056558 md5's in 3.00s
Doing md5 for 3s on 64 size blocks: 12751812 md5's in 3.00s
Doing md5 for 3s on 256 size blocks: 5604167 md5's in 3.00s
Doing md5 for 3s on 1024 size blocks: 1727080 md5's in 3.00s
Doing md5 for 3s on 8192 size blocks: 232040 md5's in 3.00s
Doing md5 for 3s on 16384 size blocks: 116375 md5's in 3.00s
Doing hmac(md5) for 3s on 16 size blocks: 9318934 hmac(md5)'s in 3.00s
Doing hmac(md5) for 3s on 64 size blocks: 7017746 hmac(md5)'s in 3.00s
Doing hmac(md5) for 3s on 256 size blocks: 4138154 hmac(md5)'s in 3.00s
Doing hmac(md5) for 3s on 1024 size blocks: 1558290 hmac(md5)'s in 3.00s
Doing hmac(md5) for 3s on 8192 size blocks: 227865 hmac(md5)'s in 3.00s
Doing hmac(md5) for 3s on 16384 size blocks: 115458 hmac(md5)'s in 3.00s

VM

Code:
Doing md4 for 3s on 16 size blocks: 15377713 md4's in 2.99s
Doing md4 for 3s on 64 size blocks: 11702161 md4's in 3.00s
Doing md4 for 3s on 256 size blocks: 6915951 md4's in 3.00s
Doing md4 for 3s on 1024 size blocks: 2649831 md4's in 3.00s
Doing md4 for 3s on 8192 size blocks: 388427 md4's in 3.00s
Doing md4 for 3s on 16384 size blocks: 196536 md4's in 3.00s
Doing md5 for 3s on 16 size blocks: 22329003 md5's in 3.00s
Doing md5 for 3s on 64 size blocks: 12817894 md5's in 2.99s
Doing md5 for 3s on 256 size blocks: 5597967 md5's in 3.00s
Doing md5 for 3s on 1024 size blocks: 1725220 md5's in 3.00s
Doing md5 for 3s on 8192 size blocks: 231785 md5's in 3.00s
Doing md5 for 3s on 16384 size blocks: 116513 md5's in 3.00s
Doing hmac(md5) for 3s on 16 size blocks: 9415037 hmac(md5)'s in 3.00s
Doing hmac(md5) for 3s on 64 size blocks: 7148070 hmac(md5)'s in 3.00s
Doing hmac(md5) for 3s on 256 size blocks: 4145248 hmac(md5)'s in 3.00s
Doing hmac(md5) for 3s on 1024 size blocks: 1555774 hmac(md5)'s in 3.00s
Doing hmac(md5) for 3s on 8192 size blocks: 227428 hmac(md5)'s in 3.00s
Doing hmac(md5) for 3s on 16384 size blocks: 115272 hmac(md5)'s in 3.00s
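Taking just the md5 result on 16-byte blocks from the two runs above as a rough comparison: 22,329,003 / 22,056,558 ≈ 1.01, so for this purely CPU-bound operation the VM is within about 1% of the host (here even marginally ahead, which is within run-to-run noise).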
 
  • Like
Reactions: Perrone and Dunuin
Hey @LnxBil

Thanks for your answer. But no, I'll explain why my question isn't stupid. :)

The main purpose of VMs is to be used in mainframes or in data centers. But I'm talking from the home user's perspective. You may think the home user is not important because individually they have very few resources, but collectively home users are far more numerous. And I believe there are potentially more resources available from home users than from data-center maintainers; I mean money (subscriptions) and development contributions for the Proxmox project. But that's only if Proxmox meets the demands of home users instead of only data centers. And what are the home user demands?

In the past, I used to make my machines dual-boot between Windows and Linux, only because I wanted to get some Linux experience. Actually, this would be a big plus for me professionally. But I ended up never having that experience because it was too impractical to use Linux. First, it took some time to reboot (back then, machines weren't so fast). Then, when finally in Linux, I found myself bored: all of my stuff was stored and configured on the Windows side, so in Linux I didn't have much to do. Also, some of my hardware was unavailable in Linux (mostly because I didn't have the time/will to figure out how to make it work).

Then, running Linux in a virtual machine inside Windows was the next thing I tried. But the virtualization software still didn't give Linux full access to my files (the file system) and it was really slow for serious stuff.

But now there is Proxmox, a game changer. Compared to other VM solutions that run on top of another existing OS, Proxmox is certainly unbeatable in terms of performance. And compared to the dual-booting, bare-metal solution, there is no doubt that Proxmox makes the multi-OS experience much easier and more flexible. But the question is: how much will that cost the home user's primary OS, the one that is used 95% of the time?

I chose to ask about Windows because I thought that would bring more answers. But my actual primary system is Mac, not Windows. And that brings me to this new home user demand. Most people can't afford Mac hardware, which is 5 times more expensive than PC hardware. As you know, there are ways to work around that (OpenCore/Clover) and run macOS on bare-metal PC hardware. However, 99.9% of home users won't be able to do that, either because their PC hardware is not suitable, or because they don't have the necessary knowledge to set it up. And here comes Proxmox again, as a facilitator. And this is where a new trend of VM usage is beginning.

PS: I don't think Apple is angry about that. They are earning more money, because more users are getting to know their system and their brand. Bill Gates also knows that Windows wouldn't be the most used OS if it weren't for the unlicensed copies. Not everyone could afford the license; some just didn't agree to pay for it. But 99% of the public ended up using it, which is what matters most to Microsoft.

Now back to the topic of performance. I'm a newbie here. I see now that "Common KVM" means the VM uses the virtualized kvm64 CPU type instead of the host CPU, right? So none of the benchmarks that I found are a good reflection of Proxmox's best possible performance, right? That could explain why there are some users talking about <5% performance loss with Windows or Mac on Proxmox (and I thought that couldn't be true, as it seemed to contradict the KVM benchmarks).

PS: I'll study what you wrote to better understand things like "cannot migrate your VM", and whether there is any advantage for the home user in using a snapshot instead of Mac's Time Machine, for instance.
 
  • Like
Reactions: LeleKimi
I think you are still missing some virtualization basics/limitations.

1.) VMs are isolated:

1.1.) -> VMs can't access the files of other VMs unless you use a network share like SMB for that. Doesn't make much sense to have a super fast NVMe SSD when all data is accessed using a slow SMB share. And yes, it would be faster with just virtual disks, because then you don't have the network stack overhead, but then VMs can't share files. So "fast" + "shareable between VMs" is a combination that won't work.

1.2.) -> VMs can't make use of any physical hardware unless you pass it through (and that won't work with all hardware).
Let's say you have a GPU in that server and you install a Windows VM. That VM can't access the GPU. There will be no accelerated video encoding/decoding, no 3D acceleration for games, no sound or display output. Everything you see is rendered in software by the CPU, and even simple tasks like playing back a YouTube video might not work without stuttering. You are basically back in the late 80s or early 90s, when computers did everything on the CPU without graphics cards.
To give a VM the ability to use such a GPU you would need to pass that GPU through into the VM. As soon as you pass a device through, it will no longer be usable by the host (server) itself or by any other VM. So in case you wanted to use a Win11 VM and a Mac VM in parallel with video output, you would probably need 3 GPUs in that server: 1 GPU for the host (for example the iGPU) + 1 GPU passed through into the Win VM + 1 GPU passed through into the Mac VM.
For some things you can use virtualized/emulated devices like virtual disks, virtual NICs and so on as a man in the middle between the VM and the physical hardware of the host. But this additional layer of abstraction also adds overhead and might not be as fast as the real hardware.
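For reference, a minimal sketch of what such a GPU passthrough looks like on the PVE side (the PCI address 01:00 and VM ID 100 are placeholders; IOMMU must already be enabled in the BIOS and on the kernel command line, and pcie=1 assumes the q35 machine type):

Bash:
# find the PCI address of the GPU
lspci -nn | grep -i vga
# attach the whole device to VM 100 as a PCIe device and mark it as the primary GPU
qm set 100 --hostpci0 01:00,pcie=1,x-vga=1
# while this VM is running, the GPU is unavailable to the host and to every other VM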

2.) PVE is headless. All you will see on the video output of that server is this black and white text console:
[screenshot of the PVE text console: headless.png]
You can't use that server to output any desktops of VMs or anything like that. That would require you to:
a.) buy a GPU and use PCI passthrough as described in 1.2.
b.) hack your PVE and install a desktop environment on it so it could actually run graphical programs like a web browser or a VNC/RDP/Parsec remote desktop client. Then you could open a remote desktop connection to those VMs (which might be quite laggy and full of artifacts).

3.) VMs can't really share resources like RAM:
Let's say your host has 16GB of RAM. 2GB you want for PVE, 2GB you want to have always "available" (so used for caching or free), and you maybe need 4GB of RAM for the ZFS storage. Now 8 of the 16 GB are used without any VM running. So all that's left is 8GB of RAM, and you need to decide how to assign it to the VMs. You could create 2 VMs with 8GB RAM each, but then you won't be able to run those VMs at the same time. Or you alternatively give both VMs just 4GB of RAM, and then you can run them in parallel.
So in case you don't want to run them in parallel, it doesn't make sense to use virtualization. I would just dual-boot Windows and Mac and be able to use the full 16GB of RAM, instead of virtualizing and being limited to 8GB of RAM.
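As a sketch of that second option with Proxmox's own tooling (the VM IDs 101/102 and the sizes are only examples), each VM gets 4GB and ballooning may shrink it to 2GB when the host is under memory pressure:

Bash:
# 4 GiB per VM, ballooning allowed down to 2 GiB
qm set 101 --memory 4096 --balloon 2048
qm set 102 --memory 4096 --balloon 2048
# both VMs can now run in parallel within the ~8 GiB left after host, cache and ZFS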
 
Last edited:
  • Like
Reactions: Perrone
Good points by @Dunuin, in addition:

I also work on an iMac (2009 and still working with updates!!) and I don't think the hardware is 5x more expensive; that is really not the case for good hardware. Even people like Linus (Tech Tips) tried to beat Apple with their fully-integrated machines like the iMac Pro or Mac Pro and failed to build the same hardware cheaper than Apple in an equally stylish fashion. But that is not the point. If you want to virtualize desktops, you should use a desktop virtualizer, e.g. Parallels, Fusion or VirtualBox (all on the Mac), and you will get near real-time performance, file sharing and full desktop integration. PVE is an enterprise virtualization platform, not suited for desktop virtualization (at least not to the extent that any desktop virtualization program works).
 
  • Like
Reactions: Perrone
Doesn't make much sense to have a super fast NVMe SSD when all data is accessed using a slow SMB share.

Yeah, I wasn't sure about that. Thanks for pointing it out.

So Proxmox can provide good performance through direct hardware access, which is called passthrough, but only for one OS at a time. And it can allocate a greater amount of memory to a given OS, but that's not automatic. Well, I guess the passthrough and memory allocations could be switched from one OS to the other quickly using a script, but the OSes involved would have to be shut down for that script to run. And that would defeat the purpose, killing agility.
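Such a switch script is possible in principle; here is a rough sketch, assuming hypothetical VM IDs 100 (Windows) and 101 (Mac) and a hypothetical GPU at PCI address 01:00:

Bash:
#!/bin/bash
# shut down the Windows VM and wait until it is really off
qm shutdown 100
qm wait 100
# move the GPU and most of the RAM over to the Mac VM, then start it
qm set 100 --delete hostpci0
qm set 101 --hostpci0 01:00,pcie=1 --memory 24576
qm start 101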

I don't think the hardware is 5x more expensive; that is really not the case for good hardware. Even people like Linus (Tech Tips) tried to beat Apple with their fully-integrated machines like the iMac Pro or Mac Pro and failed to build the same hardware cheaper than Apple in an equally stylish fashion.
When I said Mac hardware is 5x more expensive, I wasn't considering an "equally stylish fashion". Among the world's population that consumes computer hardware, only a very small percentage can afford a "stylish fashion" for their desktop. So I was considering the use of the PC's open platform for the end user to build his own PC at the best performance-per-price ratio he can. And this is the hardware that moves the greatest amount of money around the world. Users can use such a "budget fashion" to build a Mac on PC hardware that outperforms a real Mac mini or an iMac Pro, and the PC will certainly be several times cheaper. And here is proof, with a price and performance benchmark taken from this video (just in case it seems hard to believe).

For those who live in a developed country, that may seem an "unfair" thing to do, but hey, I live in Brazil, where the minimum wage is 10x smaller than in the US/Canada.

If you want to virtualize desktops, you should use a desktop virtualizer, e.g. Parallels, Fusion or VirtualBox (all on the Mac), and you will get near real-time performance, file sharing and full desktop integration.

Ok, so the standard suggested solutions for individual users would be VMware Fusion, Oracle VirtualBox or Parallels Desktop, right?
And I guess Apple's free Boot Camp app could be added to the list too.

And I see file sharing between Windows and Mac can be done in different ways, as explained here.

Yes, that seems to be the best way to go (or at least try): native performance on the Mac with full memory usage (while Windows is not running), and Windows running as a guest OS with a minimal performance penalty but less memory available. So thanks for pointing this out. The most productive setting should be this one:

[attached diagram: 1649284599831.png]

A downside is that I won't be able to use ZFS. There is the OpenZFS on OS X project, but it's not ready for the latest Mac versions and won't work for Windows. I wish I could find another alternative to the "special device" feature of ZFS.

One thought I have is... What will happen if both OSs are accessing a storage unit at the same time, both with write access?
 
One thought I have is... What will happen if both OSs are accessing a storage unit at the same time, both with write access?
Shared storage only works at the file level. If you have block storage and mount it in two OSs at the same time, that will corrupt it.
If you want ZFS and shared storage between VMs, you could build a dedicated NAS with, for example, TrueNAS. You could then use iSCSI for your OS and programs and NFS/SMB shares for your data. That way you wouldn't need a single local HDD/SSD for your Mac machine at all, and everything would be backed by ZFS. But of course, that won't be fast and can't be compared to a local NVMe SSD.
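For the data side of such a setup, a minimal sketch of mounting NAS shares from the Mac (the server name, share names and paths are placeholders):

Bash:
# SMB share for documents/media
mkdir -p ~/nas-data
mount_smbfs //user@truenas.local/data ~/nas-data
# or an NFS export, if the NAS provides one (resvport is usually needed on macOS)
sudo mkdir -p /private/nfs
sudo mount -t nfs -o resvport truenas.local:/mnt/tank/data /private/nfs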
 
Hey, I just found out how to create a real hybrid drive on the Mac, as an alternative to ZFS's "special device".

Bash:
# combine the SSD and the HDD into one CoreStorage logical volume group named "Fusion"
diskutil cs create Fusion disk0 disk1
# note the UUID of the new logical volume group
diskutil cs list
# create a journaled HFS+ volume spanning the whole group
diskutil coreStorage createVolume <hexadecimalIdentifier> jhfs+ "Name Of Drive" <size_or_100%>

Thanks for this video, which shows the step-by-step process to create an Apple Fusion Drive with custom hardware.

As explained in this other video, the Fusion Drive technology even switches data back and forth to make sure most frequently accessed data is in the fast drive. I'll try this with NVMe + HDD.

And then this hybrid drive will be available read-only on a Windows OS that is installed through Boot Camp.
Or write access can be obtained with the paid Paragon HFS+ for Windows driver, which seems to work, according to users.
There is also Mediafour's paid MacDrive driver.
And Parallels' paid solution already gives read/write access.
I'd have to test whether those solutions recognize the Fusion Drive and handle it properly.
 
The nicest thing is that each OS will be bootable by itself to run standalone (using Clover), or can run as a guest OS under Mac.
So this is the current plan:


[attached diagram: 1649297903507.png]

It seems like a Clover dual boot could also be added to boot natively into each OS. But then I don't know if the fused drive will be accessible. Perhaps yes; Linux can already write to HFS+ if journaling is disabled.
 
Windows 10 in a VM on my home Proxmox install, if I am being utterly honest, feels as snappy as my bare-metal Windows on a 9900K. The Proxmox host currently has a 5600G.

I do disable all CPU mitigations on my Proxmox host; they are a waste of time for my usage.
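For anyone who wants to try the same, on a GRUB-booted PVE host this usually comes down to something like the following (trading security for speed at your own risk; hosts booted via systemd-boot use /etc/kernel/cmdline and proxmox-boot-tool refresh instead):

Bash:
# add mitigations=off to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
nano /etc/default/grub
update-grub
reboot
# verify after the reboot
grep . /sys/devices/system/cpu/vulnerabilities/*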

It is most definitely faster than bare metal on my laptop.

I have a Novabench tool that I have run throughout the history of the machine.

There are people using Proxmox passing through discrete GPUs and getting circa 90% of bare metal in games. The biggest mistake people make is using the default Proxmox settings; the defaults are awful. Don't use the LSI controller, don't use kvm64, lol. Even qemu64 is much faster than kvm64, but generally you should use the host CPU type, and virtio for everything else (the EPYC CPU type is OK for Ryzen chips).
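As an illustration of those non-default settings (a sketch only; the VM ID, storage name, MAC address and disk size are made up), the relevant lines in /etc/pve/qemu-server/<vmid>.conf would look roughly like this:

Code:
# CPU: expose the host CPU model instead of the default kvm64
cpu: host
# disk: VirtIO SCSI with an iothread instead of the (default) LSI controller emulation
scsihw: virtio-scsi-single
scsi0: local-zfs:vm-100-disk-0,iothread=1,discard=on,size=64G
# network: VirtIO NIC instead of an emulated e1000
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0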

Quoting myself from another forum.

Guys, a little history lesson on my hypervisor/NAS rig. Throughout its life I have used one consistent benchmarking tool called Novabench, which gives a CPU score (it also tests RAM, whose performance has skyrocketed over time, but here I'm just concentrating on the CPU).

All except the 5600G used host CPU passthrough.

It started off as an i5 750 overclocked to 3.6 GHz, a power-hungry beast, running ESXi.
By the time it was retired the CPU benched at a score of 307.

I then upgraded the platform to AM4, an ASRock B450 Pro4 board, and a 2600X CPU.
Still on ESXi it was scoring circa 420 with stock BIOS settings and stock ESXi settings, 380 with CPB off (the daily config), and around 440 with an aggressive power config in ESXi and CPB on. I did suspect ESXi had a very unoptimised scheduler for Ryzen and wasn't convinced CPB was working properly, but ESXi doesn't let you view CPU stats, so it was never confirmed.
VMs felt noticeably more responsive in tasks, so it was a visibly faster platform.

I then switched over to Proxmox keeping the same hardware. The advantages were really that Proxmox was much better for me in the datacentre, and I expected it would have better scheduler support and the ability to disable all CPU mitigations.

It seems I never did a Novabench run with CPB enabled on the 2600X, so there is no best-case score.
But with CPU mitigations off, in Proxmox, and CPB off, the CPU averaged a whopping 670, considerably faster than the ESXi setup.
At this point I felt the Windows VM was easily faster than my laptop bare metal.

Now fast forward to today: not long ago I switched out the CPU for a 5600G, and the RAM is running a little faster, 3000 vs 2667 MHz. Note I am using the EPYC CPU type instead of 'host', as QEMU can't handle all the new Zen 3 instructions well.
The Novabench CPU score with CPB off is ......... circa 900.
The Windows VM feels faster than my 9900K bare metal, lol. Granted, it has less stuff installed bogging it down, but the 5600G is a crazy good chip. AMD really have come a long way.
 
Last edited:
  • Like
Reactions: Perrone
And I guess Apple's free Boot Camp app could be added to the list too.
No, it's not a virtualization option, it is just booting another OS (dual-boot). In my own experience, if you're running a Mac, you don't dual-boot even if you can. I tried it in the past, but it is not worth the hassle in my day-to-day stuff. My Mac is always on standby, and once you're used to having an instant working environment, you stick with it. I need at least 2 minutes to boot into another OS (I also run PVE on my iMac on an internal ZFS pool), but I seldom boot it. I only reboot my Mac after installing security updates.

So go with a virtualized solution and just share your files over the guest-integration filesystem that all these virtualization products supply, or put them on another machine.

A downside is that I won't be able to use ZFS. There is the OpenZFS on OS X project, but it's not ready for the latest Mac versions and won't work for Windows. I wish I could find another alternative to the "special device" feature of ZFS.
The driver also works perfectly on the newest version of macOS; I use it all the time. Yet you won't be able to use it as a boot device, so you need an APFS volume for installing your OS. I can use my ZFS pool in PVE, but Windows is another story. There is an experimental ZFS driver, but I do not know its quality.

I have collected a lot of hardware over the last 4 decades, so I'm using a system for each OS and store all my files on a self-built PVE-based NAS or on ZFS disks. I can recommend buying used enterprise hardware, which is so cheap and yields a lot of bang for the buck - even for macOS. I tried virtualizing it on my 2009 Xeon machines and it was very, very fast with GPU passthrough of a supported NVIDIA workstation card.
 
  • Like
Reactions: Perrone
@LnxBil, yes, thanks for noticing that Boot Camp is not virtualization. It just partitions the disks for the Windows install, then installs the necessary drivers on Windows so that the Mac hardware can be accessed, including a read-only HFS+ driver.

The perfect scenario for me (and probably for most home users) is to:
  1. Have each OS installed in a separate partition and be able to boot into any of them for full performance,
  2. and at the same time be able to run each OS through a VM on top of my main OS (which will be the Mac in my case) with minimal performance loss,
  3. and create a hybrid drive which can be accessed from any of those OSs with read/write access.
And for the VM solutions:
  • Parallels Desktop 17: $ 80
  • VMware Fusion 12 Pro: $ 200
  • VirtualBox: Free
It seems like the paid ones (Parallels and VMware) meet all the requirements, while VirtualBox lags behind in performance. As explained here, Parallels and VMware run close to native performance, with no more than a 5% performance penalty. And Parallels copies files faster than VMware Fusion:
[attached chart: 1649361900870.png]
It's easy for those VM solutions to provide access to APFS and HFS+ because they just virtualize devices which are already being managed by macOS. However, when booting directly into Windows or Linux, I don't think the Fusion Drive will work at all (not even read access). Anyway, these are some options:

  • $20 for Paragon HFS+ for Windows (but doesn't support Fusion Drive)
  • native HFS+ support in Linux (but probably not for the Fusion Drive)

  • $49 for Paragon APFS for Windows (3 PC license)
  • $40 for Paragon APFS for Linux
  • free APFS-fuse for Linux

Btw, the activity of both devices coupled to a Fusion Drive can be monitored with: iostat -d disk0 disk1

It would be really good to have a multi-platform hybrid drive solution. But it seems none is ready yet.
 
I was investigating performance improvements, found this topic, and decided to point out that the original question is not stupid (about virtualizing a Windows VM while only one instance is running).

I'm using hypervisors to do one single thing: accumulate all the IOPS/FPU/ALU/GPU power in one machine (especially if it can be out of my room during summer).
So the goal is not to share resources between VMs at the same time; it's about the flexibility to put those resources to a different use on demand, which might require different OSes and hosts.

And my requirements for the infrastructure are:
- free, or a single lifetime purchase for reasonable price (must)
- x86-64 support (must)
- No vCPU restriction or CPU power restriction (must)
- GPU Passthrough available (must)
- Nested Virtualization available (must)
- Good networking (must)
- Snapshotting features, ability to freeze, clone states of your work environments (must)
- Ability to temporarily move your environment to a laptop for portability (must)

Just saying how Proxmox can be used, that's all.

Well, here is my user input on performance: currently on a Ryzen 5950X, the average bare-metal Cinebench R23 score on the internet varies from 26,800 to 28,600; in a single running VM I'm getting 24,600.
Assuming the best-case scenario for virtualization, 24,600 / 26,800 ≈ 0.92, so about an 8% CPU performance loss.
The worst-case scenario is 24,600 / 28,600 ≈ 0.86, which is a 14% CPU performance loss.
The config is: Proxmox with mitigations on, CPU type host, Win 10 updated as of May 2022 + the Cinebench R23 executable.


I wouldn't rely much on VM vs. bare-metal benchmarks made before 2018, when Meltdown/Spectre were discovered; the world has changed since then, as have CPU architectures.

Sad about the performance loss, but still, only hypervisors can do the job.
 
Last edited:
For what it is worth, I rushed out (maybe foolishly) implementing Proxmox; I am a newbie here. I am using Proxmox 7.2 with 6 Windows Server virtual machines, which are mainly DCs and file servers. The most intense one is a security camera system with over 60 cameras streaming real-time video and audio, collectively 300 Mbit/s 24/7. It has two iSCSI block disks attached, 1 TB and 100 GB, and writes 40 GB per day to these disks. This virtual machine has 16 GB of memory and 7 cores assigned, and is pretty much running at 90% of the assigned RAM and 70% of the CPU.

The users are never supposed to connect to the server because they have workstations; however, out of convenience they love to view all the cameras on the server. I try to discourage it, but they do it anyway. So far it bears up pretty well under the abuse and I do not see any significant performance degradation versus bare metal.

The host machine has 48 cores (48 x AMD EPYC 7402P 24-Core Processor) and 128 GB of RAM, but I have not assigned any more resources to the virtual machine because I think this application would just use all the resources assigned to it. I also have some iperf3 tests; I am pretty much able to transfer close to 1 GB/s across the network to the Windows virtual machines.