Worth swapping VMware PVSCSI to VirtIO SCSI?

Proximate

Member
Feb 13, 2022
I imported a bunch of VMs from ESXi and noticed they all have the VMware SCSI controller.
I've been told this is not efficient and that I should change them to VirtIO, but how?

I'm wondering what the steps are to convert the controllers, as I need these VMs to be as efficient as possible, of course.
More importantly, is there a compelling reason for doing so, i.e. a real, worthwhile speed/performance improvement?
 
I should have mentioned that they are all CentOS 8, but I guess the question is also: is there a benefit, a speed/performance improvement that makes this worth doing?
 

Hello, sorry for necroposting, but I have the exact same question and I can't find an answer online.

1- As Proximate asked, is there a real benefit to replacing the VMware PVSCSI controller on a Rocky Linux 8 (or similar) VM with VirtIO SCSI?
2- If yes, and I think the answer is YES, how do you do it? I found out how to do it with Windows (and it works), and it also works perfectly if I install a fresh Rocky Linux, but if I try to replace the controller on a converted VM, it doesn't boot and stops with the attached error during the boot process. I have no PATA device in the VM, but this happens with all the Rocky Linux VMs I'm converting. No error with Debian; there I can replace the controller without issue.
I already removed open-vm-tools and installed qemu-guest-agent; no difference.
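
For the record, the guest-tools swap was roughly this (package and service names as in the standard Rocky 8 repos):

Code:
dnf remove open-vm-tools
dnf install qemu-guest-agent
systemctl enable --now qemu-guest-agent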

(With the VMware PVSCSI controller set, everything works.)
(The VMs were moved from ESXi 6.7 to the latest Proxmox VE 8.)

Thanks.
 

Attachments

  • virtio.JPG (boot error screenshot)
And another problem: on a just-migrated Windows Server 2016 VM, the system boots only if I select VMware PVSCSI as the controller; anything else (IDE, SATA, VirtIO) gives a BSOD. I can't apply the usual trick of booting from IDE and adding a small VirtIO disk to get the drivers loaded. Any ideas?
The VM has an EFI BIOS (maybe this is why I can't boot from IDE?) and the VirtIO drivers are already installed via the exe installer.

In my opinion the VMware migration topic is not well covered in the guide.

Thanks
 
Does no one know how to solve these problems? We have almost completed our datacenter migration to Proxmox, but many servers still use PVSCSI instead of VirtIO. Thanks.
 
Did you install the latest Windows VirtIO (SCSI) drivers before changing the virtual hardware from PVSCSI to VirtIO SCSI (Single)?
 
Yes.
The only Server 2016 where this worked is one where we use the default SeaBIOS instead of UEFI.
In our typical configuration with OVMF (and an EFI disk added), the VM boots only with PVSCSI. Any other configuration (like the one in the attached screenshot) means a blue screen: "inaccessible boot device".
 

Attachments

  • proxmox bsod.JPG (blue screen screenshot)
I finally solved it, so I'm going to update this thread for future readers:

- change the boot disk to IDE
- on startup, press F8 before the blue screen -> safe mode
- the VM should boot fine in safe mode
- remove the device shown in the screenshot below (not sure if this is actually the cause of the problem, to be honest, but this is what I did)

(screenshot: problema pannello.JPG)

- reboot normally
- the VM should boot fine in normal mode
- shut down
- add a small secondary disk on VirtIO SCSI
- boot -> the disk shows up in Computer Management -> shut down
- detach and remove the small VirtIO disk
- detach the main primary disk and reattach it as VirtIO SCSI (the same disk steps done from the host CLI are sketched below)
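
For reference, the disk juggling above can also be done with qm on the Proxmox host. This is only a rough sketch: the VM ID (100), storage name (local-lvm) and volume name are made-up examples, so check "qm config <vmid>" for the real ones:

Code:
# switch the VM to the VirtIO SCSI single controller type (example VMID 100)
qm set 100 --scsihw virtio-scsi-single

# add a small temporary 1 GiB disk on the SCSI bus so Windows binds the vioscsi driver
qm set 100 --scsi1 local-lvm:1

# ... boot Windows once, confirm the disk appears in Computer Management, shut down ...

# detach the temporary disk again (it becomes an unusedN entry that can then be removed)
qm set 100 --delete scsi1

# detach the boot disk from its old bus (assumed ide0 here) and reattach it as scsi0
qm set 100 --delete ide0
qm set 100 --scsi0 local-lvm:vm-100-disk-0
qm set 100 --boot order=scsi0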


Still not sure about the performance improvements, but at this point I just have faith in those who say VirtIO is best.
bye
 
After spending way too much time on this, I gave up and just rebuilt all of the Windows machines that could not be converted.
For such a great product, you'd think there would be a simpler way, but there isn't. Searching the net and the forums, you find countless people hitting this problem and trying countless suggestions that just waste hours.

There needs to be a simpler way.
 
Hello, sorry for necroposting, but I have the exact same question and I can't find an answer online.

1- As Proximate asked, is there a real benefit to replacing the VMware PVSCSI controller on a Rocky Linux 8 (or similar) VM with VirtIO SCSI?
2- If yes, and I think the answer is YES, how do you do it? I found out how to do it with Windows (and it works), and it also works perfectly if I install a fresh Rocky Linux, but if I try to replace the controller on a converted VM, it doesn't boot and stops with the attached error during the boot process. I have no PATA device in the VM, but this happens with all the Rocky Linux VMs I'm converting. No error with Debian; there I can replace the controller without issue.
I already removed open-vm-tools and installed qemu-guest-agent; no difference.

(With the VMware PVSCSI controller set, everything works.)
(The VMs were moved from ESXi 6.7 to the latest Proxmox VE 8.)

Thanks.
I didn't get the exact same error, but a similar problem...
My fix: boot in rescue mode with the desired controller, then run "yum reinstall kernel".

Just doing some testing... I think I'll run some benchmarks and see if it's worth the bother.
 
BTW: if the reinstall runs fast, it probably didn't relink (i.e., regenerate the initramfs). You might have to remove the old kernel first and then reinstall it.
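
If the kernel reinstall still doesn't pull the right drivers in, another option (my sketch, assuming a RHEL/Rocky 8 guest booted into rescue mode with the installed system mounted at /mnt/sysroot) is to rebuild the initramfs with the virtio storage modules forced in, since the initramfs of a converted VM may only carry the VMware drivers:

Code:
# chroot into the installed system from the rescue shell
chroot /mnt/sysroot

# list installed kernel versions (uname -r would report the rescue kernel instead)
ls /lib/modules/

# rebuild the initramfs for that kernel with the virtio storage drivers included
# (4.18.0-513.el8.x86_64 is only an example version -- use the one listed above)
dracut -f --add-drivers "virtio_blk virtio_scsi virtio_pci" \
    /boot/initramfs-4.18.0-513.el8.x86_64.img 4.18.0-513.el8.x86_64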

Looks like it's worth it. PVSCSI works great as an emulated controller for getting up and running, but its performance is terrible compared to running under VMware...
With virtio:
BS=512 ReadWriteRatio=0 IODEPTH=4 RW= write: IOPS=90.8k, BW=44.4MiB/s (46.5MB/s)(26.0GiB/600002msec); 0 zone resets
BS=512 ReadWriteRatio=100 IODEPTH=4 RW= read: IOPS=106k, BW=51.9MiB/s (54.4MB/s)(30.4GiB/600002msec)
BS=16384 ReadWriteRatio=0 IODEPTH=4 RW= write: IOPS=66.3k, BW=1037MiB/s (1087MB/s)(607GiB/600001msec); 0 zone resets
BS=16384 ReadWriteRatio=100 IODEPTH=4 RW= read: IOPS=79.8k, BW=1246MiB/s (1307MB/s)(730GiB/600001msec)
BS=65536 ReadWriteRatio=0 IODEPTH=4 RW= write: IOPS=23.5k, BW=1471MiB/s (1543MB/s)(862GiB/600004msec); 0 zone resets
BS=65536 ReadWriteRatio=100 IODEPTH=4 RW= read: IOPS=35.4k, BW=2214MiB/s (2321MB/s)(1297GiB/600002msec)
BS=512 ReadWriteRatio=0 IODEPTH=32 RW= write: IOPS=94.6k, BW=46.2MiB/s (48.4MB/s)(27.1GiB/600004msec); 0 zone resets
BS=512 ReadWriteRatio=100 IODEPTH=32 RW= read: IOPS=107k, BW=52.3MiB/s (54.8MB/s)(30.6GiB/600004msec)
BS=16384 ReadWriteRatio=0 IODEPTH=32 RW= write: IOPS=67.2k, BW=1050MiB/s (1101MB/s)(615GiB/600004msec); 0 zone resets
BS=16384 ReadWriteRatio=100 IODEPTH=32 RW= read: IOPS=80.8k, BW=1262MiB/s (1323MB/s)(739GiB/600004msec)
BS=65536 ReadWriteRatio=0 IODEPTH=32 RW= write: IOPS=23.2k, BW=1447MiB/s (1517MB/s)(848GiB/600014msec); 0 zone resets
BS=65536 ReadWriteRatio=100 IODEPTH=32 RW= read: IOPS=36.9k, BW=2304MiB/s (2415MB/s)(1350GiB/600009msec)

With vmware driver:
BS=512 ReadWriteRatio=0 IODEPTH=4 RW= write: IOPS=19.0k, BW=9493KiB/s (9721kB/s)(5563MiB/600001msec); 0 zone resets
BS=512 ReadWriteRatio=100 IODEPTH=4 RW= read: IOPS=22.4k, BW=11.0MiB/s (11.5MB/s)(6573MiB/600001msec)
BS=16384 ReadWriteRatio=0 IODEPTH=4 RW= write: IOPS=17.5k, BW=274MiB/s (287MB/s)(160GiB/600003msec); 0 zone resets
BS=16384 ReadWriteRatio=100 IODEPTH=4 RW= read: IOPS=20.8k, BW=325MiB/s (341MB/s)(190GiB/600001msec)
BS=65536 ReadWriteRatio=0 IODEPTH=4 RW= write: IOPS=15.2k, BW=951MiB/s (997MB/s)(557GiB/600002msec); 0 zone resets
BS=65536 ReadWriteRatio=100 IODEPTH=4 RW= read: IOPS=17.8k, BW=1111MiB/s (1165MB/s)(651GiB/600001msec)
BS=512 ReadWriteRatio=0 IODEPTH=32 RW= write: IOPS=18.9k, BW=9452KiB/s (9679kB/s)(5538MiB/600002msec); 0 zone resets
BS=512 ReadWriteRatio=100 IODEPTH=32 RW= read: IOPS=22.6k, BW=11.1MiB/s (11.6MB/s)(6635MiB/600002msec)
BS=16384 ReadWriteRatio=0 IODEPTH=32 RW= write: IOPS=17.6k, BW=275MiB/s (289MB/s)(161GiB/600001msec); 0 zone resets
BS=16384 ReadWriteRatio=100 IODEPTH=32 RW= read: IOPS=20.9k, BW=327MiB/s (343MB/s)(192GiB/600003msec)
BS=65536 ReadWriteRatio=0 IODEPTH=32 RW= write: IOPS=15.8k, BW=989MiB/s (1037MB/s)(579GiB/600003msec); 0 zone resets
BS=65536 ReadWriteRatio=100 IODEPTH=32 RW= read: IOPS=19.2k, BW=1203MiB/s (1262MB/s)(705GiB/600003msec)
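
For anyone wanting to reproduce a similar run: the lines above are fio output, and an invocation along these lines produces results in the same format (the target device and job name are placeholders, and the exact job options used here aren't shown):

Code:
# 512-byte sequential writes at queue depth 4 for 600 s; swap --rw=write for --rw=read,
# and vary --bs / --iodepth to match the other rows
fio --name=bench --filename=/dev/sdb --direct=1 --ioengine=libaio \
    --rw=write --bs=512 --iodepth=4 --numjobs=1 \
    --time_based --runtime=600 --group_reporting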

I think it's worth it if there is at least a moderate level of disk activity...
The above tests were on an R730 recently obtained from eBay for the home lab, with an old Intel NVMe PCIe card running as LVM-thin.

I didn't bother monitoring CPU utilization during the tests, as the IOPS are poor enough that it would likely be moot in most cases... (i.e., you want higher IOPS even at a higher CPU cost per IOP).

I do get much better performance with PVSCSI and its driver when running under VMware itself... so the bottleneck is the emulation of that hardware rather than the driver.
 
