oVirt Replacement with Proxmox

TimmiORG

Oct 23, 2023
Hi Community,

we are planning to replace our oVirt setup (7 hosts, 330 VMs) with Proxmox.
The VMs are located on iSCSI storage which is shared among the hosts.

From the documentation it looks like Proxmox cannot use iSCSI LUNs in the same way oVirt does, meaning features such as thin provisioning and snapshots do not work on shared iSCSI storage.

At least the snapshot feature is mandatory for us, and we also want to reuse our existing infrastructure (HPE MSA 2050).

My current understanding is that Proxmox does not support this. Do you know if there are plans to change this in the future, or do you have any recommendations on how to proceed?

Best regards
Timmi
 
Hi @ness1602,

thank you for your response.
I've never used Ceph, but I guess it is a kind of local storage in each host that replicates the data among the hosts to make live migration possible, right?
Can I make use of my SAN in that case?
 
Ceph is distributed storage: every host has local disks which are synced to the other nodes (usually over a 10 Gb/s network). Unfortunately, a SAN cannot be used in any meaningful way.
 
Hm...
This would require quite an investment to change the hardware.
But OK, thank you for your answer.
 
From the documentation it looks like Proxmox cannot use iSCSI LUNs in the same way oVirt does, meaning features such as thin provisioning and snapshots do not work on shared iSCSI storage.
You are correct. oVirt's storage management of shared iSCSI/FC LUNs is completely different from PVE's, and because of that the two provide different native feature support for iSCSI/block storage.
At least the snapshot feature is mandatory for us, and we also want to reuse our existing infrastructure (HPE MSA 2050).
At this time, and for the foreseeable future, you will not have feature parity in iSCSI block storage support between the two hypervisors. If you must reuse your existing SAN infrastructure and migrate to PVE, your primary logical choice is to implement a clustered filesystem, such as OCFS2, on top of the shared LUN.
This solution is not directly supported by PVE, and your organization will be responsible for its configuration, support, and maintenance.
You may also find that the resulting layering (SAN > OCFS2 > qcow2 > guest) does not meet your performance expectations.
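For illustration, a shared-directory definition in /etc/pve/storage.cfg could look roughly like this (a sketch only, assuming the cluster filesystem is already configured and mounted at /mnt/ocfs2 on every node; the storage name and mount point are made up):

    dir: san-ocfs2
            path /mnt/ocfs2
            content images
            shared 1

With qcow2 disk images on such a directory you get snapshots and thin provisioning back, at the cost of the extra layering mentioned above.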


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi @bbgeek17,
thank you for your answer.
I'm really not sure how to proceed.
I guess we will need to perform more research to see if there are other options out there.
Losing the SAN investment would not make much sense.

Just a quick question about Ceph: I understood that by default the data is written three times (assuming a cluster with three nodes).
That means I need three times the disk capacity of my target.
Is this more or less correct?
 
I'm really not sure how to proceed.
I guess we will need to perform more research to see if there are other options out there.
Losing the SAN investment would not make much sense.
Offloading the thin-provisioning/snapshot logic into the storage is probably the best solution. This is why we created our own PVE storage plugin that uses the native PVE API. Unfortunately for you, the chances that someone will create an enterprise-grade, supported plugin for the MSA are pretty low.
You are essentially boxed in by your existing infrastructure and requirements to a very limited set of choices.
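To illustrate what "offloading into the storage" means in practice, here is a conceptual sketch (entirely hypothetical endpoint and field names, not the MSA's API or ours): instead of taking a qcow2 snapshot on a filesystem, the plugin asks the array to snapshot the volume itself.

    # Conceptual sketch: a storage plugin delegating a disk snapshot to
    # the array over a hypothetical REST API instead of using qcow2.
    import requests

    ARRAY_API = "https://san.example.com/api"  # made-up endpoint

    def snapshot_volume(volume_id: str, snap_name: str) -> None:
        # The array creates a thin, instant snapshot of the LUN;
        # the hypervisor never touches the data path.
        r = requests.post(f"{ARRAY_API}/volumes/{volume_id}/snapshots",
                          json={"name": snap_name}, timeout=10)
        r.raise_for_status()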

Just a quick question about Ceph: I understood that by default the data is written three times (assuming a cluster with three nodes).
That means I need three times the disk capacity of my target.
Is this more or less correct?
I am not an expert on Ceph, but I do think you are correct. There are a few calculators available online that can help as well: https://florian.ca/ceph-calculator/


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Yes, with Ceph's default 3/2 replication you can count on only about 30% of the raw capacity being available (realistically even less, because you don't want to fill your pools above 80%).
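Back-of-the-envelope, with made-up numbers:

    # Usable capacity of a size=3 replicated Ceph pool, kept below 80%
    # full (rough estimate; metadata and imbalance cost a bit more).
    raw_tb = 3 * 10                # e.g. 3 nodes with 10 TB of OSDs each
    usable_tb = raw_tb / 3 * 0.8   # 3 replicas, 80% fill limit
    print(f"~{usable_tb:.1f} TB usable of {raw_tb} TB raw")  # ~8.0 of 30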
 
You are correct. oVirt's storage management of shared iSCSI/FC LUNs is completely different from PVE's, and because of that the two provide different native feature support for iSCSI/block storage.

Hi, sorry to bump this old thread.
It seems that oVirt puts qcow2 on top of a raw LUN to manage snapshots, and has a daemon to extend the LUN dynamically (for thin provisioning and for qcow2 snapshot growth).
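Something like this watermark loop, I think. A rough sketch with made-up names and thresholds; as far as I know the real logic lives in oVirt's VDSM, which reads the qcow2 high-water mark from QEMU and extends logical volumes carved out of the shared LUN:

    # Sketch of watermark-based thin provisioning on shared block storage.
    import subprocess
    import time

    WATERMARK_GB = 1   # extend when free room in the LV drops below this
    CHUNK_GB = 2       # grow the LV by this much each time

    def lv_size_gb(lv: str) -> float:
        out = subprocess.check_output(
            ["lvs", "--noheadings", "--units", "g", "--nosuffix",
             "-o", "lv_size", lv], text=True)
        return float(out.strip())

    def extend_lv(lv: str, gb: int) -> None:
        subprocess.check_call(["lvextend", "-L", f"+{gb}G", lv])

    def watch(lv: str, allocated_gb) -> None:
        # allocated_gb: callable returning the qcow2 high-water mark in
        # GB (oVirt gets this from QEMU block stats; assumed here).
        while True:
            if lv_size_gb(lv) - allocated_gb(lv) < WATERMARK_GB:
                extend_lv(lv, CHUNK_GB)
            time.sleep(2)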

I don't know whether it would be worthwhile to integrate this into Proxmox.
(I have a lot of customers with VMware setups currently migrating, and many of them have a block SAN without any API, with VMFS on top.)

I have seen 2-3 forum users using GFS2 as the filesystem with success.
(Personally I used OCFS2 10 years ago, but I remember having a lot of locking problems on node failure.)


I'll try to implement the different methods in the coming months to see which way works best.
 
It seems that oVirt puts qcow2 on top of a raw LUN to manage snapshots, and has a daemon to extend the LUN dynamically (for thin provisioning and for qcow2 snapshot growth).
Thank you for reporting back. I wondered how they did it. Do they use LVM on top of the LUN, or some API to change the presented iSCSI LUNs?
 
As someone moving away from oVirt, I never liked how they did storage. I get that they have a lot of nice features, but it's overly complicated, which makes for a bad time when you have to fix failures at the storage layer.
 
There are always trade-offs. We looked at oVirt, and its external iSCSI support was very rudimentary, without the ability to write a custom plugin the way PVE allows.
We think the PVE model is much more flexible in that respect. oVirt does claim support for Cinder/OpenStack integration, which would have allowed us to reuse our existing work of integrating with OpenStack.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
