Virtual disk much bigger on real disk

Aug 5, 2018
I set up a Proxmox 6.2-10 machine. It consists of a few SSDs and 4x 10TB drives. I created a ZFS raid-z2 pool (named hdd), which gives me a usable 17 TB. So far so good.

One of the VMs needs access to a 10TB volume. Therefore, I add a hard drive to it, assign the ZFS storage pool hdd and set the size to 10TB. The disk is created successfully and I can access it in the guest. However, the 10TB VM disk image takes up 15TB on the actual ZFS storage. Why is this? I didn't notice similar behavior with smaller sizes.

I can only assume that this is due to some overhead such as paging or index data. Is there any workaround for this? I need to be able to assign a 25 TB virtual drive to another guest.
One solution I see is creating multiple smaller virtual disks and then using ZFS in the guest to build a pool out of them, but the guest's ZFS layer would add a lot of overhead without doing any good.
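
On raid-z2 this kind of inflation is commonly caused by the zvol's small default volblocksize (8K at the time), which costs extra parity and padding sectors per block, plus the refreservation on a thick-provisioned zvol. A quick way to check where the space goes (the dataset name vm-100-disk-0 is only a placeholder):

Code:
# run on the Proxmox host; replace vm-100-disk-0 with the actual zvol name
zfs get volsize,volblocksize,used,referenced,refreservation hdd/vm-100-disk-0
# compare raw vs. usable accounting for the whole pool
zpool list hdd
zfs list -o space hdd

If the volblocksize turns out to be small, recreating the disk on a storage entry with a larger block size should bring the usage much closer to the nominal 10TB.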
 
Is there a (good) way that would allow me to keep the raid-z2 pool but "pass" it directly to my guest (FreeBSD)? That way the guest could access the pool without needing the virtual disk image.

I don't necessarily need other VMs to access the same pool - they have separate storage available.
 
Is there a (good) way that would allow me to keep the raid-z2 pool but "pass" it directly to my guest (FreeBSD)? That way the guest could access the pool without needing the virtual disk image.

I don't necessarily need other VMs to access the same pool - they have separate storage available.

Only by passing the disks of your pool through to FreeBSD. ZFS itself is a local filesystem and cannot be managed by two entities.
 
Only by passing the disks of your pool through to FreeBSD. ZFS itself is a local filesystem and cannot be managed by two entities.
How would I do this? The Hardware tab of the VM does not seem to allow me to pass through individual physical drives.
I won't be able to pass the entire storage controller through as a PCIe device, since other storage devices are connected to the same controller.

How about using Ceph instead? This would be a single-node setup, but my initial research suggests that Ceph is perfectly fine running with just one node.

Are there any practical implications? Does anybody have experience running a single-node Ceph "cluster" with Proxmox?
I have been running a three-node Ceph cluster for about two years now and it has been a good experience so far.
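If you do experiment with a single node, the main changes are relaxing the CRUSH failure domain from host to OSD and lowering the replica counts; a rough sketch (the pool name rbd and the rule name are placeholders):

Code:
# place replicas on different OSDs instead of different hosts
ceph osd crush rule create-replicated replicate-by-osd default osd
ceph osd pool set rbd crush_rule replicate-by-osd
# with a single node, reduce the replica requirements
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1

Keep in mind that a single node gives you none of Ceph's redundancy across machines.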
 
How would I do this? The Hardware tab of the VM does not seem to allow me to pass through individual physical drives.

Just set e.g. the following in the VM's config file (/etc/pve/qemu-server/<vmid>.conf):

Code:
sata0: /dev/sdc
sata1: /dev/sdd
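
The same can be done with qm on the CLI, and using the stable /dev/disk/by-id paths avoids surprises if device names change between boots; a sketch, where the VM ID 100 and the disk serials are placeholders:

Code:
# pass two whole disks to VM 100 by their persistent IDs
qm set 100 -sata0 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL_1
qm set 100 -sata1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL_2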

I won't be able to pass the entire storage controller through as a PCIe device, since other storage devices are connected to the same controller.

That would indeed be the better option, but the first one also works - kind of.

How about using Ceph instead? This would be a single-node setup, but my initial research suggests that Ceph is perfectly fine running with just one node.

And what would be the benefit of that? If you're not accessing it from two clients, it is as good as local storage, and it doesn't solve any problems on the FreeBSD front, does it? AFAIK, FreeBSD does not have Ceph.
 
