Drive limitations on Proxmox VM? - Thanks in advance.

Southeast

New Member
May 23, 2021
My Proxmox host boots from RAID SSDs.
I have 36 HDDs attached, 14 TB each.

For the VM (FreeNAS, installed on the SSD) I want to add an as-fast-as-possible small RAID0 (ZFS setup) drive spread across the 36 HDDs.
So roughly 100 GB from each drive gives me a ~3.5 TB drive to use as a ZFS stripe for a super fast temp drive for another machine.

On another machine (Ubuntu VM) I want SSD for the OS, map the RAID0 drive from FreeNAS as a temp drive, and add the leftover space on the 36 HDDs (roughly 500 TB) for storage.

FreeNAS is working, but when I try to add drives it limits me to about 30 on VirtIO SCSI; I can add more, but only as VirtIO Block.

I thought VirtIO SCSI could take a nearly unlimited number of drives. If so, what did I do wrong?
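
For reference, the drives end up as entries like this in /etc/pve/qemu-server/<vmid>.conf (storage names, sizes and the VM ID below are just examples, not my real config):

scsihw: virtio-scsi-pci
scsi0: local:100/vm-100-disk-0.qcow2,size=32G
scsi1: hdd01:100/vm-100-disk-1.raw,size=100G
(scsi2 through scsi29 look the same)
scsi30: hdd30:100/vm-100-disk-30.raw,size=100G
virtio0: hdd31:100/vm-100-disk-31.raw,size=100G

scsi0 is the FreeNAS boot disk; somewhere around scsi30 the GUI stops offering VirtIO SCSI and only offers VirtIO Block (virtio0 and up). I assume that index limit is built into Proxmox rather than something I misconfigured, but that is what I'm trying to confirm.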

Can I keep going by adding more VirtIO Block drives to complete the RAID0, plus add what is left of the 36 HDDs? It takes a lot of time to add all these drives only to find out it won't work, so I am asking here: anyone with FreeNAS & ZFS stripe experience willing to help a guy out?

My Proxmox Hardware:
CPU: Intel Xeon 4214R - 24c/48t - 2.4 GHz/3.5 GHz
RAM: 384 GB ECC 2666 MHz
System disks: 2×480 GB SATA SSD
Data disks: 36×14 TB SAS HDD


Thanks in advance.
 
Wouldn't it be more useful to use PCI passthrough to pass 2x 16-port HBAs + 1x 8-port HBA through to your FreeNAS VM? That way your FreeNAS VM could access the 36 HDDs directly/physically and you wouldn't get all the virtualization overhead created by virtio.
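
Very roughly, once IOMMU is working it is one line per HBA (the VM ID and PCI address below are made up, yours will differ):

# find the HBAs on the host
lspci -nn | grep -i 'sas\|lsi'
# hand one of them to the FreeNAS VM
qm set 100 -hostpci0 0000:03:00.0

Repeat with -hostpci1 and -hostpci2 for the other two HBAs.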
 
Thank you for the reply. I'm sure it would; I didn't know it could. I would need step-by-step instructions for that, though. I'm guessing install guest utils on the VM and configure something on the Proxmox side? Very new to all this Proxmox passthrough stuff.
 
There is a tutorial on how to enable IOMMU: https://pve.proxmox.com/wiki/Pci_passthrough
You would also need to buy some HBAs (not RAID cards), verify first that they are supported by FreeBSD/FreeNAS (because they are controlled directly by the guest's kernel), and check whether anyone has confirmed that PCI passthrough works fine for those models. LSI HBAs should work best.
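
In short it is just a kernel parameter plus the vfio modules (this is the Intel variant from memory, the wiki has the full steps):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# then
update-grub
# verify after a reboot
dmesg | grep -e DMAR -e IOMMU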
I'm on an OVH server with software RAID. I have no control over putting specialized equipment in it, and the support is the worst of any provider I have used since 1996. Everything there seems to be virtualized too; it's a mess to set up and all the documentation is outdated. I have some idea of what equipment is used, haven't probed everything, poked around in the BIOS too, but this is all I have to work with.
 
Does anyone else have an answer? I can't swap out hardware; I need to know about this virtio thing.
Thanks
 
If you're on hosted hardware then virtio will probably not be available to you.
 
It's a dedicated server, top of the line, and I have full access, even to the BIOS; only adding equipment beyond what I listed may be problematic. I don't have the knowledge or time to do passthrough. I can add up to about 30 drives on VirtIO SCSI, but it is supposed to handle more than that. If it can add up to 30, doesn't that mean virtio is available, just not giving me as much as I need? Bad config?
 
You can pass the drives through to a VM like FreeNAS (TrueNAS now), although this will introduce some small overhead performance-wise.

see https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)
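
The short version from that page, with the VM ID and disk serial as placeholders:

# find the stable device names
ls -l /dev/disk/by-id/
# attach a whole physical disk to VM 100 as scsi2
qm set 100 -scsi2 /dev/disk/by-id/ata-ST14000NM0018_XXXXXXXX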
As I have repeated above, I don't have time to redo everything using passthrough and risk messing something else up. Not to mention TrueNAS wants the entire hard drive, and I only want a small slice of each one to make a super fast striped drive. I see no settings in TrueNAS to create a pool using only a portion of the drives. If there is such a place, I would love to know.

Mainly I'm looking to use Proxmox the way it is supposed to work out of the box, by adding drives to a VM via the GUI. My machine has tons of power, so I'm not concerned with overhead in any configuration as long as it is easy and fast through the GUI. The VM limits VirtIO SCSI devices to about 30 when it should be nearly unlimited, and I'm looking for an answer as to why this is happening.

Thanks.
 
As I have repeated above, I don't have time to redo everything using passthrough and risk messing something else up. Not to mention TrueNAS wants the entire hard drive, and I only want a small slice of each one to make a super fast striped drive. I see no settings in TrueNAS to create a pool using only a portion of the drives. If there is such a place, I would love to know.
The link isn't about real physical passthrough (PCI passthrough of the HBA). It's about pseudo passthrough, where virtio takes a full drive or just a partition and virtualizes it so it can be used inside the VM. So it is still using virtio like with virtual disks (zvols), but backed by the drive/partition instead of a zvol on a ZFS pool. That way you could create 2 partitions on each drive and bring them into your TrueNAS VM, and you wouldn't need to run ZFS on top of ZFS, which would create a huge overhead and waste a lot of RAM. But I'm not sure this will work with more than 30 drives, because it is also using virtio.
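
A rough, untested sketch for one drive (device names, sizes and IDs are placeholders, and the ~30 device limit still applies since this also goes through virtio):

# partition 1 = small slice for the fast stripe, partition 2 = the rest
sgdisk -n 1:0:+100G -n 2:0:0 /dev/sdb
# hand both partitions to the TrueNAS VM
qm set 100 -scsi5 /dev/disk/by-id/ata-ST14000NM0018_XXXXXXXX-part1
qm set 100 -scsi6 /dev/disk/by-id/ata-ST14000NM0018_XXXXXXXX-part2

Inside TrueNAS you would then build the fast stripe from all the part1 devices and the big pool from the part2 devices.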
Mainly I'm looking to use Proxmox the way it is supposed to work out of the box, by adding drives to a VM via the GUI. My machine has tons of power, so I'm not concerned with overhead in any configuration as long as it is easy and fast through the GUI. The VM limits VirtIO SCSI devices to about 30 when it should be nearly unlimited, and I'm looking for an answer as to why this is happening.
Your hardware isn't that great for such an amount of storage. The rule of thumb is 4 GB + 1 GB of RAM for each 1 TB of raw storage, or 4 GB + 5 GB of RAM for each 1 TB if you want to use deduplication. So that would be about 508 GB of RAM (or about 2,524 GB with deduplication) just for ZFS on the host, and since you want to run ZFS on top of ZFS, you would need the same amount inside the TrueNAS VM again. Now you need roughly 1,016 GB or 5,048 GB of RAM just for ZFS. It's just a rule of thumb and it will work with way less RAM, but the more RAM you have, the faster the pool will be.
Don't underestimate ZFS. Overhead can add up really badly.
 
Thank you for the in-depth reply.

I didn't start off with ZFS on the host; I just used the GUI for the node's storage: added a disk directory, partitioned/formatted 8 GB on each drive with ext4 (100 MB didn't work). Then I opened TrueNAS, added the drives as a ZFS stripe, and shared it as NFS to another VM in Proxmox. It performs as slow as a single drive. I thought perhaps it was because OVH has everything networked, including the drives; they virtualize everything, and their internal network is about as slow as the HDDs. That's what made me wonder whether this is the cause of the slow stripe or whether I just missed a step in configuration.
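
While a copy is running I can watch from the TrueNAS shell whether the writes actually spread across all the disks (pool name is just a placeholder):

zpool iostat -v tank 1

If only one or two disks show activity, the stripe isn't doing what I think it is.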

Let me see if I comprehend this correctly...

So Proxmox won't attach more than 29 VirtIO SCSI drives because it knows there is not enough RAM?

For striped, I should use 4 GB from each drive for each TB of storage, plus 1 GB of RAM for each TB on that stripe?

A striped drive of 10 x 14TB drives would be 4GB x 14 = 56GB on each drive = 560 GB stripe using 560GB RAM. Double that for a 1TB stripe.

Did I get that correct?

This has been very helpful!
Thank you.
 
So Proxmox won't attach more than 29 VirtIO SCSI drives because it knows there is not enough RAM?
No, Proxmox or FreeNAS wouldn't limit your number of drives just because there is not enough RAM for ZFS. It would just be slow or wouldn't run stably.
For striped, I should use 4 GB from each drive for each TB of storage, plus 1 GB of RAM for each TB on that stripe?

A striped drive of 10 x 14TB drives would be 4GB x 14 = 56GB on each drive = 560 GB stripe using 560GB RAM. Double that for a 1TB stripe.

Did I get that correct?
No. ZFS needs to cache a lot; this is done in the ARC, in RAM. The more storage you have, the bigger your ARC needs to be. But yes, if you use any kind of ZFS (and FreeNAS/TrueNAS uses ZFS only) and you want to use 504 TB of HDDs, you would want something like 500 GB of RAM for the ARC (or basically as much as possible if you want to use all the features like deduplication, because the deduplication table needs a lot of RAM).
If you use 10x 14 TB drives with ZFS, the rule of thumb would be 144 GB of RAM (4 GB + 10*14*1 GB) without deduplication, or 704 GB of RAM (4 GB + 10*14*5 GB) with it.
And if you use any kind of software RAID through the Proxmox GUI, it will also use ZFS for that.
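
On the Proxmox host you can check how much the ARC is using right now and cap it if the host and the VMs start fighting over RAM (the 32 GiB value is only an example):

# current ARC size on the host
arc_summary | grep -A3 "ARC size"
# cap it via modprobe options, then rebuild the initramfs and reboot
echo "options zfs zfs_arc_max=34359738368" >> /etc/modprobe.d/zfs.conf
update-initramfs -u -k all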

If you have never worked with ZFS before, you should read this to get a brief understanding of how ZFS works.
 
So the stripe size does not matter? What matters for RAM is the size of the HDD itself?
I only need a small portion of each one.
Thanks.
 
OK, I just rebuilt the TrueNAS VM with an 8-HDD stripe using 32 GB of each of the 14 TB drives. 140 GB RAM, 16 CPU cores.

Then I set up an Ubuntu desktop VM and mounted the NFS share. 32 CPU cores, 96 GB RAM.

Ran a speed test from the Ubuntu machine's mounted NFS share.

I get 140 to 596 MB/s write speed, which seems to match the internal network speed of OVH. I think the bottleneck is their network; can any other OVH customer verify this?


wget http://proof.ovh.net/files/10Gb.dat -O /dev/null
--2021-06-02 18:23:20-- http://proof.ovh.net/files/10Gb.dat
Resolving proof.ovh.net (proof.ovh.net)... 188.165.12.106, 2001:41d0:2:876a::1
Connecting to proof.ovh.net (proof.ovh.net)|188.165.12.106|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1250000000 (1.2G) [application/octet-stream]
Saving to: ‘/dev/null’

/dev/null 100%[===================>] 1.16G 281MB/s in 4.4s
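
To narrow down whether it's the disks or OVH's network, my plan is to run the same write test once directly on the TrueNAS pool and once over the NFS mount, something like this with fio (assuming fio is available on both machines; paths and pool name are placeholders, and the test directories need to exist first):

# on the TrueNAS VM, straight onto the pool
fio --name=seq --rw=write --bs=1M --size=10g --end_fsync=1 --directory=/mnt/tank/test
# on the Ubuntu VM, through the NFS mount
fio --name=seq --rw=write --bs=1M --size=10g --end_fsync=1 --directory=/mnt/nfs/test

If the local number is far higher than the NFS one, the network (or NFS itself) is the limit; if both sit around 150-600 MB/s, the stripe itself is the problem.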


Thanks in advance!
Awesome info.