RAIDz1 block size to maximize usable space?

Just out of curiosity as to how this ends up,
please also report back in a year on how it performs (fragmentation).
SSDs as special vdevs really could help here; even Proxmox recommends this for Proxmox Backup Server.
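
If you go that route it's a one-liner, but be aware that a special vdev cannot be removed again from a pool that contains raidz vdevs. A rough sketch (pool and device names are placeholders):

Code:
# Add a mirrored pair of SSDs as a special vdev for metadata
# (permanent on a pool with raidz vdevs; zpool may ask for -f because
# the mirror's redundancy differs from the raidz1 data vdev)
zpool add yourpool special mirror /dev/disk/by-id/ssd-1 /dev/disk/by-id/ssd-2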

Although I am still not sure if the OP really is using Proxmox VE as a "backup" instead of Proxmox Backup Server.
 
Just use RAIDz1, ashift=12, volblocksize=64k, which will have around 11.2% overhead, resulting in ~17.2TiB usable (9x2.4TB=21.6TB raw minus 2.4TB of parity; please keep the TB to TiB conversion in mind). How much the VM will use on disk depends on how compressible the data is. Being backups, they are probably compressed already, so I would not expect much gain here. I think this will work OK.
More like 15.36 TB or 13.97 TiB if you keep the "don't fill a ZFS pool more than 80%" rule in mind. So the 14.1 TiB virtual disk should fit but shouldn't grow much more.
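
The math, for anyone who wants to check it (bc one-liners; rounded results in the comments):

Code:
# 9 disks x 2.4 TB = 21.6 TB raw; one disk of parity leaves 19.2 TB
echo '9 * 2.4 - 2.4' | bc
# TB to TiB: 19.2 TB is about 17.46 TiB
echo '19.2 * 10^12 / 2^40' | bc -l
# 80% fill rule: about 13.97 TiB of that is safely usable
echo '0.8 * 19.2 * 10^12 / 2^40' | bc -l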
 
Please, just out of curiosity as to how this ends up, post the output of zpool list -v once the migration is done so we can see the used space on each disk. Thanks!

Code:
# zpool list -v
NAME                                        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
chs-vm01-datastore01                       19.6T  15.6T  4.01T        -         -     0%    79%  1.00x    ONLINE  -
  raidz1-0                                 19.6T  15.6T  4.01T        -         -     0%  79.6%      -    ONLINE
    scsi-35000c500b84dce17                 2.18T      -      -        -         -      -      -      -    ONLINE
    scsi-35000039988333e45                 2.18T      -      -        -         -      -      -      -    ONLINE
    scsi-35000039988333cb5                 2.18T      -      -        -         -      -      -      -    ONLINE
    scsi-35000039988333c15                 2.18T      -      -        -         -      -      -      -    ONLINE
    scsi-35000039988333229                 2.18T      -      -        -         -      -      -      -    ONLINE
    scsi-35000039988331a15                 2.18T      -      -        -         -      -      -      -    ONLINE
    scsi-350000399883316d9                 2.18T      -      -        -         -      -      -      -    ONLINE
    scsi-3500003998832dca9                 2.18T      -      -        -         -      -      -      -    ONLINE
    scsi-3500003998832d8fd                 2.18T      -      -        -         -      -      -      -    ONLINE
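
Given that CAP is already at 79%, something like this is worth re-running as the pool fills (read-only, just the columns to watch):

Code:
zpool list -o name,capacity,fragmentation,health chs-vm01-datastore01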
 
I would run fstrim (Linux) or Optimize-Volume -ReTrim (Windows) so unused sectors of the disk are trimmed from the storage, potentially recovering some space. For the trim to work, you need to make sure discard and SSD emulation are ticked for the disk, that you are using VirtIO SCSI single as the controller, and that the disk is attached as SCSI (not IDE, SATA, or vmscsi).
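
For example, something along these lines on the host (VM ID 100 and the disk/storage names are assumptions, adjust to your setup):

Code:
# Use the VirtIO SCSI single controller
qm set 100 --scsihw virtio-scsi-single
# Re-attach scsi0 with discard and SSD emulation enabled
qm set 100 --scsi0 chs-vm01-datastore01:vm-100-disk-0,discard=on,ssd=1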
 
Is there any risk of data loss with this method? My disk is attached as scsi0 with a VirtIO SCSI single controller. I'd have to shut the VM down and tick off discard and SSD emulation and then boot the VM back into Windows to run these commands.
 
That is the beauty of ZFS ;)

Again, I am not sure what your config is, what your VM(s) are, what is in them and so on.
Seems like we have to worm everything out of you. But ZFS doesn't come for free.

As far as I understand it, you have one single 14TB Windows VM that you are moving from VMware to Proxmox.
I don't know why you would call that a backup, but OK.
So you have now decided to go with 64k, which is perfect for your pool geometry when it comes to storage efficiency!
But for every write smaller than 64k you will suffer I/O amplification and fragmentation.
Since you are already at 80% full, your pool is already on the edge. If you get even slightly more data or fragmentation, performance will absolutely tank.
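
For what it's worth, the block size for new disks lives in the storage config; existing zvols keep the volblocksize they were created with (storage ID and zvol name below are assumptions):

Code:
# Default block size for new zvols on this storage
pvesm set your-zfs-storage --blocksize 64k
# What an existing disk actually uses
zfs get volblocksize chs-vm01-datastore01/vm-100-disk-0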
 

I call it a backup because it's a File Backup server that gets file backups replicated to it using the software Vembu. There is a main file backup server running Vembu, and this one gets the data replicated to it.

I'm well aware of the potential performance pitfalls but I don't have a choice in this case.
 
tick off discard and SSD emulation and then boot the VM back into Windows to run these commands.
Those options must be ticked on.

I've used Optimize-Volume -DriveLetter C -ReTrim -Verbose like a thousand times without issues. It only touches free sectors of the virtual disk and issues a trim/discard. This is useful if the space used in the VM is less than the 14TB of the drive itself.
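
You can watch the space actually coming back on the Proxmox host, e.g. (zvol name is an assumption):

Code:
# USED should drop after the retrim if the discards reach the storage
zfs list -o name,used,referenced,volsize chs-vm01-datastore01/vm-100-disk-0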

It really is not. At least on ZFS, it takes literally less than one second and uses 0 space.
BUT! The modified data written since the snapshot will use space and may fill the storage.

But every write that is smaller than 64k you will suffer from io amplification and fragmentation.
That will happen almost any time with RAIDz, unfortunately.
 

Would I set discard and SSD emulation back to off after running the Optimize-Volume?
 
Leave it on so deletes will send a discard to the storage and your volumes will stay thin provisioned.

Uh... I just remembered this: is "Thin provisioning" ticked at Datacenter -> Storage -> "yourZFSstorage"?
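
For reference, that GUI checkbox maps to the sparse option of the zfspool storage, so you can also check or set it on the CLI (storage ID is an assumption):

Code:
# Thin provisioning shows up as "sparse 1" in the storage definition
grep -B1 -A4 'chs-vm01-datastore01' /etc/pve/storage.cfg
# Or set it directly
pvesm set chs-vm01-datastore01 --sparse 1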
 

Yes.

Also, the Optimize-Volume command works on C, but most of my data (12TB+) is on D. And the command fails on D with...

Code:
> Optimize-Volume -DriveLetter D -ReTrim -Verbose
VERBOSE: Invoking retrim on Data (D:)...
VERBOSE: Performing pass 1:
VERBOSE: Retrim: 0% complete...
Optimize-Volume : The operation failed with return code 40002
Activity ID: {43797629-b7d1-42f8-a808-75104522291a}
At line:1 char:1
+ Optimize-Volume -DriveLetter D -ReTrim -Verbose
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (StorageWMI:ROOT/Microsoft/...age/MSFT_Volume) [Optimize-Volume], CimException
+ FullyQualifiedErrorId : StorageWMI 40002,Optimize-Volume
 
I call it a backup because it's a File Backup server that gets file backups replicated to it using the software Vembu.
I don't know about Vembu, but some backup providers allow you to back up to S3 or NFS and so on.
That can be an advantage over block storage.
BUT! The modified data written since the snapshot will use space and may fill the storage.
Of course, that is how snapshots work.
That will happen almost any time with RAIDz, unfortunately.
You can use a volblocksize of 16k and get less fragmentation, but only 66% storage efficiency (see the sketch at the end of this post).

That is why some people (the Proxmox manual and me) say to ditch RAIDz for block storage. It almost never works out, unless you have SSDs that can handle the I/O amplification and you don't care about slower SSDs and the extra TBW.
Would I set discard and SSD emulation back to off after running the Optimize-Volume?
I would set discard to on and leave it. Windows should detect that with the right VirtIO driver and should be able to "clean up" the thin-provisioned storage. Unless needed, I would not set SSD emulation to on, since the underlying storage is not SSDs.
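
To put numbers on the 16k vs 64k trade-off, here is a rough sketch of the RAIDz1 allocation math for this 9-disk pool (assumes ashift=12, i.e. 4K sectors; a simplified model, not an exact ZFS simulation):

Code:
#!/bin/bash
# One parity sector per stripe of up to 8 data sectors (9 disks - 1),
# and each allocation is padded to a multiple of parity+1 = 2 sectors.
for vbs_k in 16 64; do
    d=$(( vbs_k * 1024 / 4096 ))      # data sectors per block
    p=$(( (d + 7) / 8 ))              # parity sectors, ceil(d/8)
    t=$(( d + p ))
    t=$(( (t + 1) / 2 * 2 ))          # pad to a multiple of 2
    echo "volblocksize=${vbs_k}k: ${d} of ${t} sectors are data ($(( 100 * d / t ))%)"
done

This prints roughly 66% for 16k and 88% for 64k, matching the numbers above.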
 
Hey @iamspartacus

Just came across this thread and am seeing similar issues.

Did you ever work out a solution to get around the 40002 error?

Would love to hear more about your experience.

Cheers
G
 
