New ZFS pool - how do I add the new disk space for installing virtual machines?

Jul 21, 2020
I have Proxmox installed on a 64 GB disk. I have now added three additional 4 TB drives and created a ZFS pool from them. How do I add the new ZFS pool as storage where I can install virtual machines?

root@debian-mt:/home/miroza# zpool status
  pool: RAID
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        RAID                                          ONLINE       0     0     0
          raidz1-0                                    ONLINE       0     0     0
            ata-WDC_WD40EFAX-68JH4N0_WD-WX32D20HKJSH  ONLINE       0     0     0
            ata-WDC_WD40EFAX-68JH4N0_WD-WX42D200YCNZ  ONLINE       0     0     0
            ata-WDC_WD40EFAX-68JH4N0_WD-WX22D10F1UPR  ONLINE       0     0     0

errors: No known data errors
root@debian-mt:/home/miroza#
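One way to make an existing pool usable for guests is to register it as a ZFS storage in Proxmox VE, either in the GUI (Datacenter -> Storage -> Add -> ZFS) or on the command line. A minimal sketch, assuming the pool is named RAID as above; the storage ID "zfs-raid" is just an example name:

```shell
# Register the ZFS pool "RAID" as a Proxmox VE storage that can hold
# VM disk images and container root filesystems.
# "zfs-raid" is an arbitrary storage ID chosen for this example.
pvesm add zfspool zfs-raid --pool RAID --content images,rootdir

# Check that the new storage shows up and is active.
pvesm status
```

After this, the storage appears as a target when creating a VM or container.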
 
In general, mirror vdevs will give you the best IO performance and avoid the problem of unexpected parity/padding overhead eating into usable space, which RAIDZ has with VM volumes.
So a 4-disk RAID10-style layout (two mirror vdevs) would be a good start.
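A striped-mirror ("RAID10") pool as described above could be created roughly like this; the pool name "tank" and the /dev/disk/by-id/ paths are placeholders for your actual four devices:

```shell
# Create a pool from two mirror vdevs (RAID10-style layout).
# -o ashift=12 forces 4 KiB sector alignment, which suits most modern drives.
# Replace the by-id paths with the IDs of your own disks.
zpool create -o ashift=12 tank \
    mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
```

ZFS stripes writes across the two mirrors, so you get roughly half the raw capacity but read/write IOPS scale with the number of vdevs.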

A general "works for everyone" recommendation cannot be given. There are too many places in the design of the pool where you can tune performance (ZIL/SLOG, special device, ...).
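For example, fast SSDs can later be attached to an existing pool as a separate log (SLOG) or as a special device. A sketch only; "tank" and the device paths are placeholders:

```shell
# Attach a fast SSD as a separate ZFS intent log (SLOG) to speed up
# synchronous writes. "tank" and the device path are placeholders.
zpool add tank log /dev/disk/by-id/nvme-SSD1

# A "special" vdev holds pool metadata (and optionally small blocks) on
# fast storage. It should be mirrored: losing it loses the whole pool.
zpool add tank special mirror /dev/disk/by-id/nvme-SSD2 /dev/disk/by-id/nvme-SSD3
```

Whether either helps depends entirely on the workload, which is why no universal recommendation exists.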

Looking more closely at the disks in use, I very much recommend getting rid of them. These are SMR drives ( https://blog.westerndigital.com/wd-red-nas-drives/ ), which are terribly slow under sustained writes and can cause the pool to fail: they can respond so slowly that the kernel considers them failed and drops them from the pool. If you haven't heard about all this (it surfaced a few months ago), google "wd red smr".

Also, using faster disks (higher RPM) will help performance.
 
Thanks for your help, my supplier agreed to exchange them for another model.
But you still haven't told me what solution you would recommend for 3 x 4TB in a small company :)
 
