[ZFS] Pool not showing in PVE

Romn

New Member
Mar 8, 2022
Hi,
I have three zfs pools on my PVE server:
Code:
~# zpool status
  pool: fast
 state: ONLINE
config:

    NAME                                                 STATE     READ WRITE CKSUM
    fast                                                 ONLINE       0     0     0
      ata-Samsung_SSD_860_EVO_M.2_500GB_S5GCNJ0N601716W  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
config:

    NAME                                                  STATE     READ WRITE CKSUM
    rpool                                                 ONLINE       0     0     0
      raidz1-0                                            ONLINE       0     0     0
        ata-KINGSTON_SA400S37240G_50026B77826471A2-part3  ONLINE       0     0     0
        ata-KINGSTON_SA400S37240G_50026B778258B54F-part3  ONLINE       0     0     0
        nvme-eui.0025385581b474d2-part3                   ONLINE       0     0     0

errors: No known data errors

  pool: slow
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    slow        ONLINE       0     0     0
      sdd       ONLINE       0     0     0

rpool was created at installation time.
fast was created in the GUI, but I couldn't create slow in the GUI as the disk was not appearing, so I created it with zpool create -f -o ashift=12 -O compression=lz4 slow /dev/sdd.

As you can see in this screenshot, the slow pool appears in the list of ZFS pools but not in the list on the left, and if I want to add it to a VM it doesn't appear in the list.
Screenshot from 2022-07-15 21-55-13.png


My problem might be (please confirm) that I didn't add the pool to PVE; in particular, the wiki ZFS: Tips and Tricks recommends using:
Code:
pvesm add zfspool <storage-ID> -pool <pool-name>

And my question, which sounds dumb, but I haven't found the answer in my research: what is the storage ID?
I tried with the UUID (pvesm add zfspool 15788445795718037614 -pool slow) and the location (pvesm add zfspool /dev/sdd -pool slow), but neither works.

Thank you.
 
I see a lot of problems in general here:
1.) Your pools "fast" and "slow" are single disks, so there is no bit rot protection. When doing scrubs, ZFS can tell you whether data got corrupted, but it can't do anything to repair it, as you don't have any parity data.
2.) You mix SATA and NVMe in "rpool". So the additional money for the NVMe is basically wasted as the two SATA SSDs will cripple the speed of the NVMe SSD.
3.) You only have consumer SSDs and, even worse, QLC consumer SSDs. In my opinion QLC SSDs should never be bought, because of the horrible performance and durability. They are not that much cheaper than TLC SSDs and you only get a fraction of the performance and life expectancy. In general, enterprise grade SSDs are recommended for ZFS, as ZFS has a lot of overhead and might kill the SSDs within months. And because of the missing powerloss protection of consumer SSDs, you can never be sure that you won't lose your complete pool on a power outage. Sync write performance will be very bad too, and as ZFS does a lot of sync writes, performance in general might be bad, with high IO delay slowing down the whole server.
4.) Your "rpool" is using raidz1 and you probably didn't increased your volblocksize from 8K to 16K. In that case VMs will waste alot of space because of padding overhead.
5.) I would really recommend adding disks using "/dev/disk/by-id/YourDisk" and not "/dev/YourDisk". It makes it much easier to identify the disk you need to replace in case a disk fails in the future. If your "slow" pool's disk fails, you only know it is disk "sdd", but that is not unique and "sdd" could be called "sde", "sdf" or whatever later. With your "fast" pool it's clear: if zpool status tells you that "ata-Samsung_SSD_860_EVO_M.2_500GB_S5GCNJ0N601716W" failed, you know it's a Samsung SSD of model "860 EVO" with the serial "S5GCNJ0N601716W", and this is printed on the SSD, so it's very easy to replace the right disk.
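If you want to switch the existing "slow" pool over to by-id paths, that should be possible without recreating it, by exporting and re-importing the pool (just a sketch, assuming nothing is using the pool at that moment):
Code:
zpool export slow
zpool import -d /dev/disk/by-id slow
zpool status slow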

Now to your question:
The storage ID is a name for your storage that you can freely choose. It's what your storage will be called, and you will need to use it everywhere later when doing something with your storages, to tell PVE which storage you are referring to. So you can use pvesm add zfspool WhatEverYouWant -pool slow
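For example, if you simply want to call the storage "slow" as well (just a sketch; the -content option is optional and only tells PVE what the storage may be used for):
Code:
pvesm add zfspool slow -pool slow -content images,rootdir
After that the storage should also show up in the tree on the left and in the disk dialogs of your VMs/LXCs.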
 
Thank you for your answer.

So, to clarify first: this is just a homelab, mainly to learn and have fun, no enterprise services at all.
1.) Your pools "fast" and "slow" are single disks, so there is no bit rot protection. When doing scrubs, ZFS can tell you whether data got corrupted, but it can't do anything to repair it, as you don't have any parity data.
Yes, I am aware of it and will only store data that I can afford to lose on these (and I have an old Synology NAS to back things up).
That is why I have the main pool in RAID-5, to ensure a bit of protection (or service continuity) for what needs it there.

2.) You mix SATA and NVMe in "rpool". So the additional money for the NVMe is basically wasted as the two SATA SSDs will cripple the speed of the NVMe SSD.
I asked exactly this question on a forum and people told me the difference in performance would not be noticeable.
I was wondering how I should arrange the formatting and pairing of the drives, knowing that this is what I have (I also have a spare 500GB 2.5" SSD and a spare 1TB HDD, but I don't have any ports left on the motherboard and no space left in the m-ATX case).

3.) You only have consumer SSDs and, even worse, QLC consumer SSDs. In my opinion QLC SSDs should never be bought, because of the horrible performance and durability. They are not that much cheaper than TLC SSDs and you only get a fraction of the performance and life expectancy. In general, enterprise grade SSDs are recommended for ZFS, as ZFS has a lot of overhead and might kill the SSDs within months.
As I was saying, this is only an amateurish setup and all those drives are second-hand things that I managed to salvage.
Although if it kills the SSDs that fast, that is a bad thing.

What format and layout of the disks would you recommend?
The rest of the config is:
- AMD Ryzen 5 3600
- 64GB RAM (4x16GB Crucial DDR4)
- GeForce GTX 1650
- B450m Steel Legend motherboard
- 1x additional 1Gbps ethernet PCIe card
- Silverstone SG-02F case
And the storage you saw: 2x 240GB 2.5" SATA SSD, 1x 250GB NVMe M.2, 1x 500GB SATA M.2, 1x 2TB HDD + 1x 480GB 2.5" SATA SSD.

I could also set up the two 500GB SSDs as RAID1 and do something else with the NVMe and the leftover SATA port on the motherboard.
I chose ZFS because it sounds good (snapshots, compression, etc.), but I'm happy to stay with ext4 if it is better for SSD life. I have absolutely no idea about this (and the internet says everything and its opposite).

Main intended uses (I am just starting):
- Pi-hole LXC (with a second Pi-hole on a Raspberry Pi for redundancy, with config sync using gravity-sync)
- 1 Windows 10 VM with GPU passthrough for CAD design (Fusion 360, Creo, etc.)
- 1 Ubuntu Server VM as a Docker host to run several services (personal web projects, Jellyfin, etc.), as a VM rather than directly on the host for ease of management and backup

5.) I would really recommend adding disks using "/dev/disk/by-id/YourDisk" and not "/dev/YourDisk". It makes it much easier to identify the disk you need to replace in case a disk fails in the future. If your "slow" pool's disk fails, you only know it is disk "sdd", but that is not unique and "sdd" could be called "sde", "sdf" or whatever later. With your "fast" pool it's clear: if zpool status tells you that "ata-Samsung_SSD_860_EVO_M.2_500GB_S5GCNJ0N601716W" failed, you know it's a Samsung SSD of model "860 EVO" with the serial "S5GCNJ0N601716W", and this is printed on the SSD, so it's very easy to replace the right disk.
Good suggestion! :thumbsup:

4.) Your "rpool" is using raidz1 and you probably didn't increased your volblocksize from 8K to 16K. In that case VMs will waste alot of space because of padding overhead.

I had no idea about this kind of option.
Reading you, it seems I should redo my whole setup now.

The storage ID is a name for your storage that you can freely choose. It's what your storage will be called, and you will need to use it everywhere later when doing something with your storages, to tell PVE which storage you are referring to. So you can use pvesm add zfspool WhatEverYouWant -pool slow

Hahahaha, I am so stupid!
 
I asked exactly this question on a forum and people told me the difference in performance would not be noticeable.
I was wondering how I should arrange the formatting and pairing of the drives, knowing that this is what I have (I also have a spare 500GB 2.5" SSD and a spare 1TB HDD, but I don't have any ports left on the motherboard and no space left in the m-ATX case).
With a raidz1, your IOPS performance will only be as fast as the single slowest disk. Your A400s are horribly slow as soon as the cache gets full, so your whole pool of 3 disks wouldn't be faster than a single A400 when it comes to IOPS. Throughput will scale with the number of data-bearing disks, but it still shouldn't be faster than 2x the throughput of a single A400.
What format and layout of the disks would you recommend?
The rest of the config is:
- AMD Ryzen 5 3600
- 64GB RAM (4x16GB Crucial DDR4)
- GeForce GTX 1650
- B450m Steel Legend motherboard
- 1x additional 1Gbps ethernet PCIe card
- Silverstone SG-02F case
And the storage you saw: 2x 240GB 2.5" SATA SSD, 1x 250GB NVMe M.2, 1x 500GB SATA M.2, 1x 2TB HDD + 1x 480GB 2.5" SATA SSD.
I would just use single disks with LVM-Thin. Performance should be better, the disks should survive longer and you get more RAM for your guests. But then you will miss all the nice ZFS features, and you can never be sure that your data is healthy, as data might silently corrupt over time. And because the corruption is silent, without you noticing it, backups won't help much, as they will also only contain the same corrupted data.
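If you want to go that route for the single disks, it could look roughly like this for the "slow" disk (just a sketch: it wipes /dev/sdd completely, and the VG name, thin pool name and storage ID are placeholders you can choose freely):
Code:
# WARNING: destroys everything on /dev/sdd
pvcreate /dev/sdd
vgcreate vg_slow /dev/sdd
# thin pool using most of the VG (leave some space for metadata)
lvcreate -l 95%FREE --thinpool data vg_slow
# register it in PVE under the storage ID "slow-lvm"
pvesm add lvmthin slow-lvm -vgname vg_slow -thinpool data -content images,rootdir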
ZFS is great, but if you want all the nice enterprise grade features of ZFS, you should also use enterprise grade hardware. Even the best filesystem can't guarantee data integrity, reliability and stability when running on unreliable hardware. Especially a UPS (so you won't lose unwritten async writes), enterprise SSDs with powerloss protection (for better sync write performance and a much better life expectancy to compensate for the big overhead/write amplification) and ECC RAM (so you can be sure data won't corrupt between the CPU and the ZFS disks).

There are a lot of threads here where people complain about very high IO delay that makes the whole PVE server basically unusable, and most of them were using HDDs or QLC SSDs, especially Kingston A400s or Samsung QVOs. They fixed it by getting rid of those crappy QLC consumer SSDs and replacing them with either TLC consumer SSDs or, even better, enterprise TLC/MLC SSDs. A quick way to check your own pool is sketched after the links. See for example here:
https://forum.proxmox.com/threads/samsung-870-qvo-1tb-terrible-write-performance.82026/#post-472829
https://forum.proxmox.com/threads/proxmox-7-1-6-poor-performance.100632/post-434262
https://forum.proxmox.com/threads/high-io-on-load.105339/
https://forum.proxmox.com/threads/very-poor-performance-with-consumer-ssd.99104/
https://forum.proxmox.com/threads/poor-disk-performance.93370/
https://forum.proxmox.com/threads/abysmal-ceph-write-performance.84361/
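If you want to get a rough idea of how your own pool handles sync writes, you can run a small fio test against a dataset on the pool (just a sketch, assuming the pool is mounted at /slow; size and runtime are only examples, and the test file should be removed afterwards):
Code:
fio --name=synctest --filename=/slow/fio-test.file --ioengine=psync --sync=1 \
    --rw=randwrite --bs=4k --size=1G --numjobs=1 --runtime=60 --time_based
rm /slow/fio-test.file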

I had no idea about this kind of option.
Reading you, it seems I should redo my whole setup now.
Search this forum for "padding overhead"; I have explained it a lot of times. Here is a good article written by the ZFS head developer that explains it down to the block level: https://www.delphix.com/blog/delphi...or-how-i-learned-stop-worrying-and-love-raidz
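For the volblocksize itself: in PVE it is set per storage ("Block Size" in Datacenter -> Storage, or via pvesm) and only applies to newly created virtual disks; existing zvols keep their volblocksize, so you would need to move or recreate the disks afterwards. A sketch, assuming your ZFS storage on rpool is called "local-zfs":
Code:
pvesm set local-zfs -blocksize 16k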

Edit:
If you are lucky, your A400s might be old enough to be TLC. It looks like the early ones were TLC and the newer ones QLC.
 
