ZFS with SSDs: Am I asking for a headache in the near future?

Your problem is that your pool uses kernel-assigned device names (sda, sdb, etc.). When you remove the SSD, the remaining disks in your pool change names: e.g. the SSD was sda, and after it is removed another disk becomes sda, since device naming always starts at sda. The problem is that ZFS remembers the device path, so it still expects sda to be the pool's L2ARC/log device and not a normal pool member. See http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool
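For example, a rough sketch of adding log and cache devices with by-id paths instead of sdX names (the IDs below are only placeholders; take the real ones from ls -l /dev/disk/by-id):

Code:
# check which /dev names the pool is currently using
$ zpool status tank

# add log and cache using persistent by-id paths (placeholder IDs)
$ zpool add tank log /dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL-part1
$ zpool add tank cache /dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL-part2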
 
Yes, that's why I used the IDs when adding the log and cache. But the pve-installer only gives the option to do it by name, not by ID.

But thanks for the link, there is the option to change it to IDs.


BTW, it would be nice if the installer did this correctly by default.
 
Hi MasterTH,

a few days ago I had exactly this issue: the ZIL/L2ARC SSD in my home Proxmox server crashed. More info in this thread: ZFS boot mirror: ZIL/L2ARC device crashed.

You said:
did some more testing, after doing some writes on the pool the log device goes into the UNAVAIL state. In this state I was able to remove it from the pool and reboot the server gracefully.

How were you able to write to the pool at all? When the ZIL device is not there, the Proxmox machine does not boot up. Did you boot into the installer's debug mode? I tried this, but as you can read in my thread I was not able to import the pool, so I also was not able to remove the crashed device from the pool.
 
I did some further testing. I did the installation with sda as the drive that stores the bootloader and so on...
When the SSD is not available, the same GRUB error appears.
 
When I booted into the debug mode of the pve-installer I was able to remove the log partition, and after that the system boots up again.
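For reference, a sketch of how removing the failed log device from the debug shell can look (pool and device names are examples; use whatever zpool status actually reports):

Code:
# import the pool without mounting any datasets
$ zpool import -N rpool

# the failed log device shows up as UNAVAIL here
$ zpool status rpool

# remove it by the name/GUID shown in the status output, then export
$ zpool remove rpool <failed-log-device>
$ zpool export rpool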

Is there another way to fix a broken SSD without the Proxmox installer? (I have an issue with the KVM keyboard inside the Proxmox debug-mode installer, already posted here.)
 
This (removing the log device in pve installer debug mode) is the way that DID NOT work for me. To be able to remove the log device you must be able to import the pool in the normal way (rw). But when I tried this, the import command hung and nothing happened. The only way was to import the pool in read-only mode, but in read-only mode there is no way to remove the log device, so I got stuck.
For me the only way out was to back up the data from the pool in read-only mode and do a new installation of PVE. Now I am running my pool directly from an SSD mirror without a log device.
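For anyone in the same spot, a sketch of how such a read-only rescue import can look (mount point and backup target are just examples):

Code:
# import read-only under an alternate root so nothing is written to the pool
$ zpool import -o readonly=on -R /mnt rpool

# copy the data somewhere safe before reinstalling
$ rsync -a /mnt/ /path/to/backup/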
 
Yes, that's why I used the IDs when adding the log and cache. But the pve-installer only gives the option to do it by name, not by ID.

But thanks for the link, there is the option to change it to IDs.

What did you do to change it for your boot drive, assuming your boot drive is ZFS? The only thing I can see on that link is exporting and then importing the pool:

Changing /dev/ names on an existing pool
Changing the /dev/ names on an existing pool can be done by simply exporting the pool and re-importing it with the -d option to specify which new names should be used. For example, to use the custom names in /dev/disk/by-vdev:

Code:
$ zpool export tank
$ zpool import -d /dev/disk/by-vdev tank
 
Yes, that's why I used the IDs when adding the log and cache. But the pve-installer only gives the option to do it by name, not by ID.

But thanks for the link, there is the option to change it to IDs.


BTW, it would be nice if the installer did this correctly by default.

If you've created the pool by names instead of IDs, can you not then break your mirror (detach sdb3, for example), then zpool attach rpool sda3 /dev/disk/by-id/ata-(ID for your disk)-part3, and then once resilvered do that for the first disk as well?

Does that not cause an issue with rebooting? (i.e. does GRUB need updating, etc.?)

PROTIP: you can install a single-drive ZFS RAID 0 and then zpool attach to it later (this is how I add slightly different-sized drives together), so maybe it is easier to add the 2nd drive to this as disk/by-id, then redo the first drive.
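Something along these lines, as a rough sketch (partition names and the by-id path are examples; check yours with ls -l /dev/disk/by-id):

Code:
# detach one side of the mirror (example member)
$ zpool detach rpool sdb3

# re-attach the same partition under its persistent by-id name
$ zpool attach rpool sda3 /dev/disk/by-id/ata-YOURDISK-part3

# wait for the resilver to finish before repeating with the other disk
$ zpool status rpool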
 
If you've created the pool by names instead of IDs, can you not then break your mirror (detach sdb3, for example), then zpool attach rpool sda3 /dev/disk/by-id/ata-(ID for your disk)-part3, and then once resilvered do that for the first disk as well?

Does that not cause an issue with rebooting? (i.e. does GRUB need updating, etc.?)

You just need to export and import by ID once to change from sdX to ID. For rpools this is a bit tricky (because you cannot export the pool you are booted from, and thus cannot re-import it), but it can be done from a live CD. You should regenerate the initramfs afterwards to synchronize a potentially existing zpool.cache file, so you need to do the whole chroot dance. AFAIK GRUB does not need to be reinstalled or updated; it has its own primitive zfs/zpool handling.
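A rough sketch of that chroot dance from a ZFS-capable live CD (assuming a Debian-based environment; the dataset layout and mount points on your system may differ):

Code:
# import the root pool by ID under an alternate root
$ zpool import -d /dev/disk/by-id -R /mnt rpool

# bind the virtual filesystems and regenerate the initramfs in a chroot
$ mount --rbind /dev /mnt/dev
$ mount --rbind /proc /mnt/proc
$ mount --rbind /sys /mnt/sys
$ chroot /mnt update-initramfs -u -k all

# clean up and export before rebooting
$ umount -R /mnt/dev /mnt/proc /mnt/sys
$ zpool export rpool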

PROTIP: you can install a single-drive RAID 1 mirror and then add to it later (this is how I add slightly different-sized drives together), so maybe it is easier to add the 2nd drive to this as disk/by-id, then redo the first drive.

Since 4.4, the installer should support slightly differently sized disks.
 
Yes, you can power down the machine and export and re-import them too. (My method briefly breaks the mirror and re-attaches instead.) Neither method should interfere with the ability to (re)boot the system, of course.

BTW, here's a Linux live CD with ZFS support:

https://people.debian.org/~jgoerzen/rescue-zfs/
 
