Hello beautiful people,
I hope you're doing well.
I tried to use my additional free time these days in a useful manner and started playing with a new Proxmox installation and the ZFS filesystem.
I wanted to install a basic Proxmox VE system with a ZFS mirror on two drives, then simulate a drive failure by unplugging one of the disks, replace the "failing" drive with a new good one, resync the mirror, and have the new drive take the old drive's place in the pool.
I installed Proxmox onto a 500 GB hard drive and a 500 GB SSD.
This worked as expected; the mirrored pool shows up in the GUI as well as on the console.
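For reference, on the console I only checked it with the usual status commands, nothing special:
Code:
# zpool status rpool
# zpool list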
I then unplugged the hard disk /dev/sda and shortly after the pool showed up as degraded with /dev/sda missing.
My server does not support hotplug, so I had to shut it down, replace the disk and fire it back up again.
I inserted a blank new SSD and the server booted up again, which was quite nice.
In ZFS:_Tips_and_Tricks#Replacing_a_failed_disk_in_the_root_pool it says it "could be interesting if it's /dev/sda that's failing", so I was a bit worried here. But lucky me, this worked without any problems.
In the GUI the old disk is shown as completely missing, and on the console you can see that a new /dev/sda is there; apart from that, nothing has been done to it yet.
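In case it helps, this is how I am identifying the new disk on the console, just listing the block devices and the stable /dev/disk/by-id/ names:
Code:
# lsblk
# ls -l /dev/disk/by-id/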
This is the part where I honestly get a bit confused.
In ZFS:_Tips_and_Tricks#Replacing_a_failed_disk_in_the_root_pool the example is for a RAIDZ-1 (RAID5) scenario. I don't have that; I have a mirror.
Additionally, it says to install GRUB. But ZFS_on_Linux#_bootloader says that when EFI is used instead of Legacy BIOS, Proxmox uses systemd-boot instead of GRUB. My server is currently configured for UEFI rather than Legacy BIOS.
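So my assumption is that instead of installing GRUB I would have to copy the partition layout to the new disk and set up its ESP for systemd-boot. Purely as a guess from reading that wiki section, and assuming the surviving disk is /dev/sdb and the ESP is partition 2 (both assumptions on my part), I imagine something roughly like this:
Code:
# sgdisk /dev/sdb -R /dev/sda
# sgdisk -G /dev/sda
# pve-efiboot-tool format /dev/sda2
# pve-efiboot-tool init /dev/sda2
Please correct me if pve-efiboot-tool is not the right tool for this.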
Therefore, the steps mentioned in ZFS_on_Linux#_zfs_administration under "Changing a failed device" should apply to me, right?
If so, I have the following questions about this command:
Code:
# zpool replace -f rpool <old device> <new device>
What exactly is <old device>? Is it /dev/disk/by-id/ata-ST9500420ASG_SERiAL-part3, or is it the number 8350069619613282498?
What is <new device>? Is it /dev/sda, or is it the new /dev/disk/by-id/ata-Samsung_SSD_860_EVO_500GB_SERiAL?
I'm confused. What is the logic here?
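Just to make the question more concrete, this is my current guess, assuming <old device> is the GUID that zpool status shows for the missing disk and <new device> is the third partition of the new SSD after copying the partition table over (the -part3 suffix is only my assumption, based on how the old disk was partitioned):
Code:
# zpool replace -f rpool 8350069619613282498 /dev/disk/by-id/ata-Samsung_SSD_860_EVO_500GB_SERiAL-part3
Is that right, or am I mixing up the identifiers?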
Thank you in advance
halb9