[SOLVED] Add new disks raid-1 with ssh

LGTMozart
New Member · Sep 3, 2023
Hi!

I have a PBS with two 1 TB disks in RAID-1, and the pool is full (I didn't set a ZFS quota :(, I will next time...). Soon I should receive two 3 TB disks that I want to set up in RAID-1 alongside my two existing disks.
First question: can I add those new disks?

Because my PBS is full, it is very difficult for me to access the PBS GUI, but I can access the PBS over SSH.
Second question: can I add these new disks from the command line? Where can I find those commands? I'm sorry, but I'm not qualified enough to do this without help.

Once I have added those new disks, I will need to move some of my VM backups onto this new storage. How can that be done? Afterwards, I think I can start a garbage collection and regain access to the GUI.
Last question: if I move VM backups onto this new storage, will I have to reconfigure my PVE, or will this happen automatically?

I read https://pbs.proxmox.com/docs/ but it is mainly for the GUI.
 
I have a PBS with two 1 TB disks in RAID-1, and the pool is full (I didn't set a ZFS quota :(, I will next time...). Soon I should receive two 3 TB disks that I want to set up in RAID-1 alongside my two existing disks.
First question: can I add those new disks?
Are the old disks a ZFS mirror (raid1)? Then yes, you could add two more disks to create a ZFS striped mirror (raid10).

Because my PBS is full it is really difficult to me to access the PBS' GUI but I can access PBS with ssh.
Second question: can I add those new disks with command lines? Where can I find those commands? I'm sorry but I’m not qualified enough to do it without help.
Yes, this is only possible via the CLI. The command you are looking for is "zpool add": https://openzfs.github.io/openzfs-docs/man/master/8/zpool-add.8.html
Something like: zpool add YourExistingPoolName mirror /dev/disk/by-id/yourFirstNewDisk /dev/disk/by-id/yourSecondNewDisk
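As a hedged sketch of the whole sequence (the pool name and disk IDs below are placeholders, not your real ones; `zpool status` and `ls /dev/disk/by-id/` will show your actual names):

```shell
# Sketch only: pool name and disk IDs are placeholders.

# 1. Find the existing pool and confirm it is currently a single mirror vdev.
zpool status

# 2. Identify the stable by-id names of the two new disks.
ls -l /dev/disk/by-id/

# 3. Dry run first: "-n" shows what the new layout would look like
#    without changing anything. "zpool add" is hard to undo, so check carefully.
zpool add -n YourExistingPoolName mirror \
    /dev/disk/by-id/yourFirstNewDisk \
    /dev/disk/by-id/yourSecondNewDisk

# 4. If the dry run looks right, run it again without "-n" to actually
#    add the new disks as a second mirror vdev (creating the striped mirror).
zpool add YourExistingPoolName mirror \
    /dev/disk/by-id/yourFirstNewDisk \
    /dev/disk/by-id/yourSecondNewDisk

# 5. Verify: the pool should now show two mirror vdevs.
zpool status
```

The `-n` dry-run flag is worth the extra step here, because accidentally adding a disk as a single (non-mirrored) vdev would weaken the pool's redundancy.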

Once I have added those new disks, I will need to move some of my VM backups onto this new storage. How can that be done? Afterwards, I think I can start a garbage collection and regain access to the GUI.
Last question: if I move VM backups onto this new storage, will I have to reconfigure my PVE, or will this happen automatically?
Converting your raid1 to raid10 will extend your existing storage from 1 TB to 4 TB.
 
Are the old disks a ZFS mirror (raid1)? Then yes, you could add two more disks to create a ZFS striped mirror (raid10).
Yes, they are, but I don't remember whether the raid1 was created by the HP ProLiant server or during the PBS configuration. I have to check.
If I create a striped mirror, I hope this will not erase my data.

Yes, this is only possible via the CLI. The command you are looking for is "zpool add": https://openzfs.github.io/openzfs-docs/man/master/8/zpool-add.8.html
Something like: zpool add YourExistingPoolName mirror /dev/disk/by-id/yourFirstNewDisk /dev/disk/by-id/yourSecondNewDisk
Thank you for the URL. I'm not sure I understand: do I have to set up the new disks in raid1 (mirror) before using this command, or do I just connect the disks and PBS will add them to my existing pool and create a raid10?
Again, I hope this operation will not erase my data.
 
Do I have to set up the new disks in raid1 (mirror) before using this command, or do I just connect the disks and PBS will add them to my existing pool and create a raid10?
No. That command will add the 2 new disks as a mirror and stripe it with the existing vdev (your existing mirror), resulting in a raid10.
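The operation only appends a new vdev; the existing data stays untouched on the first mirror. A sketch of what `zpool status` should roughly show afterwards (pool and device names are placeholders, the layout shown is the typical one):

```shell
# Check the pool layout after the "zpool add" (names below are placeholders):
zpool status YourExistingPoolName
#   pool: YourExistingPoolName
#  state: ONLINE
# config:
#         NAME          STATE
#         YourPoolName  ONLINE
#           mirror-0    ONLINE    <- the original 1 TB disks, data untouched
#             disk1     ONLINE
#             disk2     ONLINE
#           mirror-1    ONLINE    <- the two new 3 TB disks
#             disk3     ONLINE
#             disk4     ONLINE
```

Seeing two `mirror-N` entries side by side under the pool name confirms the striped-mirror (raid10) layout.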
 
I did as you said. It worked well!

I just ran:
zpool list
to find my pool's name, and used sdc and sdd instead of /dev/disk...

Thank you!
 
to find my pool's name, and used sdc and sdd instead of /dev/disk...
sdc and sdd will work in this case, but such names should usually be avoided. They make it harder, for example, to replace a failed disk later, because ZFS will then report that, say, /dev/sdc is dead. And because the disk is dead, you can no longer run fdisk, lsblk or similar tools to find out which disk model, serial or WWN it was. It would be much easier if ZFS reported something like /dev/disk/by-id/sata-Samsung_Evo_870_1TB_SerialNumberOfTheDisk. Then you would directly know the manufacturer, model and serial, so when replacing the disk you could simply search for the serial printed on it.
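To see, while all disks are still healthy, which by-id name corresponds to which sdX device, something like this should work on most Linux systems (the actual names depend entirely on your hardware):

```shell
# Map the stable by-id names to their kernel device names (sda, sdb, ...);
# each by-id entry is a symlink pointing at the short kernel name.
ls -l /dev/disk/by-id/ | grep -v part

# Or list model and serial per whole disk with lsblk:
lsblk -d -o NAME,MODEL,SERIAL,SIZE
```

Noting down this mapping now makes it much easier to identify the right physical disk later, even after it has failed.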
 
sdc and sdd will work in this case, but such names should usually be avoided. They make it harder, for example, to replace a failed disk later, because ZFS will then report that, say, /dev/sdc is dead. And because the disk is dead, you can no longer run fdisk, lsblk or similar tools to find out which disk model, serial or WWN it was. It would be much easier if ZFS reported something like /dev/disk/by-id/sata-Samsung_Evo_870_1TB_SerialNumberOfTheDisk. Then you would directly know the manufacturer, model and serial, so when replacing the disk you could simply search for the serial printed on it.
To replace a disk, don't I just need to disconnect the failed disk and put a new one in the same bay?
 
To replace a disk, don't I just need to disconnect the failed disk and put a new one in the same bay?
No. You will need to specify the old and the new disk with the "zpool replace" command, and if you also boot from this disk, you first need to clone the partition table and sync the bootloader.
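As a hedged sketch of that replacement procedure (all device names are placeholders; the sgdisk and proxmox-boot-tool steps only apply when the pool is also your boot pool, as on a standard Proxmox ZFS install, and the partition numbers follow the usual Proxmox layout):

```shell
# Sketch only: all device names are placeholders.

# If you boot from this pool: copy the partition table from a still-healthy
# mirror member to the new disk, then randomize the new disk's GUIDs.
sgdisk /dev/disk/by-id/healthyDisk -R /dev/disk/by-id/newDisk
sgdisk -G /dev/disk/by-id/newDisk

# Tell ZFS to replace the failed member with the new one.
# On a Proxmox boot disk, ZFS typically lives on partition 3;
# on a pure data disk you would use the whole-disk names instead.
zpool replace YourPoolName \
    /dev/disk/by-id/failedDisk-part3 \
    /dev/disk/by-id/newDisk-part3

# Watch the resilver until the pool is ONLINE again.
zpool status

# Finally, make the new disk bootable again (Proxmox ships this tool;
# partition 2 is the usual ESP on their layout).
proxmox-boot-tool format /dev/disk/by-id/newDisk-part2
proxmox-boot-tool init /dev/disk/by-id/newDisk-part2
```

This mirrors the procedure in the Proxmox documentation for changing a failed bootable device; for a pure data pool only the `zpool replace` step is needed.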
 
