[SOLVED] Making rpool mirrored if it was not already

"wipefs -a /dev/sdb3" or if error even "wipefs -af /dev/sdb3"
if ok again "zpool attach -f rpool /dev/sda3 /dev/sdb3"
 
Something unknown happened there, as zpool status shows "Removal of vdev 1 copied ...". Why a removal, when the pool was a single disk and you want to extend it to a mirror of the same size? Maybe too many wrong cmds before.
PS: Partitions 1+2 are just created but still empty without the proxmox-boot-tool cmds described in https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_zfs, so if sda crashes there is no booting possible from sdb, even after the zpool attach ... has worked - don't forget that.
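As a minimal sketch of those proxmox-boot-tool steps, assuming /dev/sdb2 is the new disk's ESP and the partition table has already been copied over as described in the guide:
Code:
# format the new ESP and register it for booting
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2

# check that both ESPs are now listed
proxmox-boot-tool status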
 
Your real pool is NOT a mirror made with "attach"; it's a "stripe" with double the size, reached with "add", so there's no redundancy for booting.
What is going on with your zpool in your test VM drws ?? Maybe start again completely from the beginning there ... :)
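For reference, the difference between the two commands (a sketch using the test VM's device names from this thread, not something to run blindly on the real pool):
Code:
# "add" puts sdb3 in as a second top-level vdev -> striped pool, double size, no redundancy
zpool add rpool /dev/sdb3

# "attach" puts sdb3 next to sda3 inside one vdev -> mirror-0, same size, redundant
zpool attach rpool /dev/sda3 /dev/sdb3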
 
Here are my real disks on the physical machine/server:

1. I remove the 2nd disk by its ID:
Code:
zpool remove rpool /dev/disk/by-id/
2. Format the disk.
3. Attach the disk with "zpool attach -f rpool /dev/sdi3 /dev/sdj3" - or should I again use the by-ID path, ...part3? (see the sketch at the end of this post)

"
sdi 8:128 0 931.5G 0 disk
├─sdi1 8:129 0 1007K 0 part
├─sdi2 8:130 0 1G 0 part
└─sdi3 8:131 0 930G 0 part
sdj 8:144 0 931.5G 0 disk
├─sdj1 8:145 0 1007K 0 part
├─sdj2 8:146 0 1G 0 part
└─sdj3 8:147 0 930G 0 part
"
It should be straightforward?
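If the by-ID question from step 3 matters (it does - those names stay stable across reboots, unlike /dev/sdX), the attach would look roughly like the sketch below. The ID strings are placeholders; take the real ones from the by-id listing:
Code:
# find the stable names of both disks (placeholder IDs below, use your own)
ls -l /dev/disk/by-id/ | grep -E 'sdi3|sdj3'

# attach the new disk's partition 3 next to the existing one
zpool attach rpool /dev/disk/by-id/ata-OLD_DISK_ID-part3 /dev/disk/by-id/ata-NEW_DISK_ID-part3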
 

I just posted my real server disks. Could you please let me know what your recommendation is? Will that process allow me to make the disks a mirror or not?
 
How did you know it's a "stripe"? So that I can check the same thing on my server. However, I know my rpool is not a mirror, and that is why I opened this thread: to find out how to make it a mirror.
 
So maybe I understand it wrong ... are your real disks passed through to your VM drws, or are they the rpool of dw-srv-p4 ??
Try what you have: /dev/sdj3
PS: I really cannot imagine why users take zfs / a zfs mirror as the filesystem for the pve OS, as zfs even kills SSDs/NVMes with its pve db. With hw-raid1 there is no cmd needed after exchanging a disk, just a quick check that it's fine again - zfs is really for masochists.
...
NAME                                        STATE  READ WRITE CKSUM
rpool                                       ONLINE    0     0     0
  mirror-0                                  ONLINE    0     0     0
    ata-INTENSO_AA000000000000001553-part3  ONLINE    0     0     0
    ata-INTENSO_AA000000000000006699-part3  ONLINE    0     0     0
...
 
The real one is on dw-srv-p4, but drws is the test VM I am doing the tests on; however, they should be exactly the same, if I am not mistaken!

The only difference is that one is a VM and the other one is a physical server.
 
Maybe the test is easier in a VM with just truncated files as disks instead of passing through real drives ...
But do you see that you have no "mirror-0" line in your zpool status output?
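As a concrete comparison of what that means: a striped rpool (made with "zpool add") shows the disks as separate top-level vdevs with no mirror-0 line, roughly like this hypothetical output:
Code:
NAME                STATE  READ WRITE CKSUM
rpool               ONLINE    0     0     0
  ata-DISK_A-part3  ONLINE    0     0     0
  ata-DISK_B-part3  ONLINE    0     0     0
whereas a mirrored pool shows both disks indented under a mirror-0 line, as in the output posted above.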
 
What do you mean? Which files do you want me to truncate?
 
Code:
cd /space/anywhere
truncate -s 10g vdisk1
truncate -s 10g vdisk2
zpool create tank mirror /space/anywhere/vdisk1 /space/anywhere/vdisk2
zpool status
  pool: tank
 state: ONLINE
config:
        NAME                        STATE  READ WRITE CKSUM
        tank                        ONLINE    0     0     0
          mirror-0                  ONLINE    0     0     0
            /space/anywhere/vdisk1  ONLINE    0     0     0
            /space/anywhere/vdisk2  ONLINE    0     0     0
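To practice the actual problem from this thread (a single-disk pool extended to a mirror), the same file-backed approach works - a sketch reusing the two files from above:
Code:
# throw away the mirror test pool and start from a single-disk pool, like the original rpool
zpool destroy tank
zpool create tank /space/anywhere/vdisk1

# attaching the second file next to the first one turns the single vdev into mirror-0
zpool attach tank /space/anywhere/vdisk1 /space/anywhere/vdisk2
zpool status tank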
 
Within your test VM you first have to copy the partition table before "zpool attach rpool /dev/sda3 /dev/sdb3" !!
Just to be sure about the "cp": I copied the full disk /dev/sda to /dev/sdb - is this correct, or should I only copy /dev/sda3 to /dev/sdb3?
 
The "copy" of partions 1+2 and ready to boot from are done by proxmox-boot-tool, see Doku link before.
Partion 3 ist zfs and is done with the zpool cmd.
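The admin guide linked above replicates only the partition table (not the whole disk) with sgdisk - a sketch assuming sda is the healthy disk and sdb the new one:
Code:
# copy only the partition table from sda to sdb
sgdisk /dev/sda -R /dev/sdb
# give the new disk fresh random GUIDs so they don't collide with sda's
sgdisk -G /dev/sdb
After that, partitions 1+2 get the proxmox-boot-tool treatment sketched earlier, and partition 3 is attached with zpool.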
 
What is the output from zpool status rpool?
[screenshot: zpool status rpool output]
That was not in my two steps. Also, again, please use /dev/disk/by-id/... instead of /dev/sd....
I only see 1 disk by its ID:
[screenshot: /dev/disk/by-id/ listing showing only one disk]
What exact command do you run and what exact error do you get?
The error message for this command is different now, since I have wiped the disk with "wipefs -af /dev/sdb3":
[screenshot: zpool attach error message]

Previously it said something like: "attach" can't be done, you need to correct it manually.
 
The truncate test works fine - then what?

 
Looks like you are doing this in a VM? Is this for practice, or did things really go wrong somewhere in this long thread? I'm assuming the former.
I guess you have to wait, or maybe reboot, until the vdev you removed in step 1 is really unused. I don't think it's a good idea to run wipefs while ZFS is still moving data from it to the rpool (sda3).
None of this is Proxmox-specific, and maybe there are ZFS guides about this on the internet that might help with the details.

EDIT: If this is indeed for practice, maybe start again without using wipefs.
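One way to avoid racing the removal (a sketch; zpool wait requires OpenZFS 2.0 or newer):
Code:
# while data is still being evacuated, status shows "Removal of vdev ... in progress"
zpool status rpool

# block until the device removal has actually finished
zpool wait -t remove rpool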
 
