Cloning a full SSD drive used for ZFS to a bigger one

Could you run zpool status?
Code:
root@mainsrv:/dev# zpool status
  pool: ssd960
 state: ONLINE
  scan: scrub repaired 0B in 00:41:56 with 0 errors on Sun May 14 01:05:57 2023
config:

        NAME                                          STATE     READ WRITE CKSUM
        ssd960                                        ONLINE       0     0     0
          ata-KINGSTON_SA400S37960G_50026B778439F033  ONLINE       0     0     0

errors: No known data errors
 
And what does zpool import return? Maybe there is an unimported pool, also called "ssd960", that the EVO is already part of?
 
Code:
root@mainsrv:/dev# zpool import
no pools available to import
You can try to clear the 2TB drive (delete all partitions) and then call zpool replace directly, without creating a partition table yourself. ZFS will create a single partition, for ZFS only. Since it's not a boot drive, that works well.
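A minimal sketch of what that could look like, assuming the new 2TB EVO shows up as /dev/sdb (double-check with lsblk before wiping anything):

Bash:
# Wipe any leftover partition table / filesystem signatures from the new disk
# (assumes /dev/sdb is the empty 2TB Samsung EVO -- verify with lsblk first!)
wipefs -a /dev/sdb            # alternatively: sgdisk --zap-all /dev/sdb

# Then let zpool replace partition the disk on its own
zpool replace ssd960 ata-KINGSTON_SA400S37960G_50026B778439F033 ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W213686T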
 
I finally have the 2TB disk as sdb, with no partitions.
Can you help me with the correct
Code:
zpool replace
command so I don't mess things up?
Code:
root@mainsrv:/dev# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 894.3G  0 disk
├─sda1                         8:1    0 894.2G  0 part
└─sda2                         8:2    0     8M  0 part
sdb                            8:16   0   1.8T  0 disk
zd0                          230:0    0   832G  0 disk
└─zd0p1                      230:1    0   832G  0 part
nvme0n1                      259:0    0 476.9G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0   512M  0 part /boot/efi
└─nvme0n1p3                  259:3    0 476.4G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   3.6G  0 lvm
  │ └─pve-data-tpool         253:4    0 349.3G  0 lvm
  │   ├─pve-data             253:5    0 349.3G  1 lvm
  │   ├─pve-vm--100--disk--0 253:6    0   100G  0 lvm
  │   ├─pve-vm--200--disk--0 253:7    0    50G  0 lvm
  │   ├─pve-vm--300--disk--0 253:8    0   150G  0 lvm
  │   └─pve-vm--400--disk--0 253:9    0    32G  0 lvm
  └─pve-data_tdata           253:3    0 349.3G  0 lvm
    └─pve-data-tpool         253:4    0 349.3G  0 lvm
      ├─pve-data             253:5    0 349.3G  1 lvm
      ├─pve-vm--100--disk--0 253:6    0   100G  0 lvm
      ├─pve-vm--200--disk--0 253:7    0    50G  0 lvm
      ├─pve-vm--300--disk--0 253:8    0   150G  0 lvm
      └─pve-vm--400--disk--0 253:9    0    32G  0 lvm
 
Code:
root@mainsrv:/dev# zpool replace -f ssd960 ata-KINGSTON_SA400S37960G_50026B778439F033 ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W213686T
invalid vdev specification
the following errors must be manually repaired:
/dev/disk/by-id/ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W213686T-part1 is part of active pool 'ssd960'

no luck
 
Hmm, I think you need to run zpool labelclear first.

So, run it:

Bash:
zpool labelclear ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W213686T-part1
zpool replace -f ssd960 ata-KINGSTON_SA400S37960G_50026B778439F033 ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W213686T
 
never-ending story :-D

Code:
root@mainsrv:/dev# zpool labelclear ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W213686T-part1
/dev/disk/by-id/ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W213686T-part1 is a member (ACTIVE) of pool "ssd960"

root@mainsrv:/dev# zpool replace -f ssd960 ata-KINGSTON_SA400S37960G_50026B778439F033 ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W213686T
invalid vdev specification
the following errors must be manually repaired:
/dev/disk/by-id/ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W213686T-part1 is part of active pool 'ssd960'
 
I think now we need to call the experts :D I really don't have any idea what's happening :(
 
Well, I wasn't going to give up. In the Proxmox web admin I looked at the attached disks and found that the 2TB drive was still partitioned, so I wiped it from the manager and then ran:
Code:
zpool replace -f ssd960 ata-KINGSTON_SA400S37960G_50026B778439F033 ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W213686T

It seems it worked!
Code:
root@mainsrv:/dev# zpool status
  pool: ssd960
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Jun  8 17:40:22 2023
        95.2G scanned at 10.6G/s, 696K issued at 77.3K/s, 823G total
        0B resilvered, 0.00% done, no estimated completion time
config:

        NAME                                             STATE     READ WRITE CKSUM
        ssd960                                           ONLINE       0     0     0
          replacing-0                                    ONLINE       0     0     0
            ata-KINGSTON_SA400S37960G_50026B778439F033   ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W213686T  ONLINE       0     0     0

errors: No known data errors

Now I will go back to your first explanation and continue from there if something arises.

I owe you a beer!
 

Wow, that's it :)

I usually run watch zpool status -v to monitor the resilvering process :)
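For example (the pool name is from this thread; the 5-second refresh interval is just a preference):

Bash:
# Re-run zpool status every 5 seconds until the resilver finishes
watch -n 5 zpool status -v ssd960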
 
[Attachment 51366 (screenshot)]

Is it done? Not sure what to do next.

Yes, the resilvering process is done.

You can run zpool list to see the current size of your pool.
If it's only 1TB, you can follow my first message in this thread to extend it to 2TB :)
If you can't extend it because of an 8MB partition sitting between the ZFS partition and the free space, you can use something like GParted to move the 8MB partition to the end of the drive, as @Dunuin mentioned, then resize the ZFS partition, and after that resize the zpool :)
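For reference, once the resilver has finished, growing the pool onto the bigger disk usually looks something like this (a sketch using the device name from this thread; if another partition sits between the ZFS partition and the free space, move it out of the way first as described above):

Bash:
# Let the pool grow automatically when a larger device is available
zpool set autoexpand=on ssd960

# Expand the replaced vdev to use the whole 2TB, then check the new size
zpool online -e ssd960 ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W213686T
zpool list ssd960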
 
Great!!! That finally worked and I now have 1.8 TB.

Code:
root@mainsrv:/dev# zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ssd960  1.81T   823G  1.01T        -         -    24%    44%  1.00x    ONLINE  -

Now I will try to figure out how to move all the contents of the old disk to the new one.

Thanks again, GabrielLando!
 

It should already be there :) When you ran zpool replace, it copied everything to the new drive, so all the files should already be on it. If you take a look at the ALLOC column, you'll see you're using 823G.
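If you want to double-check from the dataset side, something like this lists what is stored in the pool (the dataset names depend on your setup):

Bash:
# Show the datasets/zvols in the pool together with their space usage
zfs list -r -o name,used,avail,mountpoint ssd960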
 
No problem :) I'm glad you could fix your issue
Gabriel, I'm seeing something weird now.

After booting one of the VMs (the one that uses this storage), I got a QEMU error because I had physically removed the old 1TB disk. I managed to detach it in the web admin and the VM booted OK. After one hour I found this:

[Attached screenshot: 1686333236310.png]

Inside the VM, if I run:
Code:
fox@ubuntusrv01:/$ lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0                       7:0    0  55.6M  1 loop /snap/core18/2745
loop1                       7:1    0  55.6M  1 loop /snap/core18/2751
loop2                       7:2    0  63.3M  1 loop /snap/core20/1879
loop3                       7:3    0  63.5M  1 loop /snap/core20/1891
loop4                       7:4    0 111.9M  1 loop /snap/lxd/24322
loop5                       7:5    0  53.2M  1 loop /snap/snapd/19122
loop6                       7:6    0  53.3M  1 loop /snap/snapd/19361
sda                         8:0    0   100G  0 disk
├─sda1                      8:1    0     1M  0 part
├─sda2                      8:2    0     2G  0 part /boot
└─sda3                      8:3    0    98G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0    49G  0 lvm  /
sdb                         8:16   0   1.7T  0 disk
└─sdb1                      8:17   0   832G  0 part /media/datadrive
sr0                        11:0    1   1.8G  0 rom

What am I missing?

Also, the data hasn't changed.
 
