[SOLVED] Need help reconfiguring proxmox zfs storage drive.

jim.bond.9862

Hello everyone, I have been rebuilding my home server to the latest Proxmox 7.4 on ZFS.
While everything seemed to progress OK, I made a little mistake when setting up the local storage. I wanted to use a ZFS pool for it, but I set the pool up wrong: I wanted a mirror pool but created a 2-disk striped pool instead. I almost had a disaster when one of the disks began to fail. I recovered the pool and replaced the disk, but in the process I also discovered the mistake.
How can I fix this in a non-destructive way?
The original pool was on two 4 TB disks and is now an 8 TB pool,
but I only use 2 TB of it.

I have two more 4 TB disks.
How do I move all of the data from my current zfs-local pool onto a new mirror pool that I can create, and then replace the original pool with the new one?
 
You could use "zpool remove" to turn that 2 disk stripe into a single disk pool, then clone the partition table and copy the bootloader similar like described at "replace a failed bootable device" here and then use "zpool attach" with the 3rd partition to turn that single disk pool into a mirror.
If you want to be sure you can't screw something up, you could use "dd" or clonezilla to backup those 2 disks in use to the 2 unused disks first.
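In rough commands that could look like this (the disk ids are placeholders, "rpool" assumes a standard bootable PVE install, and the sgdisk/proxmox-boot-tool steps follow the "replace a failed bootable device" procedure from the linked guide):

# evacuate the second top-level vdev; ZFS copies its data onto the remaining disk
zpool remove rpool /dev/disk/by-id/ata-DISK2-part3
# once removal has completed, clone the partition table from the remaining
# bootable disk onto the freed disk and randomize the GUIDs
sgdisk /dev/disk/by-id/ata-DISK1 -R /dev/disk/by-id/ata-DISK2
sgdisk -G /dev/disk/by-id/ata-DISK2
# reinstall the bootloader on the freed disk's ESP (2nd partition)
proxmox-boot-tool format /dev/disk/by-id/ata-DISK2-part2
proxmox-boot-tool init /dev/disk/by-id/ata-DISK2-part2
# attach the 3rd partition to turn the single-disk pool into a mirror
zpool attach rpool /dev/disk/by-id/ata-DISK1-part3 /dev/disk/by-id/ata-DISK2-part3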
 
You could use "zpool remove" to turn that 2 disk stripe into a single disk pool, then clone the partition table and copy the bootloader similar like described at "replace a failed bootable device" here and then use "zpool attach" with the 3rd partition to turn that single disk pool into a mirror.
If you want to be sure you can't screw something up, you could use "dd" or clonezilla to backup those 2 disks in use to the 2 unused disks first.
Ohh, I am sorry, let me clarify.
I am not talking about the system drive.

My setup consists of:
2x 240 GB SSDs for the system disk. Those are fine; they are set up as a mirror and work.
I also have a ZFS pool that I use for local data. I don't use a local directory; I use the pool in question for all local system use: backups, ISOs, templates, and container volumes.
That is the pool I need to fix.
It is a regular data zpool, no boot or anything.
I also use several local pools to store my data. I mount the pools into containers to manage the data.
 
Interesting. Can I go a different way, though?
My current pool is named pvelocal.
Can I create a new pool with my two other 4 TB disks and call it pvelocalnew,
send/receive the data from the old pool to the new pool (I only have 2 TB on it),
then export both pools and reimport the new pool as pvelocal?
Will Proxmox pick up the new pool in place of the old one and recognize it as a valid zfs-local device?

I am not sure I want an 8 TB local store right now; I'd rather have 2 spare disks.
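Something like this is what I have in mind (the disk ids are placeholders for my two spare 4 TB drives):

# create the new mirror pool from the two spare disks
zpool create pvelocalnew mirror /dev/disk/by-id/ata-NEWDISK1 /dev/disk/by-id/ata-NEWDISK2
# snapshot everything recursively and replicate it to the new pool
zfs snapshot -r pvelocal@migrate
zfs send -R pvelocal@migrate | zfs recv -F pvelocalnew
# swap the names: export both, then reimport the new pool under the old name
zpool export pvelocal
zpool export pvelocalnew
zpool import pvelocalnew pvelocal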
 
Hi Dunuin, you are right, I don't understand you. Sorry.

I read through the links, and I see that you can convert raid0 to raid1 in a single-disk pool, or raid0 to raid10 in a multiple-disk pool. But I don't see how to convert my existing raid0 into a mirror without destroying the pool, thus losing data.

My pool setup now is

root@atlas:~# zpool status PveLocal
  pool: PveLocal
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 01:48:34 with 1 errors on Wed Apr 12 23:21:12 2023
config:

        NAME                                          STATE     READ WRITE CKSUM
        PveLocal                                      ONLINE       0     0     0
          ata-WDC_WD40EZAZ-00SF3B0_WD-WX42DB1H9VU8    ONLINE       0     0     4
          ata-WDC_WD40EZAZ-00SF3B0_WD-WX22D411CREH    ONLINE       0     0     0

errors: 1 data errors, use '-v' for a list
root@atlas:~#
See? raid0 (stupid, stupid, stupid).

I have two more 4 TB disks that I can use.

Now, based on what I read in the links you provided, I can attach those extra disks to this pool and somehow convert it into raid10. But the ZFS docs say you cannot decrease the size of a pool without rebuilding it, so would removing disks not work?
Or am I missing something?
In the end I want a 4 TB mirror pool.
 
But the ZFS docs say you cannot decrease the size of a pool without rebuilding it, so would removing disks not work?

zpool remove [-npw] pool device…
Removes the specified device from the pool. This command supports removing hot spare, cache, log, and both mirrored and non-redundant primary top-level vdevs, including dedup and special vdevs.
So you can remove single-disk or mirrored vdevs, as long as the criteria are met: enough empty space to move all the data around, no raidz1/2/3 vdevs in use, all data structures unlocked in case encryption is used, all vdevs using the same sector size, and so on.
But I am not sure whether it could be problematic that your data is already damaged.
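For your data pool (no bootloader involved) a rough sketch would be the following; the removal runs in the background, and the disk only becomes free once "zpool status" reports it as completed ("ata-NEWDISK1" stands in for one of your spare 4 TB disks):

# evacuate the striped disk that showed the checksum errors;
# its data gets copied onto the remaining disk
zpool remove PveLocal ata-WDC_WD40EZAZ-00SF3B0_WD-WX42DB1H9VU8
# watch the evacuation progress
zpool status PveLocal
# afterwards, attach a spare disk to end up with a 4 TB mirror
zpool attach PveLocal ata-WDC_WD40EZAZ-00SF3B0_WD-WX22D411CREH ata-NEWDISK1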

Otherwise, try "zfs send | zfs recv" to move the data between two pools:
https://docs.oracle.com/cd/E18752_01/html/819-5461/gbchx.html
 
Thanks, I believe I can delete the damaged files. It looks like an old movie backup folder. I was copying some data from an older pool when SHTF, so it is most likely some old damaged movie file that I can safely delete. I also have an external drive I can try to copy the whole backup folder to, and then just dump the damaged folder.
Will see.
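For reference, my plan for the cleanup, assuming the errored file really is expendable (the path below is just an example of what the -v output might show):

# list the exact files affected by the checksum errors
zpool status -v PveLocal
# delete the damaged file reported there, e.g.:
rm "/PveLocal/backup/old-movie.mkv"
# then scrub and clear the error counters
zpool scrub PveLocal
zpool clear PveLocal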
 
An update on this issue.
I have been reading through the links kindly provided by Dunuin, and I realized that my knowledge of ZFS capabilities is quite outdated. Apparently we now have the ability not only to add disks and expand a mirror pool, as we could a few years back, but also to remove vdevs and shrink a mirrored pool.
Good to know.
 
Want to say thanks for all the help.
I ended up converting the 2-disk raid0 into a 4-disk raid10 on the pool. So far all is working.
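For anyone who finds this later, the conversion itself was just one "zpool attach" per existing disk, along these lines (the new disk ids are placeholders for my two spare 4 TB drives):

# attach a new disk to each existing single-disk vdev, turning each
# stripe member into a 2-way mirror (raid10 overall)
zpool attach PveLocal ata-WDC_WD40EZAZ-00SF3B0_WD-WX42DB1H9VU8 ata-NEWDISK1
zpool attach PveLocal ata-WDC_WD40EZAZ-00SF3B0_WD-WX22D411CREH ata-NEWDISK2
# wait for the resilver to finish
zpool status PveLocal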
 
