Homeserver migration

croccodillo

New Member
Oct 4, 2023
Hello all,

My name is Giovanni, I'm from Italy.
I'd like to upgrade my homeserver from Debian to Proxmox VE (basically to switch my MDADM RAID to ZFS and gain more data resilience).
My current configuration is:

- Debian (kernel 6.1.0)
- Intel i3-6100 on an ASUS motherboard (can't recall the exact model right now)
- 16 GB RAM
- A single 256 GB NVMe disk for the system
- One MDADM RAID 1 set, composed of two 6 TB WD Red Plus HDDs
- One MDADM RAID 1 set, composed of two 4 TB WD Red Plus HDDs
- The two arrays above are shared via Samba (since I have Windows, Android, and Apple devices on the network)
- A few services running on top: MiniDLNA, Transmission, Navidrome, Nextcloud
- The server is headless; I control it via SSH and Webmin.

As mentioned, I'd like to migrate my MDADM arrays to ZFS pools; in the meantime, I'd also like to move my services into VMs or containers, so I can learn a few new things about virtualization (I've never done it before).

There are two possibilities: install ZFS on top of the running Debian, or install PVE from scratch, replacing the existing Debian installation.
I prefer the second solution.

Well, I know my questions have been asked many times before, but I found it difficult to find answers that fit my situation, so I will ask them again:

1. How do I migrate the MDADM arrays to ZFS pools without losing my data?
Can I install PVE and then access my MDADM arrays (do I have to install MDADM into PVE?) before converting them to ZFS?
Or would it be better to install ZFS on the currently running Debian, convert the MDADM arrays into ZFS pools there, install PVE, and import the previously created ZFS pools?
Can PVE detect ZFS pools made on another installation and import them?

I know I can remove one of the drives from each array (running it in degraded mode), create a single-drive ZFS pool with it, move the data from the degraded MDADM array to the ZFS pool, and then add the second disk to the pool; do you have any detailed instructions or a manual?

What about shares?
Do you prefer to share the ZFS pools directly from PVE (zfs set sharesmb=on ...) or to bring up a container with a running copy of Samba and bind the ZFS pools to it?

How do I create different containers/VMs that can access the ZFS pools (Transmission, MiniDLNA, Navidrome)?
I mean, can I bind the pools to different containers?

Or do I have to create a share and then make the VMs access such a share?

I know there are a lot of questions...

Thank you all in advance.

Regards,
Giovanni
 
1. How do I migrate the MDADM arrays to ZFS pools without losing my data?
You need at least 6 TB of additional storage so you can back up your mdadm arrays, destroy them, create empty ZFS mirrors, and restore those files from the backup to the ZFS pools.
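As a rough sketch of that workflow (device names, pool names, and mountpoints below are made up; check yours with lsblk and /proc/mdstat):

    # Copy everything from the mounted array to the backup disk
    rsync -aHAX --info=progress2 /srv/big/ /mnt/backup/big/

    # Stop the array and wipe the mdadm metadata from its members
    umount /srv/big
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sda1 /dev/sdb1

    # Create the ZFS mirror; by-id names are more robust than /dev/sdX
    zpool create -o ashift=12 tank6 mirror \
        /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

    # Restore the data
    rsync -aHAX --info=progress2 /mnt/backup/big/ /tank6/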

Can I install PVE and then access my MDADM arrays (do I have to install MDADM into PVE?) before converting them to ZFS?
Mdadm isn't officially supported by PVE, but it works fine out of the box when managed from the CLI.

Or would it be better to install ZFS on the currently running Debian, convert the MDADM arrays into ZFS pools there, install PVE, and import the previously created ZFS pools?
PVE7 needs Debian 11 and PVE8 needs Debian 12. I would install a fresh PVE8 from the PVE ISO, then manually mount those mdadm arrays, back them up, destroy them, create ZFS mirrors, and move the data back. There is no way to convert an mdadm array to ZFS without temporarily moving all the data to backup storage.
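On a fresh PVE install, assembling the old arrays for that backup step could look something like this (the md device name will vary, check /proc/mdstat):

    apt update && apt install mdadm
    mdadm --assemble --scan        # detect and assemble existing arrays
    cat /proc/mdstat               # verify both arrays came up
    mkdir -p /mnt/md0
    mount /dev/md127 /mnt/md0      # device name may differ on your system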

Can PVE detect ZFS pools made on another installation and import them?
Yes, as long as the pools weren't created with an OpenZFS version newer than the one PVE ships with. See the "zpool import" command.
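For example (pool name made up):

    zpool import            # lists pools found on the attached disks
    zpool import tank6      # imports a pool by name
    zpool import -f tank6   # force it if the pool wasn't exported cleanly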

I know I can remove one of the drives from each array (running it in degraded mode), create a single-drive ZFS pool with it, move the data from the degraded MDADM array to the ZFS pool, and then add the second disk to the pool; do you have any detailed instructions or a manual?
Yes, this should work, but I wouldn't do it without having a proper backup; if something goes wrong, your data will be lost. You should have a backup of all your data anyway... RAID is no replacement for a proper external backup.
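If you do go the degraded-array route, the first phase could look roughly like this (again, device and pool names are placeholders):

    # Drop one member from the mirror; the array keeps running, degraded
    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
    mdadm --zero-superblock /dev/sdb1

    # Build a single-disk pool on the freed drive
    zpool create -o ashift=12 tank6 /dev/disk/by-id/ata-DISK_B

    # Copy everything from the degraded array to the new pool
    rsync -aHAX --info=progress2 /srv/big/ /tank6/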

What about shares?
Do you prefer to share the ZFS pools directly from PVE (zfs set sharesmb=on ...) or to bring up a container with a running copy of Samba and bind the ZFS pools to it?
PVE doesn't come with any NAS functionality. If you are experienced with managing users/shares/ZFS from the CLI, then I would let the PVE host share the data. If not, I would pass the 4 HDDs through to a TrueNAS VM and create one 10 TB striped mirror (RAID 10) with those 4 disks at the guest OS level. Bind-mounting the datasets from the PVE host into an LXC running something like Webmin would work too, but then you will be missing some ZFS features, like shadow copy versioning when using SMB.
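If you go the "share from the PVE host" route, a minimal Samba setup could look like this (share name and user are just examples; the Unix user must already exist):

    apt install samba
    # Append a share definition to /etc/samba/smb.conf:
    #   [media]
    #       path = /tank6/media
    #       valid users = giovanni
    #       read only = no
    smbpasswd -a giovanni
    systemctl restart smbd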

How do I create different containers/VMs that can access the ZFS pools (Transmission, MiniDLNA, Navidrome)?
I mean, can I bind the pools to different containers?
Yes, but only by bind-mounting to containers, not to VMs. And with ZFS managed on the PVE host, the containers and VMs won't be able to use any ZFS features themselves.
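A bind-mount from the host into a container is a one-liner (container ID and paths are illustrative):

    # Expose the host dataset /tank6/media inside container 101 at /mnt/media
    pct set 101 -mp0 /tank6/media,mp=/mnt/media

Keep in mind that for unprivileged containers you'll also have to sort out the UID/GID mapping so the container's users can actually write to the dataset.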

Or do I have to create a share and then make the VMs access such a share?
Yes, VMs can only access those datasets via SMB/NFS.
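Inside the VM that is then just a normal network mount, e.g. (hostname and user made up):

    apt install cifs-utils
    mkdir -p /mnt/media
    mount -t cifs //pve.example.lan/media /mnt/media -o username=giovanni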
 
Hello Dunuin,

Thank you for your answer.

Yes, this should work, but I wouldn't do it without having a proper backup; if something goes wrong, your data will be lost. You should have a backup of all your data anyway... RAID is no replacement for a proper external backup.
You are right.
Of course I have a full backup; I have a second server (offsite, a Debian install with ZFS, a single two-disk ZFS mirror pool) that spins up once a day and runs an rsync against the main server. After that, it turns off.
So there's no real risk involved in what I proposed.
The idea was to avoid copying all of my data (a lot of it) back over the LAN from the second server; moving one disk into a ZFS pool and then copying from the old, degraded MDADM array should be faster, since it will run at SATA speed.

You need at least 6 TB of additional storage so you can back up your mdadm arrays, destroy them, create empty ZFS mirrors, and restore those files from the backup to the ZFS pools.
This is a good idea; I will check if I can find a suitable HDD for cheap (I could use it as an offline USB backup later).

Mdadm isn't officially supported by PVE, but it works fine out of the box when managed from the CLI.

Do you mean I can install it via apt?

PVE7 needs Debian 11 and PVE8 needs Debian 12. I would install a fresh PVE8 from the PVE ISO, then manually mount those mdadm arrays, back them up, destroy them, create ZFS mirrors, and move the data back. There is no way to convert an mdadm array to ZFS without temporarily moving all the data to backup storage.
I will do it this way.

PVE doesn't come with any NAS functionality. If you are experienced with managing users/shares/ZFS from the CLI, then I would let the PVE host share the data. If not, I would pass the 4 HDDs through to a TrueNAS VM and create one 10 TB striped mirror (RAID 10) with those 4 disks at the guest OS level. Bind-mounting the datasets from the PVE host into an LXC running something like Webmin would work too, but then you will be missing some ZFS features, like shadow copy versioning when using SMB.
I'm quite experienced; I've been running my Linux server for 10 years now, and it is in its third revision.
My main desktop/laptop is a Linux machine, too.
 
This is a good idea; I will check if I can find a suitable HDD for cheap (I could use it as an offline USB backup later).
You are right.
Of course I have a full backup; I have a second server (offsite, a Debian install with ZFS, a single two-disk ZFS mirror pool) that spins up once a day and runs an rsync against the main server.
Then you could indeed break the RAID arrays and go from mdadm RAID 1, to degraded mdadm + single-disk ZFS, to no mdadm + ZFS mirror (see the "zpool attach" command) without an additional disk.
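The final step, once the copy is verified, would be something like this (placeholder names again):

    # Retire the old array and free its last disk
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sda1

    # Attach it to the single-disk pool; ZFS resilvers it into a mirror
    zpool attach tank6 /dev/disk/by-id/ata-DISK_B /dev/disk/by-id/ata-DISK_A
    zpool status tank6    # watch the resilver progress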

Do you mean I can install it via apt?
It comes preinstalled if I remember right; if not, it's easy to add via apt. I was running an mdadm mirror with PVE6 and PVE7. It's just not recommended, and neither the PVE installer nor the web UI covers mdadm. But you can still manage it from the CLI as you would on any Debian (PVE is basically Debian with additional packages and a custom Ubuntu LTS kernel).

By the way... with both servers running ZFS, I would use "zfs send | zfs recv" (ZFS replication) instead of rsync. It's way faster because it can sync at the block level, and it also syncs the snapshots, so versioning is kept intact.
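A manual version of that replication could look like this (pool, dataset, and host names made up; tools like sanoid/syncoid automate the whole cycle):

    # First full sync of a dataset to the backup server
    zfs snapshot tank6/media@sync1
    zfs send tank6/media@sync1 | ssh backupserver zfs recv -u backuppool/media

    # Later runs only send the delta between the two snapshots
    zfs snapshot tank6/media@sync2
    zfs send -i @sync1 tank6/media@sync2 | ssh backupserver zfs recv -u backuppool/media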
 
