Hello everyone,
I started 2 years ago as an absolute beginner, but am still nowhere near being an IT professional, so please bear with me.
I could really use some help to repurpose my current hardware before making any new purchases.
It would be a shame, since the hardware is rather new (although some of it was bought used). I'd do a lot of things differently if I were starting out again ;-).
Current situation in my homelab:
My 1st PVE node (PVE1) was the start of my homelab 2 years ago:
Code:
Mini PC (MSI Cubi) with
i7-10510U
32 GB non-ECC
1 NIC
1x 2TB NVMe // 1x 1TB SSD, both LVM
Then my idea was to expand my homelab with TrueNAS Scale.
I made it a larger system in order to run some VMs and containers (Kubernetes) on it as well:
Code:
Supermicro X11SCH-LN4F
XEON E-2144G
64 GB ECC
4 NIC // 1 IPMI
2x 256GB SSD Boot-Pool // 2x 10TB HDD Data-Pool (Mirror)
All on onboard-SATA
In essence, I had to realize that TrueNAS Scale sucks as a hypervisor, although I mostly like its containerization. It now runs my PBS as a VM, but apart from that, nothing.
So I developed the idea to turn it into another PVE node (PVE2) with TrueNAS Scale virtualized in a VM (rough command sketch after the list):
1) Buy a PCIe HBA card to pass the data-pool disks through to the TrueNAS VM, use the 2x 256GB SSDs as a ZFS boot pool for Proxmox, and buy a couple of SSDs for ZFS VM storage.
2) Form a cluster with PVE1 and migrate all VMs over to PVE2.
3) Kill PVE1, set it up with local ZFS storage, and have it join the cluster again in order to enable ZFS replication for some or all VMs & LXCs.
4) Optional/Not thought through: Migrate PBS from TrueNAS VM to PVE1.
5) Success!
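In case it helps judge the plan, this is roughly how I picture steps 1) to 3) on the command line. All IPs, VMIDs, pool names, PCI addresses and the schedule are just placeholders, not tested:

Code:
# Step 1, on PVE2: pass the HBA through to the TrueNAS VM
# (PCI address taken from lspci; VMID 101 is made up)
qm set 101 -hostpci0 01:00.0

# Step 2, on PVE2: create the cluster...
pvecm create homelab
# ...and join from PVE1 (IP of PVE2)
pvecm add 192.168.1.10

# Migrate a VM with local disks from PVE1 to PVE2 (VMID 100 as an example)
qm migrate 100 pve2 --online --with-local-disks

# Step 3, after PVE1 is reinstalled with ZFS: replicate VM 100
# back to PVE1 every 15 minutes
pvesr create-local-job 100-0 pve1 --schedule "*/15"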
So you may have noticed a problem with step 3) in my plan.
First of all, I have two different types of disks in PVE1 (a 2TB NVMe and a 1TB SSD), so I would have to swap the SSD for a 2TB one in order to form a ZFS pool for replication, right?
But is building a cluster node for ZFS replication even possible with just two physical disks?
Wouldn't I need a minimum of three disks (one for boot, two as ZFS pool members)?
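To make the question concrete, this is the two-disk layout I could imagine on PVE1, assuming the installer's single-disk ZFS option works the way I think it does (pool and device names are made up):

Code:
# NVMe: ZFS boot pool (rpool), created by the PVE installer as a single-disk "RAID0"
# SSD: separate single-disk pool as the replication target
zpool create -o ashift=12 vmdata /dev/sda
pvesm add zfspool vmdata --pool vmdata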
If I really do need three disks, is there any other clever way to build a 2-node cluster with replication, in order to have something close to live migration?
Worst case, I will build a 2-node cluster without replication and with offline migration only.
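On that note: from what I've read, a 2-node cluster also needs a third vote to keep quorum when one node is down, so I would probably add a QDevice on a spare machine (IP is a placeholder; no idea yet which box would host it):

Code:
# On the external machine (e.g. a Raspberry Pi):
apt install corosync-qnetd

# On both PVE nodes:
apt install corosync-qdevice

# On one PVE node:
pvecm qdevice setup 192.168.1.20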
Thanks in advance to everyone who took the time to read through this!