I've got access to some USFF desktops and want to use them as Poor-Mans-Blades or "Plates".
TL;DR at the End
Explanation of why they are interesting:
Some comparisons [Passmark score / TDP in W]:
HPE ProLiant MicroServer:
- Gen7 N40L AMD Turion 602/25W
- Gen8 i3-3240 2295/55W
- Gen8 E3-1220Lv2 2452/17W
- Gen10 AMD Opteron X3421 3375/35W
- Gen10+ E-2224 @ 3.40GHz 7268/71W
1-liter-class USFF / 100-300€ price range with the CPUs below in use:
- i5-3470T 2955/35W
- i5-4570T 3167/35W
- i3-7100T 3800/35W (ECC yes)
- i3-6300T 4029/35W
- i5-6500T 4758/35W
- i5-7500T 5272/35W
- i3-8100T 5303/35W (2x32GB DDR4-2400, ECC yes?)
- i7-6700T 7233/35W
- Intel vPro is often included for i5/i7
- incredibly small
- stack vertically like "Plates"
- price
Limitations:
- only 2 disks: 1x M.2/PCIe & 1x 2.5"
- no ECC RAM
- single GbE NIC
- refurbished/old
Therefore I was thinking about which disks I should put in them.
Drawback: no redundancy is possible, neither with mdadm/RAID1/ext4 nor with a ZFS mirror.
I was thinking of mixing ext4 and ZFS on the NVMe so that the fast NVMe drive is also available for guests.
Code:
HP Poor-Mans-Blade Setup

/dev/nvme0n1   500GB NVMe (WDS500G1R0C, 55€)
/dev/sda       1TB HDD (WD10JFCX, 75€) or 1TB SSD (WDS100T1R0A, 95€)

/dev/nvme0n1p1        /boot   ext4   1G      Linux filesystem
/dev/nvme0n1p2                lvm2   25G     Linux LVM PV for VG "pve" (alloc 15G / free 10G)
/dev/mapper/pve-root  /       ext4   3G      LV root (used 1.1G)
/dev/mapper/pve-usr   /usr    ext4   4G      LV usr  (used 2.6G)
/dev/mapper/pve-var   /var    ext4   4G      LV var  (used 1.7G)
/dev/mapper/pve-swap  swap           4G      LV swap
/dev/nvme0n1p5                       475G    ZFS vdev
/dev/sda1                            1000G   ZFS vdev

zpool0  nvme0n1p5 (no mirror)
zpool0  dataset "pve-data" mounted at /var/lib/vz/
zpool1  sda1 (no mirror)

Datasets for the LXC guests on zpool0
PBS: mount point on zpool0 as local storage
GlusterFS: replication on zpool0
GlusterFS: share on zpool1 as storage for the other PVE/PBS nodes
DB: replication done via PostgreSQL/MariaDB on zpool1
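For reference, a rough sketch of how the LVM and ZFS parts of this layout could be created; the partitions themselves (nvme0n1p1/p2/p5, sda1) are assumed to already exist (e.g. created with fdisk or sgdisk), and pool/dataset names are just the ones from the plan above.

Code:
# Sketch only - partitions are assumed to exist already (fdisk/sgdisk).
# LVM part of the NVMe: one PV, VG "pve", the four LVs from the plan
pvcreate /dev/nvme0n1p2
vgcreate pve /dev/nvme0n1p2
lvcreate -L 3G -n root pve && mkfs.ext4 /dev/pve/root
lvcreate -L 4G -n usr  pve && mkfs.ext4 /dev/pve/usr
lvcreate -L 4G -n var  pve && mkfs.ext4 /dev/pve/var
lvcreate -L 4G -n swap pve && mkswap /dev/pve/swap

# ZFS part: one single-vdev pool per disk, built on partitions instead of whole disks
zpool create -o ashift=12 zpool0 /dev/nvme0n1p5
zpool create -o ashift=12 zpool1 /dev/sda1
zfs create -o mountpoint=/var/lib/vz zpool0/pve-data

Using the stable /dev/disk/by-id/ names for the ZFS partitions instead of the kernel names is a bit more robust across reboots and disk swaps.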
Each "Plate" should run:
- PVE as Host
- PBS/LXC on Pool2
- LXC with GlusterFS replication across the Plates for static web server files, replicated HAProxy configs/certs, etc. (see the sketch after this list)
- GlusterFS storage from the other Plate for PBS, so each backup lands on both the local and the GlusterFS storage.
- MariaDB/PostgreSQL Slave Replication
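As a rough illustration of the GlusterFS part mentioned above, a sketch with made-up hostnames plate1/plate2 and a brick dataset on zpool0; a plain replica-2 volume is prone to split-brain, so an arbiter brick on a third box would be worth considering:

Code:
# Hypothetical hostnames (plate1/plate2) and paths - adjust to the real setup.
# On both plates: a ZFS dataset on zpool0 holds the brick
zfs create zpool0/gluster
mkdir -p /zpool0/gluster/webdata

# On plate1 only:
gluster peer probe plate2
gluster volume create webdata replica 2 \
    plate1:/zpool0/gluster/webdata plate2:/zpool0/gluster/webdata
gluster volume start webdata

# Mount it wherever the static web server files / HAProxy configs live:
mount -t glusterfs plate1:/webdata /mnt/webdata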
So ZFS could also provide an rpool for the host OS to boot from.
I'm unsure whether I should stick to fdisk/LVM/ext4, which has served me well over the last years and where I now have my preferred file system layout, or whether I should put everything into ZFS with all its features.
Adding a spare to my md/RAID5, growing my md/RAID5, or replacing/RMAing an md/RAID5 disk was never an issue. With ZFS it feels like everything has to be thought through at the beginning, and a later change is only possible with a pool rebuild (a small comparison is sketched below).
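For comparison, a sketch of the mdadm operations meant here next to what ZFS offers for an already built pool; /dev/md0 and the sdX/sdY/sdZ names are just placeholders:

Code:
# mdadm side - the operations that were never a problem:
mdadm /dev/md0 --add /dev/sdX                      # add a hot spare
mdadm --grow /dev/md0 --raid-devices=4             # grow the RAID5 onto a new disk
mdadm /dev/md0 --fail /dev/sdY --remove /dev/sdY   # kick out a failing disk
mdadm /dev/md0 --add /dev/sdZ                      # rebuild onto the replacement

# ZFS side - what exists for a pool that is already built:
zpool add zpool0 spare /dev/sdX                    # hot spare for the pool
zpool replace zpool0 /dev/sdY /dev/sdZ             # swap a (failing) disk
zpool attach zpool0 /dev/sdY /dev/sdZ              # turn a single disk into a mirror
# Classic OpenZFS cannot grow a raidz by adding a single disk;
# the layout has to be chosen up front, or the pool rebuilt.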
Are there any pros/cons for the root file system on ext4 compared to a ZFS rpool, or vice versa?
For storage I really want ZFS. My old hardware and mdadm RAIDs are going to be converted to raidz pools accordingly.
For PBS I really see the benefits of incremental backups, putting the whole LXC including its mount points into the backup.
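On the PVE side that boils down to the backup flag on each mount point; a small sketch with a made-up container ID (101), storage name (zpool0-storage) and PBS storage (pbs-local):

Code:
# backup=1 makes vzdump/PBS include this mount point in the container backup
pct set 101 -mp0 zpool0-storage:8,mp=/srv/www,backup=1

# the resulting line in /etc/pve/lxc/101.conf looks roughly like:
#   mp0: zpool0-storage:subvol-101-disk-1,mp=/srv/www,backup=1,size=8G

# back up the container to a PBS datastore configured as storage "pbs-local":
vzdump 101 --storage pbs-local --mode snapshot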
Regarding bitrot detection and healing, there are a few options for a single disk (a short sketch follows after this list):
- Setting "copies=2" is an option for a single-disk zpool. But up to now, bitrot hasn't been an issue for me on my RAID5s.
- Or splitting the disk into two partitions and building the zpool as a mirror of two vdevs on the same disk.
- Or splitting the disk into three partitions and building the zpool from three vdevs on the same disk as a raidz.
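A sketch of those single-disk variants (partition names p5/p6/p7 are made up); all of them only help against bitrot, not against the disk itself dying, and the mirror variant costs half and the raidz variant a third of the capacity, plus extra write amplification:

Code:
# Variant 1: single vdev, store every data block twice
zpool create zpool0 /dev/nvme0n1p5
zfs set copies=2 zpool0

# Variant 2: two partitions on the same disk, mirrored
zpool create zpool0 mirror /dev/nvme0n1p5 /dev/nvme0n1p6

# Variant 3: three partitions on the same disk as raidz1
zpool create zpool0 raidz1 /dev/nvme0n1p5 /dev/nvme0n1p6 /dev/nvme0n1p7

# In all three cases, detection/healing happens on read and during scrubs:
zpool scrub zpool0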
BUT:
It seems there is a long-standing bug where ZFS puts its zfs_member signature on the whole disk instead of only on the partition, confusing blkid:
"critical bug: zpool messes up partition table : zfs_member occupy the whole disk instead of being constrained to a partition"
- https://github.com/openzfs/zfs/issues/9105
- https://gitlab.gnome.org/GNOME/gparted/issues/14
- https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=888114
- https://marc.info/?l=util-linux-ng&m=156491424909123&w=2
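To check whether a given disk is affected, the signatures can be inspected read-only; a sketch using the partition names from the layout above:

Code:
# Does blkid see the zfs_member signature only on the partition (expected),
# or also on the whole disk (the bug described in the links above)?
blkid /dev/nvme0n1 /dev/nvme0n1p5

# wipefs -n only lists signatures, it does not erase anything:
wipefs -n /dev/nvme0n1
wipefs -n /dev/nvme0n1p5

# ZFS's own view of the labels on the partition:
zdb -l /dev/nvme0n1p5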
Is mixing ext4 and ZFS on the same disk in different partitions safe? Has anyone done that before, or is it just common sense and an unwritten law to always assign whole disks to ZFS?
TL;DR:
- mixing ext4 & zfs on a single disk is possible and safe?
- zfs/rpool or lvm/ext4 for PVE root file system, benefits and contras?