Setting up a new install as a newbie

KeyzerSuze

Member
Aug 16, 2024
Hi

I have 3 Dell servers with PERC cards, with SATA + SAS drives.

1) I want a backup server as well. Can I use one of these nodes as a backup server, or should I get another device?

I have set these up as a 3-node cluster. I went to replicate an LXC and I can't, because I am using LVM/LVM-thin - it seems like I need Ceph or ZFS.
I'm not interested in Ceph at the moment - too many new things at once.

So how do I convert to ZFS? I can rebuild - not much has been done yet - but my main issue is that I am using PERC RAID storage, and it looks like I am wasting a lot of space. Plus you can't... shouldn't run ZFS on top of hardware RAID.

One of my servers has:
4 x 1.8T - currently in RAID6 - boot device
3 x 3.7T - configured as:
- 2 x 3.7T as RAID1
- 1 x 3.7T as non-RAID (pass-through) - in the Proxmox UI I can see SMART info for it, and that seems to be the indicator for whether a disk is usable for ZFS or not

So I have:
sdc -> RAID6 - it was the first virtual disk I built, but it comes up as sdc... it's the boot device
sdb -> RAID1 - built second
sda -> non-RAID - I can see SMART info; built last
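
As an aside, the same check works from the shell with standard tools (smartmontools and util-linux, nothing Proxmox-specific); device names here are the ones from my layout above:

    # list disks and how they enumerate
    lsblk -o NAME,SIZE,MODEL,SERIAL

    # SMART only comes through on the pass-through disk
    smartctl -a /dev/sda    # non-RAID pass-through: full SMART info
    smartctl -a /dev/sdb    # PERC virtual disk: little or no SMART data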


I'm thinking I should rebuild:
I have 2 x 900G SSDs - I have to get 2.5" to 3.5" converters to put them in the server.
- Make the 2 x 900G a HW RAID1 and make that my boot device.
- Then make everything else non-RAID and create ZFS pools on top.
I'm thinking:

zpool "skinny":
- 1 x 1.8T + 1 x 1.8T striped (RAID0)
- 1 x 1.8T + 1 x 1.8T striped (RAID0)
- and then mirror the two stripes above,
- or add them all into a raidz1 - allowing me to lose 1 disk

zpool "fat":
- 3 x 3.7T in a raidz1 - again allowing me to lose 1 disk
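
One thing I picked up reading around: ZFS doesn't mirror two existing stripes - the equivalent layout is a pool built from two mirror vdevs, which ZFS stripes across automatically. A rough sketch of what I believe the commands look like (pool names are mine, disk paths are placeholders - apparently you should use /dev/disk/by-id in practice):

    # "skinny" as striped mirrors: two mirror vdevs, ~3.6T usable, survives 1 disk per mirror
    zpool create skinny mirror /dev/sdX /dev/sdY mirror /dev/sdZ /dev/sdW

    # ...or "skinny" as raidz1 across all four: ~5.4T usable, survives any 1 disk
    zpool create skinny raidz1 /dev/sdX /dev/sdY /dev/sdZ /dev/sdW

    # "fat": 3 x 3.7T in raidz1, ~7.4T usable, survives any 1 disk
    zpool create fat raidz1 /dev/sdU /dev/sdV /dev/sdT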


Once I have this, I can then set up replication.


The reason I want replication: I want to make an LXC (TurnKey nginx) highly available on the cluster - basically it's the reverse proxy into the Proxmox cluster. I don't care which node it runs on - I just want it available.

I have set it as a cluster HA resource that needs to be running, but I'm guessing that without replication that's not going to work.
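
If I've understood the docs, once the container sits on ZFS this would be a replication job per target node plus the HA resource - something like the following, assuming the container is CT 100 and the other nodes are called pve2/pve3 (IDs and node names are made up):

    # replicate CT 100 to the other two nodes every 15 minutes
    pvesr create-local-job 100-0 pve2 --schedule '*/15'
    pvesr create-local-job 100-1 pve3 --schedule '*/15'

    # register the container as an HA resource that should always be running
    ha-manager add ct:100 --state started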




Final question/comment: I presume I could also go the Ceph way instead of ZFS - just make a 3-node Ceph cluster and carve out space from there.
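
From the docs, I believe the Ceph route would look roughly like this - each PERC disk set to non-RAID first, then per node (network and device names are placeholders):

    pveceph install                       # on every node
    pveceph init --network 10.0.0.0/24    # once, from the first node
    pveceph mon create                    # on each node, for 3 monitors
    pveceph osd create /dev/sdX           # one per pass-through disk
    pveceph pool create vmpool            # pool to carve space out of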


EDIT:

I did some tests with hdparm -Tt on the non-RAID disk and only get around 15 MB/s, compared to 115 MB/s on a RAID1 device - that's going to make ZFS really slow...
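
For reference, the exact test I ran, plus what I gather is a more realistic benchmark with fio (hdparm -Tt only does a short cached/buffered sequential read):

    # quick-and-dirty sequential read test
    hdparm -Tt /dev/sda

    # longer read-only sequential benchmark on the raw device
    fio --name=seqread --filename=/dev/sda --rw=read --bs=1M \
        --direct=1 --ioengine=libaio --runtime=30 --time_based --readonly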
 
I think that's really my question: how would people set this up?

My initial throughput test of the pass-through disk was pretty dismal: 15 MB/s, whereas a RAID0 gets 110+ MB/s.


EDIT:

I did some testing again - it seems like I can get 400 MB/s on the pass-through. I had a background rebuild going, which might have affected the original testing.

But my underlying question is still the same:
- HW RAID
- HW RAID + Ceph
- ZFS
- or do I build a NAS?

I'm thinking complete rebuilds: remove the HW RAID entirely and just use ZFS...
 