Hi,
I've just been laid off as the company I worked for is closing permanently (thanks, COVID). As part-settlement of some outstanding business expenses, I've come home with some kit from the office, among which is a Dell R730XD server. I thought it might make a nice home project to virtualise the home servers and other bits I already run, and to properly segregate IoT devices and the like.
In a past life I'd have just merrily installed ESXi, as it was free for this socket arrangement and there's a Dell-customised image to download, but I thought I'd look at other options since it's not a business decision this time around. I remembered looking at Proxmox (and XCP/XCP-ng) a few years ago, but we always ended up in the relative safety of VMware. I'm determined to broaden my horizons this time, as the flexibility appeals greatly. Despite reading and watching a myriad of videos and documents, I confess to being a little lost as to how to configure the server post-install.
The server itself has 28 disks in total, handled by a PERC H730P RAID controller. The first two are SSDs, currently set up as a RAID-1 mirror with a first stab at a Proxmox install on them. The rest are identical 1.2 TB SAS drives. There seems to be lots of talk of using ZFS, but having barely got to grips with ext4, I'd prefer to stick with something I know before throwing myself at the mercy of apparently having to flash the RAID controller and learn an entirely new file system, so I'd rather stick with good old-fashioned hardware RAID. Normally I'd just create a few virtual disks (RAID-1/5/10) depending on what they were going to be used for, but I'm not clear if that's the right approach here.
Primarily I'll be creating standalone virtual machines, maybe half a dozen or so, but I'd also like to have a play around with the built-in containerisation, as I've never really done anything with it. From what I gather, ISO images and LXC templates apparently need to be stored on different disk(s) from the VMs and/or containers? Should I be looking at something like: a RAID-1 virtual disk for the Proxmox hypervisor, a RAID-1 virtual disk for the ISOs, a RAID-1 virtual disk for the templates, a RAID-10 virtual disk for the VMs, and a separate RAID-10 virtual disk for the containers? Or is that wrong, and should I be sticking everything except Proxmox itself on one giant RAID-10 virtual disk? Or some combination of both? Help!
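(For what it's worth, the /etc/pve/storage.cfg examples I've seen in the documentation seem to suggest a single storage can hold several content types at once, so maybe separate disks aren't strictly required? This is just a sketch of what I think the installer defaults look like, if I've read the docs right:)

```
# /etc/pve/storage.cfg -- installer defaults, as I understand them

# A directory storage can hold ISOs, container templates and backups together
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# VM disk images and container root filesystems live on the LVM-thin pool
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
```

So presumably whatever RAID virtual disks I end up creating could each be added as one storage with the content types I want, rather than needing one virtual disk per content type? Happy to be corrected on that.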
I'll undoubtedly run headlong into similar problems when it comes to implementing teamed NICs and VLANs, but I'll save that delight for another day.
Thanks for your patience.
Nigel