First off Specs -
CPU - Intel 5820K (Haswell-E)
RAM - 64GB
Motherboard - Asus X99 Deluxe (1st edition)
Video Card 1 - Nvidia Quadro FX 580 (PCI Express Slot 1)
Video Card 2 - Sapphire Nitro R9 380 (PCI Express Slot 3)
Keyboard 1 - Logitech K400 (KB w/ Trackpad built in)
Keyboard 2 - Corsair K70
Mouse - Logitech G502
Several Monitors...
9x 128GB SSDs (1x for Proxmox/ISOs, 8x for ZFS RAID 0... Yes.. I know...)
1x 240GB SSD (Running Windows 10 as a fallback drive disabled in the BIOS)
Bios Configuration -
CSM - Auto (all subsections set to EFI first)
Intel VT-x - Enabled
Intel VT-d - Enabled
ACS Control - Enabled (I don't know if this is required but...)
Proxmox VE version: 4.3
----------------- Possibly skip this section -----------------
This is my production machine, so I need to have a good fallback in case this takes a whole weekend...
The idea is to disable specific drives in the BIOS so that I can flip back and forth without Proxmox touching Windows and without Windows touching Proxmox (think disaster recovery, or if my fancy ZFS pool craps the bed).
Proxmox will be installed to and run from the Samsung drive connected to SATA Port 1
A ZFS pool will be created to run on the drives running on SATA Ports 2-9
Windows will run from the Intel Drive connected to SATA Port 10
To set this up correctly I did the following:
Installed all of my SSDs, taking note of where everything is plugged in. I know my Intel SSD is plugged into SATA10. I went into the BIOS and disabled SATA Ports 1-9. I then booted off my trusty Windows 10 USB key and did a standard install of Windows 10...
Once Windows was installed and running again...
Back to the BIOS!
Re-enable SATA Ports 1-9 (note: at this point all drives are enabled)
Boot back into Windows
Bonus, because I'm lazy... All my SSDs had data on them, and from experience it's just easier for me to wipe them using Diskpart in Windows... You can skip this part if your drives are already blank...
Use Diskpart to wipe all of the Samsung SSDs. From an admin command prompt:
Code:
diskpart
list disk
Note the disk numbers of your non-Intel drives, then for each Samsung drive:
Code:
select disk X
clean
Repeat until all of the Samsung drives are wiped.
DON'T skip this step: go to Device Manager and DISABLE all of the Samsung drives. (My logic here is that I don't want Windows to even think about touching my ZFS pool.)
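If you'd rather not click through Device Manager, Diskpart can also take the disks offline and mark them read-only from the same admin prompt. A sketch with the same goal (the disk number is whatever list disk showed for each Samsung drive):

```
diskpart
select disk 1
offline disk
attributes disk set readonly
rem repeat for each Samsung disk
```

Either way, the point is that Windows has no business writing to these drives again.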
----------------- Install Proxmox -----------------
So let me start off with a disclaimer... I suck at working with Linux...
This is probably me being anal, but it is important to me (especially with so many drives) to know exactly which drive(s) contain operating systems. Because of this (and because it burned me previously), I go into the BIOS and disable ALL drives except the one I am installing an operating system on. In this case we're installing Proxmox on SATA Port 1.
Go back into the BIOS
Disable SATA Ports 2-10
Install Proxmox using whatever installation media you have (in my case a USB Key)
Note here that the Proxmox GUI installer can't create a ZFS pool with more than 8 drives (this is why I disabled the drives... I'll just create the pool manually in a few minutes anyway).
I just nexted all the way through the installer (obviously you need to set a root password etc...)
ProTip: You will be using the console a LOT... Make sure you a.) remember the root password you set and b.) give yourself a static IP that you can remember easily...
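For reference, the static IP you set in the installer ends up in /etc/network/interfaces on the Proxmox host, so you can change it later without reinstalling. A sketch of what that file typically looks like on Proxmox 4.x (the addresses and NIC name here are examples, not my real ones):

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
```
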
Once Proxmox is installed and you are about to reboot, take a moment to go back into the BIOS and re-enable drives 2-9 (note here that my Windows SSD is on Port 10 and will remain DISABLED from this point forward)
From here on out we're going to use PuTTY (or whatever your preferred terminal software is...) from another computer...
SSH to your Proxmox host and log in with your root credentials
Maybe take a moment to create another account and use sudo for any commands that require elevation (I'm lazy and just used root...)
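If you do want a non-root account, a minimal sketch of setting one up with sudo on the Debian base (the username is just an example):

```shell
# install sudo, create a user, and put it in the sudo group
apt-get update && apt-get install -y sudo
adduser pveadmin
usermod -aG sudo pveadmin
```

After that you can log in as that user and prefix elevated commands with sudo instead of working as root.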
----------------- ZFS Pool Configuration -----------------
Ok, first off, you would think that /dev/sda = SATA1, /dev/sdb = SATA2, etc... Well, apparently that isn't the case... Let's figure out which drives you can use in your ZFS pool...
Commands:
Code:
lsblk
zpool create Tank /dev/sda /dev/sdb /dev/sdc /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
zfs set compression=on Tank
vi /etc/modprobe.d/zfs.conf
zpool status
reboot
1.) lsblk lists the drives you have connected. Really you are just using it to determine which drive is your OS drive so you don't include it in your ZFS pool (or else you will get an error). It's pretty obvious which one is your OS drive because it will have several partitions; the dead giveaway is the drive with a mountpoint of /.
This is what mine looks like:
Code:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 119.2G 0 disk
├─sda1 8:1 0 119.2G 0 part
└─sda9 8:9 0 8M 0 part
sdb 8:16 0 119.2G 0 disk
├─sdb1 8:17 0 119.2G 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 119.2G 0 disk
├─sdc1 8:33 0 119.2G 0 part
└─sdc9 8:41 0 8M 0 part
sdd 8:48 0 119.2G 0 disk
├─sdd1 8:49 0 1007K 0 part
├─sdd2 8:50 0 127M 0 part
└─sdd3 8:51 0 119.1G 0 part
├─pve-root 251:0 0 29.8G 0 lvm /
├─pve-swap 251:1 0 8G 0 lvm [SWAP]
├─pve-data_tmeta 251:2 0 68M 0 lvm
│ └─pve-data 251:4 0 66.5G 0 lvm
└─pve-data_tdata 251:3 0 66.5G 0 lvm
└─pve-data 251:4 0 66.5G 0 lvm
sde 8:64 0 119.2G 0 disk
├─sde1 8:65 0 119.2G 0 part
└─sde9 8:73 0 8M 0 part
sdf 8:80 0 119.2G 0 disk
├─sdf1 8:81 0 119.2G 0 part
└─sdf9 8:89 0 8M 0 part
sdg 8:96 0 119.2G 0 disk
├─sdg1 8:97 0 119.2G 0 part
└─sdg9 8:105 0 8M 0 part
sdh 8:112 0 119.2G 0 disk
├─sdh1 8:113 0 119.2G 0 part
└─sdh9 8:121 0 8M 0 part
sdi 8:128 0 119.2G 0 disk
├─sdi1 8:129 0 119.2G 0 part
└─sdi9 8:137 0 8M 0 part
zd0 230:0 0 40G 0 disk
├─zd0p1 230:1 0 500M 0 part
└─zd0p2 230:2 0 39.5G 0 part
zd16 230:16 0 150G 0 disk
├─zd16p1 230:17 0 450M 0 part
├─zd16p2 230:18 0 100M 0 part
├─zd16p3 230:19 0 16M 0 part
└─zd16p4 230:20 0 149.5G 0 part
2.) The zpool create command is pretty simple: zpool create [name of your zpool] [type of zpool; if omitted, it assumes a stripe, i.e. RAID 0] [drive1] [drive2] [drive3] [drive...]
Note that I did not include /dev/sdd because, for whatever reason, SATA1 = /dev/sdd (this is why we run lsblk above).
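Because those sdX letters can shuffle around between boots, an arguably safer variant (a sketch, not what I did above) is to build the pool from the stable names under /dev/disk/by-id instead; the drive IDs below are placeholders for whatever yours are actually called:

```shell
# stable device names survive reboots and port reshuffles
ls -l /dev/disk/by-id/ | grep -v part
zpool create Tank \
    /dev/disk/by-id/ata-Samsung_SSD_XXXX_SERIAL1 \
    /dev/disk/by-id/ata-Samsung_SSD_XXXX_SERIAL2
# ...and so on for the remaining drives
```

ZFS records whichever names you used, so with by-id names the pool won't care if /dev/sdd decides to become /dev/sda next boot.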
3.) Enable ZFS Compression (all signs point to this being a good thing... do your own research...)
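To confirm the setting took (and, later, to see how much space compression is actually saving you), something like:

```shell
zfs get compression,compressratio Tank
```
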
4.) Add the following to /etc/modprobe.d/zfs.conf (this limits ZFS ARC RAM usage to between 4 and 8GB; otherwise ZFS will eat ALL your RAM in short order):
Code:
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=8589934592
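One gotcha worth knowing: since the zfs module can be loaded from the initramfs, the modprobe.d change may not take effect until you rebuild it. A sketch of applying the change and then verifying it after the reboot:

```shell
update-initramfs -u    # bake the new zfs.conf into the initramfs
reboot
# after the reboot, confirm the limits stuck (values are in bytes):
cat /sys/module/zfs/parameters/zfs_arc_min
cat /sys/module/zfs/parameters/zfs_arc_max
```
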
If vi is not something you are familiar with check out this guide here: http://heather.cs.ucdavis.edu/~matloff/UnixAndC/Editors/ViIntro.html
(We are going to use vi A LOT)
5.) zpool status just lets you know the status of your zpool... you will either see your drives or that you don't have a zpool...
6.) I'm a fan of reboots...