I am very new to Proxmox, so please bear with me.
I have been running a small home network for the past 12 years, first with Windows Home Server, then starting 3 years ago with FreeNAS/TrueNAS. My first FreeNAS system ran on old consumer-grade hardware (a Core 2 Duo from circa 2009). Early last year I decided to build a true server to support FreeNAS, so I purchased the following:
Asus P11C-M/4L motherboard
Corsair RM650x 80 Plus Gold power supply
Intel Xeon E-2236 3.4GHz CPU (6 cores, 12 threads)
64 GB DDR4 @ 2666 ECC memory
4 x 4TB Seagate IronWolf drives
256GB Adata XPG SX6000 M.2 PCIe NVMe (boot drive)
plus a server case to hold all of the above
Installed FreeNAS, set it all up to support my shares, Plex, and a couple of bhyve virtual machines, and everything ran fine.
However, I noticed the system was not being utilized very well; it sits idle 90 percent of the time. Plus, I've been running a PiHole on a Raspberry Pi, and it has a tendency to fail at the most inopportune times (I tried to virtualize the PiHole, but it didn't work very well in a FreeNAS jail). So I started looking at both Proxmox and XCP-ng to virtualize my whole server. After testing both, I decided Proxmox was best for what I wanted to do. Now I have some questions I cannot seem to find the answers for.
In the above hardware configuration, I replaced the Adata NVMe with a WD 500GB NVMe, added an additional 1TB NVMe in the second M.2 slot, and added 4 new Seagate IronWolf 4TB drives, keeping the originals as a backup and, later on, as spare drives.
The IronWolf drives are in a ZFS RAID-Z2 pool.
I installed Proxmox onto the WD 500GB NVMe. I then created my first two VMs: first a very small (32GB) Debian 11.1 Linux VM into which I installed my PiHole software, then another small (32GB) VM into which I installed TrueNAS 12.0-U5.1.
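(For anyone following along: I mostly used the GUI, but the CLI equivalent of creating one of these small VMs would look roughly like this. The VM ID, storage name, and ISO filename below are placeholders, not what I actually used.)

```shell
# Hypothetical sketch: create a small Debian VM from the Proxmox CLI.
# VM ID 101, "local-lvm" storage, and the ISO name are assumptions.
qm create 101 --name pihole --memory 1024 --cores 1 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:32 \
  --cdrom local:iso/debian-11.1.0-amd64-netinst.iso \
  --boot order='scsi0;ide2'
```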
Using the Proxmox documentation, I set up the 4 IronWolf drives to be passed through directly to the TrueNAS VM. Then from within TrueNAS I imported the ZFS pool, did some small reconfiguration of the TrueNAS system, and was up and running with only a couple of small concerns. Both concerns had to do with how the ZFS disks were presented to TrueNAS: I could neither see the disks listed from within TrueNAS nor see the disk temperature information. I thought nothing of this, given the way most virtualization software seems to handle disk passthrough.
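(For reference, the passthrough step from the documentation boils down to attaching each drive to the VM by its stable /dev/disk/by-id path. The VM ID and disk ID here are placeholders; the real IDs come from the ls output.)

```shell
# Hypothetical sketch: list the stable disk IDs, then pass one
# physical disk through to VM 100 as its second SCSI device.
ls -l /dev/disk/by-id/
qm set 100 -scsi1 /dev/disk/by-id/ata-ST4000VN008-2DR166_ZDHXXXXX
```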
Later I read something about attaching the disk serial numbers to make it easier to identify a disk that might need replacing. So I manually edited the /etc/pve/qemu-server/vmid.conf file to add ",serial=XXXXXXXX" to the end of the "/dev/disk/by-id" lines, then rebooted the VM. Now, after the reboot, from within the TrueNAS GUI under Storage > Disks, I can see each of my disks (da1, da2, etc.) instead of those fields previously being blank! I thought this strange, but then wondered if some other magic option could be added to this line to also pass through the disk temperatures?
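(After the edit, the relevant line in the VM's config file looks roughly like this; the disk ID and serial below are placeholders for my real ones.)

```
scsi1: /dev/disk/by-id/ata-ST4000VN008-2DR166_ZDHXXXXX,serial=XXXXXXXX
```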
Next I come to the part I understand least. My motherboard has 4 Intel 1Gb NICs, and my network contains 2 unmanaged switches. What I would like to do is use different NICs for different VMs. Does this make sense? For example, one NIC for the management port, one NIC for my PiHole (DHCP and DNS server), and the other two NICs for the rest of the VMs. Using the documentation, I tried creating a bond of two NICs for failover, but that didn't work well (slower speed). I cannot create a LAGG, as my switches support neither link aggregation nor VLANs. So what are my best options to more fully utilize my four network cards?
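(The layout I had in mind is one Linux bridge per NIC, then pointing each VM's net0 at whichever bridge I want it on. A sketch of what I think /etc/network/interfaces would look like follows; the interface names, addresses, and gateway are placeholders, as the real NIC names come from ip link.)

```
auto lo
iface lo inet loopback

# Management bridge on the first NIC (address/gateway are placeholders)
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# Second bridge, dedicated to the PiHole VM; no host IP needed
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0
```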
Now, storage. I've got both the 500GB and 1TB NVMe drives, and Proxmox sees both but does not use the 1TB. I would like to add the 1TB drive and make it the default storage for my VMs. I would also like to move the VMs I've already created over to this larger drive. I have read the Storage documentation, but I cannot make heads or tails of it. Any help in this area would be really appreciated.
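(My best guess from the docs so far is something like the following: put a ZFS pool on the 1TB drive, register it as a storage, then move each VM disk over. The pool name, VM ID, and the by-id device path are placeholders, and the zpool create wipes the drive, so I'd love confirmation before I actually run anything like this.)

```shell
# Hypothetical sketch only -- names and device path are placeholders.
# Create a ZFS pool on the 1TB NVMe (DESTROYS existing data on it!):
zpool create nvme1tb /dev/disk/by-id/nvme-WDS100T...

# Register it with Proxmox as storage for VM disks and containers:
pvesm add zfspool nvme1tb --pool nvme1tb --content images,rootdir

# Move an existing VM's disk onto the new storage, removing the old copy:
qm move_disk 100 scsi0 nvme1tb --delete
```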
As this project goes on, I'm sure I will have further questions as well. While I'm not as up on Linux as I would like to be, I'm not a dummy when it comes to computers. I spent almost 40 years as a software engineer, first with big iron (IBM and Honeywell), then with Digital Equipment PDP-11s and VAXen, and in the years before my retirement I managed my company's email system (Microsoft Exchange Server), firewalls (Juniper and Check Point), and UNIX systems (Digital Ultrix and IBM AIX). I've been retired for 16 years but try to keep myself current with what's going on in IT.
Thanks for any help offered.
Greg ...