NOOB with setup questions

GTruck

Member
Dec 6, 2021
I am very new to Proxmox, so please bear with me.

I have been running a small home network for the past 12 years, first with Windows Home Server, then starting 3 years ago with Free/TrueNAS. My first FreeNAS system ran on old consumer grade hardware (Core 2 Duo circa 2009). Early last year I decided to build a true server to support FreeNAS, so purchased the following:

Asus P11C-M/4L motherboard
Corsair RM650x 80 Plus Gold power supply
Intel Xeon E-2236 3.4GHz CPU (6 cores, 12 threads)
64 GB DDR4 @ 2666 ECC memory
4 x 4TB Seagate IronWolf drives
256GB Adata XPG SX 6000 M.2 PCIe NVMe (boot drive)
plus a server case to hold all of the above

Installed FreeNAS, set it all up to support my shares, Plex and a couple of bhyve virtual machines, and everything runs fine.

However, I noticed the system was not getting utilized very well. It sits idle for 90 percent of the time. Plus I've been running a PiHole on a Raspberry Pi and it has a tendency to fail at the most inopportune times (I tried to virtualize the PiHole, but it didn't work very well in a FreeNAS jail). So I started looking at both Proxmox and XCP-NG to be able to virtualize my whole server. After testing both Proxmox and XCP-NG, I decided that Proxmox was best for what I wanted to do. Now I have some questions I cannot seem to find the answers for.

In the above hardware configuration, I replaced the Adata NVMe with a WD 500GB NVMe, added an additional 1TB NVMe in the second M.2 slot, and added 4 new Seagate IronWolf 4TB drives, keeping the originals as a backup and later on as spare drives.

The IronWolf drives are in a ZFS RaidZ2 pool.

I installed Proxmox onto the WD 500GB NVMe. I then created my first two VMs: first a very small (32GB) Debian 11.1 Linux VM into which I installed my PiHole software, then another small (32GB) VM into which I installed TrueNAS 12.0-U5.1.

Using the Proxmox documentation, I set up the 4 IronWolf drives to be passed through directly to the TrueNAS VM. Then from within TrueNAS I imported the ZFS pool, did some small reconfiguration of the TrueNAS system and was up and running with only a couple of small concerns. Both concerns had to do with how the ZFS disks were presented to TrueNAS: I could neither see the disks from within TrueNAS nor see the disk temperature information. I thought nothing of this due to the way most virtualization software seems to handle disk passthrough.
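For what it's worth, the passthrough was done with a qm set command per disk, roughly like the one below (the VM ID and the disk ID here are just placeholders for my real values):

  qm set 100 -scsi1 /dev/disk/by-id/ata-ST4000VN008-2DR166_XXXXXXXX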

Later I read something about attaching the disk serial numbers to be able to more easily identify a disk that might need replacing. So I manually edited the /etc/pve/qemu-server/vmid.conf file to add ",serial=XXXXXXXX" to the end of the "/dev/disk/by-id" lines, then rebooted the VM. After the reboot, from within the TrueNAS GUI, under Storage then Disks, I can now see each of my disks (DA1, DA2, etc.) instead of them previously being blank! I thought this was strange, but then wondered if some other magic option could be added to this line to also pass through the disk temperatures?
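After the edit, each passed-through disk line in the VM config file looks roughly like this (disk ID and serial are placeholders again):

  scsi1: /dev/disk/by-id/ata-ST4000VN008-2DR166_XXXXXXXX,serial=XXXXXXXX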

Next, I came to the part where I had the least understanding. My motherboard contains 4 Intel 1Gb NICs. My network contains 2 unmanaged switches. What I would like to do is to be able to use different NICs for different VMs. Does this make sense? Like one NIC for the management port, one NIC for my PiHole (DHCP and DNS server) and the other two NICs for the rest of the VMs. Using the documentation, I tried creating a bond of two NICs for failover, but that didn't work well (slower speed). I cannot create a LAGG as my switches do not support it, nor do they support VLANs. So what are my best options to more fully utilize my four network cards?

Now storage. I've got both the 500GB and 1TB NVMe cards and Proxmox sees both, but does not use the 1TB. I would like to add the 1TB card and make it the default for my VMs. Also, I would like to move the VMs I've already created over to this larger drive. I have read the documentation regarding Storage, but I cannot make head nor tail of it. Any help in this area would be really appreciated.

As this project goes on, I'm sure I will have further questions as well. While I'm not as up on Linux as I would like to be, I'm not a dummy when it comes to computers. I spent almost 40 years as a software engineer, first with big iron (IBM and Honeywell), then with Digital Equipment PDP-11s and VAXen, and in the years before my retirement I managed my company's email system (Microsoft Exchange Server), firewalls (Juniper and Check Point) and UNIX systems (Digital Ultrix and IBM AIX). I've been retired for 16 years, but try to keep myself current with what's going on in IT.

Thanks for any help offered.

Greg ...
 
I guess you used "qm set" to pass through the individual disks. In that case it is not a physical passthrough and your disks are still virtualized, so your TrueNAS can only see and work with virtual disks. If you want your TrueNAS to be able to directly access the real physical drives, you need to buy a PCIe HBA card and use PCI passthrough to pass through the complete HBA with all disks attached to it. Then SMART will work too, so you can see the temperatures, and you will get no additional virtualization overhead because there will be no virtualization at all.
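As a rough sketch, PCI passthrough of an HBA means enabling the IOMMU and then handing the whole PCI device to the VM. The PCI address 01:00.0 and VM ID 100 below are just example values, and this assumes an Intel CPU booting with GRUB; see the PVE docs on PCI(e) passthrough for the full procedure, including the vfio modules in /etc/modules:

  # in /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on", then:
  update-grub
  # reboot, then find the HBA's PCI address:
  lspci | grep -i -e sas -e raid
  # attach the whole HBA (and every disk connected to it) to the VM:
  qm set 100 -hostpci0 01:00.0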
You can give a VM its own NIC, but that only makes sense if that VM needs the bandwidth. Your PiHole will never need more than 1% of your bandwidth, so it would be a waste of hardware to give it its own NIC. But for a TrueNAS VM that could make sense, because using network shares can easily saturate a complete Gbit NIC, so giving it its own NIC would prevent TrueNAS from slowing down other VMs' transfers.
Also keep in mind that you don't want a host/guest to have 2 IPs in the same subnet. So if you want more than 1Gbit of bandwidth to a single host/guest, you can only achieve this with a bond (for example round-robin if your switch doesn't support LACP).
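As a sketch of one NIC per purpose, /etc/network/interfaces could look something like this (interface names, addresses and which VM uses which bridge are just examples for your setup):

  iface eno1 inet manual
  iface eno2 inet manual

  auto vmbr0
  iface vmbr0 inet static
          address 192.168.1.10/24
          gateway 192.168.1.1
          bridge-ports eno1
          bridge-stp off
          bridge-fd 0

  auto vmbr1
  iface vmbr1 inet manual
          bridge-ports eno2
          bridge-stp off
          bridge-fd 0

Management (and most VMs) stay on vmbr0, and the TrueNAS VM's virtual NIC gets attached to vmbr1, so its share traffic uses its own physical port.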
Create a new LVM-thin with that 1TB SSD (GUI: YourNode -> Disks -> LVM-Thin -> Create). I'm not sure if this will automatically add the LVM-Thin as a storage; if not, go to Datacenter -> Storage -> Add -> LVM-Thin and add it.
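The CLI equivalent would be roughly this (assuming the 1TB drive shows up as /dev/nvme1n1; the VG, thin pool and storage names are made up):

  pvcreate /dev/nvme1n1
  vgcreate vmdata /dev/nvme1n1
  lvcreate -l 95%FREE --thinpool data vmdata
  pvesm add lvmthin nvme1tb --vgname vmdata --thinpool data --content images,rootdir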

You should then see both SSDs as VM storage and you can move the VMs between them.
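Moving a disk can be done from the GUI (select the VM -> Hardware -> select the disk -> Move disk) or with something like this (VM ID, disk name and target storage are examples):

  qm move_disk 100 scsi0 nvme1tb --delete 1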
But you should check how your SSDs are connected. An M.2 slot is either connected directly to the CPU (which gives full speed) or only to the mainboard's chipset (there you probably won't get the full bandwidth, because the chipset shares its PCIe lanes with all the SATA ports, NICs, USB and so on).
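One quick way to check that is lspci in tree mode, which shows whether each NVMe controller sits directly under the root complex (CPU lanes) or behind the chipset:

  lspci -tv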
 
Dunuin, thanks for the reply.

Yes, I used the qm set command initially to add the drives to the VM.

I used the instructions in this link: https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)

It was my understanding that TrueNAS, or more importantly ZFS, needs full access to the hard drives, otherwise problems may occur. So reading that link certainly gave me the impression that doing what it said did pass through the disks. After first getting TrueNAS set up, I did a ZFS pool scrub, which took about 4 hours and did not show any problems. In comparison, a ZFS pool scrub when TrueNAS was running bare metal took almost exactly the same amount of time to run.

So of course this begs the question: should I run TrueNAS as a VM under Proxmox or just go back to bare metal TrueNAS?

As for the networking question: when I first installed Proxmox and then added the TrueNAS VM, on my single gigabit network I was getting pretty close to gigabit transfers, but after a few days the transfer rate seemed to drop to around 75 to 80 megabits. Not sure why. That's when I tried separating the network by using different NICs for different purposes. I even tried bonding using both active-backup and balance-tlb from the Networking part of the Host System Administration guide, but didn't see any difference.

I tried creating an LVM-thin and could see the disk after creating it, but I could not find any instructions on how to move VMs from local-lvm to my newly created LVM-thin. Then when I tried to remove the LVM-thin, it never seemed to go away.

The whole storage and network things have me totally confused, so I think I'll give up and go back to TrueNAS on bare metal.

Thanks again for your help.

Greg .
 
Jep, ZFS should have direct access to the disks without any abstraction layer in between. It will run with virtualized disks like you have now, but it would be better to get an HBA card (for example a 35€ Dell PERC H310/H710 flashed to IT mode) and use PCI passthrough, like many people here do when virtualizing TrueNAS. Only that way can the ZFS inside TrueNAS get direct, physical access to the disks.
It isn't that hard, but you need to be willing to learn new stuff.
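In case the leftover LVM-thin is part of the frustration: it has to be removed in two places, first the storage definition and then the LVM volumes themselves. Something like this, with the names matching the made-up ones from the earlier sketch:

  pvesm remove nvme1tb      # removes the storage entry (Datacenter -> Storage)
  lvremove vmdata/data      # destroys the thin pool
  vgremove vmdata
  pvremove /dev/nvme1n1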
 
