Intel NUC 10th Gen i7 + 2TB NVMe initial setup suggestions

Hi,
I’ve just ordered an Intel NUC 10th Gen i7 (6 cores) with 32GB of RAM and a 2TB NVMe disk to build my first Proxmox VE box. I’m also going to install Proxmox Backup Server as an additional package, since I can’t use dedicated hardware for it.
I’m thinking about adding an additional internal 1TB SSD on the spare SATA port in the next few weeks, for backups.

I’d like to have some suggestions to start in the best way possible with the setup.
Will the default installation done via bootable USB stick be fine if I’d like to use the backup server to back up VMs (via quick incremental backups if possible, not full ones) to an external NFS share at first, and to the internal SATA disk as soon as I buy it (for better performance)?
Is the file system chosen by the default install the correct one to use, or should I change something during installation? Would you suggest reducing the size of the local-lvm volume if the default installation uses the full 2TB drive, and increasing it later if needed?
Is it easy to add the additional drive and configure it as a backup target? Should I configure it from PVE, from the backup server, or from the CLI?

Unless suggested otherwise, I’m going to install the latest stable version available (Proxmox VE 7.2).

Thanks in advance for any hint or link. I hope I’ll be fine with my first Proxmox@home ;)
 
Will the default installation done via bootable USB stick be fine if I’d like to use the backup server to back up VMs (via quick incremental backups if possible, not full ones) to an external NFS share at first, and to the internal SATA disk as soon as I buy it (for better performance)?
Yes, this is fine. You do have to add the PBS repository after installation to install PBS, though, as shown here: https://pbs.proxmox.com/docs/installation.html
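For example, on a PVE 7.x host (Debian 11 "Bullseye") the no-subscription variant would be roughly:

Code:
# Add the PBS no-subscription repository (check the linked docs for the current lines)
echo "deb http://download.proxmox.com/debian/pbs bullseye pbs-no-subscription" > /etc/apt/sources.list.d/pbs.list
apt update
apt install proxmox-backup-server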
Is the file system chosen by default install the correct one to use or should I change something during the installation? Would you suggest me to reduce the size of the local-lvm volume if the default installation uses the full 2TB drive and increase its size later if needed?
Without knowing anything about your use cases, it is hard to give any recommendations here. What kind of data/VMs/containers are you planning to store/run on your node?

Is it easy to add the additional drive and configure it as a backup target? Should I configure it from VE, from backup server or from Cli?
This can easily be done from the PBS UI.
 
Hi Wagner, thanks for your reply. I definitely understand that it is difficult to give recommendations without the use cases. At the beginning I will not have big VMs: I’m thinking about 8-10 Ubuntu Server VMs to build a Kubernetes test cluster (each with about 20-30GB of storage). I will also have an Ubuntu or Fedora client VM (30-50GB) with a GUI, and a Windows VM (60GB); the latter wouldn’t be an always-on VM, only powered up when necessary.
 
I guess then I'd stick with the default LVM setup, but that's just a matter of preference.
 
I was not considering ZFS as an option, since I always read that it is RAM-hungry and I’ll only have 32GB of RAM.

BTW, will I be able, with the standard LVM setup, to do automatic incremental backups to the secondary SSD, or will they always be full backups? Should I configure the secondary SSD with ext4 and mount it as a standard directory, as in the two attached screenshots?
 

Attachments: B6762259-AD35-40E8-9889-2E7655FCA5D3.jpeg, 24B42BE3-C831-4A85-AE3B-61A77F1ABF13.png
I was not considering ZFS as an option, since I always read that it is RAM-hungry and I’ll only have 32GB of RAM.

With 32GB of RAM, ZFS should not be an issue. We recommend 1GB of RAM for every 1TB of storage: https://pve.proxmox.com/wiki/System_Requirements

BTW, will I be able, with the standard LVM setup, to do automatic incremental backups to the secondary SSD, or will they always be full backups? Should I configure the secondary SSD with ext4 and mount it as a standard directory, as in the two attached screenshots?
You'll have to format and mount the secondary SSD in some way, either from the web UI or from the CLI. If you intend to use PBS, you have to create a new datastore in the directory where you mounted the SSD. In PVE, you can then add the PBS instance as a storage for backups.
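As a sketch, assuming the SSD is already mounted at /mnt/backup-ssd and PBS runs on the same host (names, address and credentials are placeholders):

Code:
# On PBS: create a datastore on the mounted SSD
proxmox-backup-manager datastore create backup-ssd /mnt/backup-ssd
# On PVE: add that PBS instance as a backup storage
pvesm add pbs pbs-local --server 127.0.0.1 --datastore backup-ssd \
    --username root@pam --password '<password>' --fingerprint <PBS certificate fingerprint>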

Regarding backups in PVE in general: if your backup target is anything other than an instance of PBS, you'll always have full backups. If the target is PBS, the backup is incremental and deduplicated.
 
Thank you very much! I suppose I'll be able to configure the secondary SSD both via the PVE GUI and the PBS GUI, which will be on the same server.
 
Hi Wagner, I’m going through the guide and I’ve just read the small chapter about ZFS, which explains that it will use at most half of the RAM but release it to VMs if needed, and that a minimum of 2GB plus 1GB per TB of storage should be considered as needed by ZFS. So, if I enable it (it seems such a good piece of technology that I’d like to, and maybe one day I will use some of its features), how much RAM can I consider available for VMs, given the PVE+PBS+ZFS memory requirements? About 27GB (32-5)?

I’ve also read the section about swap and possible issues with ZFS. Is that something I should take care of during the first install, reserving some GBs (the same amount as the RAM, 32?) to have swap outside a zvol? Or is it something the installation will take care of automagically?

Would you suggest, with my hardware, enabling compression on ZFS?

Final question: I’ve understood that with ZFS only raw images are possible. What are the drawbacks of raw images compared to qcow2 ones? Do raw images use all the allocated space from the start, or are they thin-provisioned as the VM writes to the disk, thanks to ZFS features?

Please excuse me if some questions seem trivial; I’m trying to understand as much as I can before my NUC arrives, since I can’t wait to have Proxmox up and running on it to carry on with my CKA studies. BTW, I’m also going through the official guide, but some noob doubts survive.
 
Hi Wagner, I’m going through the guide and I’ve just read the small chapter about ZFS, which explains that it will use at most half of the RAM but release it to VMs if needed, and that a minimum of 2GB plus 1GB per TB of storage should be considered as needed by ZFS. So, if I enable it (it seems such a good piece of technology that I’d like to, and maybe one day I will use some of its features), how much RAM can I consider available for VMs, given the PVE+PBS+ZFS memory requirements? About 27GB (32-5)?
A bigger problem is SSD wear, and performance in the case of QLC SSDs. ZFS has great features and data integrity, but these come at the cost of a lot of overhead. So don't be surprised if your SSD wears three times faster than it would with LVM-Thin. Generally, it's recommended to use enterprise SSDs with power-loss protection for ZFS. It might work with consumer SSDs too, but whether that SSD survives only some months or some years really depends on your workload. With just a single consumer SSD, I would stick with LVM/LVM-Thin.

And you might not get 27GB (32-5) for your guests. PVE+PBS needs 2-3GB, and ZFS's ARC needs some GBs (2GB should be OK; the more RAM you give it, the faster the pool will be). Also keep in mind that you want some RAM unused. If RAM usage exceeds 80%, KSM will kick in (deduplicating RAM and making the system slower) and ballooning will kick in (stealing RAM from guests). PVE will also use unused RAM for the Linux page cache to speed up disk reads. And in the case of ZFS you get no swap partition, so if you run out of RAM the OOM killer will just kill processes to free some up (usually killing some guests). So I would plan with an additional 10-20% of RAM not used by guests.
Then there is disk caching for the VMs. If you choose, for example, "writeback" as the cache mode for a VM, that VM will use the host's RAM for caching, on top of the RAM you assigned to it. The VMs are run by KVM, and the KVM process itself needs RAM too, so there is some additional overhead: if you assign 8GB of RAM to a VM, the process might actually use something more like 9GB.
So it's not that easy to dimension RAM for your guests...
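If you want a hard upper bound on the ARC, one option (described in the PVE ZFS docs) is to set the zfs_arc_max module parameter; the 2 GiB value here is just an example:

Code:
# Cap the ZFS ARC at 2 GiB (value in bytes; pick what fits your setup)
echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
# With root on ZFS, rebuild the initramfs so the limit applies from boot
update-initramfs -u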
I’ve also read the section about swap and possible issues with ZFS. Is that something I should take care of during the first install, reserving some GBs (the same amount as the RAM, 32?) to have swap outside a zvol? Or is it something the installation will take care of automagically?
When using ZFS, PVE won't create a swap partition. You need to edit "hdsize" in the "Advanced Options" of the installer and lower it by some GBs. That way the installer will leave some GBs unallocated, and you can manually create a swap partition later using the CLI.
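As a rough sketch of that manual step (device, partition number and size are examples; adapt them to your layout):

Code:
# Create an 8GB swap partition in the unallocated space (type 8200 = Linux swap)
sgdisk -n4:0:+8G -t4:8200 /dev/nvme0n1
mkswap /dev/nvme0n1p4
swapon /dev/nvme0n1p4
# Persist it across reboots (using the UUID from the mkswap output is safer)
echo "/dev/nvme0n1p4 none swap sw 0 0" >> /etc/fstab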
Would you suggest, with my hardware, enabling compression on ZFS?
LZ4 compression is enabled by default, and when using SATA/SAS it's always a good idea to keep it enabled. You are usually limited by disk performance, not by CPU performance (except if you've got a really fast NVMe drive or a crappy Atom CPU). So LZ4 compression will make the storage faster, as less data has to be read from and written to the disk.
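You can check or set it per pool; "rpool" is the installer's default pool name, so adjust if yours differs:

Code:
zfs get compression rpool      # shows the currently active algorithm
zfs set compression=lz4 rpool  # (re-)enable LZ4, inherited by child datasets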
Final question: I’ve understood that with ZFS only raw images are possible. What are the drawbacks of raw images compared to qcow2 ones?
One downside is that rolling back a snapshot with ZFS+raw will wipe all data created/edited after that snapshot was taken. That's no problem with qcow2, but qcow2 on top of ZFS creates even more overhead.
Do raw images use all the allocated space from the start, or are they thin-provisioned as the VM writes to the disk, thanks to ZFS features?
It depends. When you enable the "thin provisioning" checkbox, it will be thin-provisioned, so it only uses the space that's actually consumed by data.
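For reference, that checkbox corresponds (as far as I know) to the sparse flag of the ZFS storage definition; in /etc/pve/storage.cfg it looks roughly like this:

Code:
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1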
 
The SSD should be a TLC one with a max 3000MB/s read/write speed on the Gen3 NVMe port of the NUC. BTW, from what you said I understand that for my use cases I should stick with LVM-Thin (as Wagner also suggested); ZFS has great features but could bring me more problems (fewer resources for VMs) than benefits.
Thank you for the detailed answer!
 
Yes, this is fine. You do have to add the PBS repository after installation to install PBS, though, as shown here: https://pbs.proxmox.com/docs/installation.html

In case the NVMe drive breaks and I have to reinstall PVE and PBS, I should then have no problems recovering my VMs from the secondary SSD; they will be recognized as backups by the new PBS installation, right? Does PBS save only the disk of the VM, or also the VM configuration data, thus allowing me to recover all the VMs without reconfiguring them before restoring their data? Or do I need to back up something else too, in order to be able to recreate from backups all the VMs that were on the failed drive?

I think I’ll go through the PBS user manual too as soon as I can ;)
 
In case the NVMe drive breaks and I have to reinstall PVE and PBS, I should then have no problems recovering my VMs from the secondary SSD; they will be recognized as backups by the new PBS installation, right?
Yes, but there is no "import datastore" feature implemented yet. You will have to manually edit /etc/proxmox-backup/datastore.cfg.
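A datastore entry in that file looks roughly like this (name and path are examples):

Code:
datastore: backup-ssd
        path /mnt/backup-ssd
        comment re-added after reinstall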
Does PBS save only the disk of the VM, or also the VM configuration data, thus allowing me to recover all the VMs without reconfiguring them before restoring their data? Or do I need to back up something else too, in order to be able to recreate from backups all the VMs that were on the failed drive?
Disks and the VM config. But keep in mind to also back up your PVE host's configs. A big part of the VM firewall settings is part of the host config, not the VM config: things like aliases, IP sets, and security groups.
I think I’ll go through the PBS user manual too as soon as I can ;)
Instead of installing PBS bare metal, you can also install it in a VM or LXC. The benefit would be that you could use vzdump to back up that PBS VM/LXC to a NAS or an external disk. In case you need to reinstall PVE, you could then first restore the PBS VM/LXC from the vzdump archive file, and with PBS running again, restore all other guests from PBS.
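As a sketch (the VMID and storage name are examples), that vzdump backup could be run like this:

Code:
# Back up the PBS guest to an NFS-backed storage defined in PVE
vzdump 100 --storage nas-nfs --mode snapshot --compress zstd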
 
This is quite interesting. I was thinking about installing PBS as an additional package on PVE (I can’t have an additional server) for performance reasons, but maybe for my use case the PBS-in-a-VM solution could be better, for the reasons you give: I could use the secondary SATA SSD to back up VMs via PBS by assigning it as a raw disk to the PBS VM, and then mount an NFS share on my NAS to back up the PBS VM itself. So, in case of disaster, after reconfiguring PVE I could simply restore the PBS VM from NFS, if I’ve understood correctly.

It’s hard for me to understand whether it’s better to have PBS installed as an additional package on PVE and “simply” restore its configuration files (if I understand which they are) from an offsite backup, to waste fewer resources and have better backup performance. That way seems better for my setup.

Is there some page where I can understand what should be backed up to restore the PVE or PBS config? Is there a sort of config backup/export functionality in those two, so that I can schedule their backup with a cron job and, in case of disaster, simply install from scratch and restore the config from the exported files?

Additional doubt: I’ve watched some videos about snapshots and backups, and I’ve seen that to restore a backup they select the VM, go to its backup tab and do a restore. In case of disaster, where you don’t have the VMs defined on PVE, are you still able to select their backups from another section in the PVE GUI and restore them, thus re-creating the VMs from PBS?
 
This is quite interesting. I was thinking about installing PBS as an additional package on PVE (I can’t have an additional server) for performance reasons, but maybe for my use case the PBS-in-a-VM solution could be better, for the reasons you give: I could use the secondary SATA SSD to back up VMs via PBS by assigning it as a raw disk to the PBS VM, and then mount an NFS share on my NAS to back up the PBS VM itself. So, in case of disaster, after reconfiguring PVE I could simply restore the PBS VM from NFS, if I’ve understood correctly.
Jup. And it also might not be a bad idea to back up your PVE host's /etc to your NAS, for example with a daily rsync cron job to an NFS/SMB share. So in case you lose your PVE OS, you at least have recent backups of your configs. Doing PVE backups to your NAS with Clonezilla is also an option, but that has to be done at block level while PVE isn't running, and you would have to back up whole partitions. That's not great with the default PVE partitioning, because the host's root filesystem shares the same partition with your guest virtual disks: backing up the guest disks with both Clonezilla and PBS would be wasted space.
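For example, something like this in /etc/cron.d (the schedule and the NAS mountpoint are placeholders):

Code:
# Sync the host's /etc to the NAS share every night at 03:00
0 3 * * * root rsync -a --delete /etc/ /mnt/nas/pve-etc-backup/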
It’s hard for me to understand whether it’s better to have PBS installed as an additional package on PVE and “simply” restore its configuration files (if I understand which they are) from an offsite backup, to waste fewer resources and have better backup performance. That way seems better for my setup.
LXCs share the kernel with the host. They are not virtualized, and when running PBS inside an LXC it's basically also running on the host, just with an additional isolation layer. So a PBS LXC shouldn't waste resources.
Is there some page where I can understand what should be backed up to restore the PVE or PBS config? Is there a sort of config backup/export functionality in those two, so that I can schedule their backup with a cron job and, in case of disaster, simply install from scratch and restore the config from the exported files?
There is no import/export for configs yet. The best you can do is back up the whole /etc folder on PVE and PBS; /etc is where Linux stores configs.
Most (but not all) of the Proxmox configs are stored in /etc/pve and /etc/proxmox-backup.
Additional doubt: I’ve watched some videos about snapshots and backups, and I’ve seen that to restore a backup they select the VM, go to its backup tab and do a restore. In case of disaster, where you don’t have the VMs defined on PVE, are you still able to select their backups from another section in the PVE GUI and restore them, thus re-creating the VMs from PBS?
A lot of people don't get this: they create a new blank VM with the same VMID, select the blank VM's backup tab, and restore the VM from backup, overwriting the blank VM.
None of that is needed. You just need to select your PBS/vzdump storage in PVE; it also has a "Backups" tab which lists all backups of all guests. In case of a fresh PVE installation you will of course have to add this PBS/vzdump storage again (which can be done at Datacenter -> Storage -> Add -> "Proxmox Backup Server", "CIFS" or "NFS").
 
In the case of a PBS-on-LXC setup, should I simply create a Debian LXC and install PBS inside it, or is there a ready-made PBS LXC container available? And should I then mount the secondary SATA SSD into the LXC container (storage-backed, bind or device mount?) and use it as the backup target? Are LXC containers like the Debian one meant to be upgraded like standard VMs, or should I set up a new LXC container when a new Debian version comes out, thus needing to reconfigure everything from scratch? I’m used to standard Docker containers, where config and other data are persisted in external volumes mounted into the container, and upgrading the OS is done by re-creating the container from an upgraded image; I’ve never used LXC containers.
 
In the case of a PBS-on-LXC setup, should I simply create a Debian LXC and install PBS inside it, or is there a ready-made PBS LXC container available?
Jup, a Debian LXC, and then follow this: https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-server-on-debian
And should I then mount the secondary SATA SSD into the LXC container (storage-backed, bind or device mount?) and use it as the backup target?
Jup. You can mount that SSD on the host and then bind-mount its mountpoint into the LXC.
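Assuming the SSD is mounted at /mnt/backup-ssd on the host and the container has ID 101 (both made-up values), the bind mount would look roughly like this:

Code:
# First path = directory on the host, mp= path as seen inside the container
pct set 101 -mp0 /mnt/backup-ssd,mp=/mnt/datastore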
Are LXC containers like the Debian one meant to be upgraded like standard VMs, or should I set up a new LXC container when a new Debian version comes out, thus needing to reconfigure everything from scratch?
You can upgrade them using apt like you would do for a Debian VM.
I’m used to standard Docker containers, where config and other data are persisted in external volumes mounted into the container, and upgrading the OS is done by re-creating the container from an upgraded image; I’ve never used LXC containers.
Yeah, that's not how LXCs work. LXC containerizes a full OS, not just an application.
 
Thank you again for the tons of useful info you gave me. Now I just have to wait a few days for the NUC to arrive, and then I’m sure I’ll be back with some more questions.

Have a nice week, thank you very much!
 
Would you advise me to do a bind mount instead of directly mounting the device in the LXC? Is it better because that way I can also see the volume's content within Proxmox VE?

If I bind-mount the mountpoint of the SSD, will I see it within PBS as a block device that needs to be formatted, or how will I see it? Mounting the SSD on the PVE host can be done via the PVE GUI, going to Storage View -> Datacenter -> Disks -> Directory and there mounting an ext4 FS from the external disk to a folder like /mnt/bindmounts/backup-device, or do I need to create an LVM-Thin pool etc. on the disk?
 
Would you advise me to do a bind mount instead of directly mounting the device in the LXC? Is it better because that way I can also see the volume's content within Proxmox VE?

If I bind-mount the mountpoint of the SSD, will I see it within PBS as a block device that needs to be formatted, or how will I see it?
LXCs only work at file level, not block level. So you mount the filesystem on the host and then bring as many folders as you like from the host into the LXC by bind-mounting them.
And in case you choose an unprivileged LXC, you might also need to edit the user remapping, as described here: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
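If you don't want to craft custom idmap entries, a simple route with the default unprivileged mapping (container UIDs/GIDs shifted by 100000) is to hand the directory to the container's root user; the path is an example:

Code:
# Container root maps to host UID/GID 100000 by default
chown -R 100000:100000 /mnt/backup-ssd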
If I bind-mount the mountpoint of the SSD, will I see it within PBS as a block device that needs to be formatted, or how will I see it? Mounting the SSD on the PVE host can be done via the PVE GUI, going to Storage View -> Datacenter -> Disks -> Directory and there mounting an ext4 FS from the external disk to a folder like /mnt/bindmounts/backup-device, or do I need to create an LVM-Thin pool etc. on the disk?
In case you don't want to manually partition, format and mount it using the CLI, that's an option. But the directory storage itself isn't needed if you just want an ext4 filesystem for your LXCs. I usually format it manually and then add an entry to fstab so it gets mounted at boot.
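Roughly like this, assuming the SSD shows up as /dev/sda (check with lsblk first; paths are examples):

Code:
sgdisk -N1 /dev/sda            # one partition spanning the whole disk
mkfs.ext4 /dev/sda1
mkdir -p /mnt/backup-ssd
blkid /dev/sda1                # note the filesystem UUID
echo "UUID=<uuid> /mnt/backup-ssd ext4 defaults 0 2" >> /etc/fstab
mount /mnt/backup-ssd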
 
I’ll ask you an additional question, a bit OT, just to avoid posting something maybe already asked by others. I’ve seen in the guide that you can specify the maximum and minimum amount of RAM to assign to VMs, as in the attached screenshot: from the GUI I can set max RAM, min RAM and the balloon flag. Looking at the configuration file syntax page, I’ve found two settings, one of which is called balloon, but it’s not a flag; it is an amount of memory. Does it correspond to the minimum RAM of the GUI, or how do the three GUI parameters translate to the config file syntax?

Code:
memory: <integer> (16 - N) (default = 512)
Amount of RAM for the VM in MB. This is the maximum available memory when you use the balloon device.

balloon: <integer> (0 - N)
Amount of target RAM for the VM in MB. Using zero disables the balloon device.

I’m looking at the config file parameters since I’ll configure the VMs with Terraform, and if possible I’d like to set the minimum amount of RAM, to avoid allocating a fixed amount to all my Kubernetes cluster VMs, which will maybe use far less than what's allocated (I’m thinking about a minimum of 1GB and a max of 2 or 4GB of RAM).
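For reference, if I’ve read the docs right, the GUI's maximum should map to memory: and the minimum to balloon:, so the CLI equivalent of what I’m after would be roughly this (the VM ID is just an example):

Code:
# Max 4GB, with the balloon target allowing it to shrink to 1GB
qm set 101 --memory 4096 --balloon 1024
# balloon: 0 would disable the balloon device entirely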
 
