HDSIZE of NVMe system disk during PVE installation with ZFS

They aren't mirrored. These are single partitions that are kept in sync by proxmox-boot-tool. If you ever need to replace a disk, you need to clone the partition table from the healthy disk to the new disk, tell proxmox-boot-tool to sync over the bootloader, and only then tell ZFS to replace the failed ZFS partition. It's explained in the wiki. But as both disks contain the same bootloaders, your server should still be able to boot if either of the disks fails.
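For reference, the replacement procedure described above might look roughly like this (untested sketch with placeholder device names, assuming nvme0n1 is the healthy disk and the replacement shows up as nvme1n1; follow the wiki for the exact steps):
Code:
# clone the partition table from the healthy disk to the new disk
sgdisk /dev/nvme0n1 -R /dev/nvme1n1
# give the new disk its own random GUIDs
sgdisk -G /dev/nvme1n1
# write and sync the bootloader to the new ESP (partition 2)
proxmox-boot-tool format /dev/nvme1n1p2
proxmox-boot-tool init /dev/nvme1n1p2
# only then let ZFS resilver onto the new ZFS partition (partition 3);
# <old-part> is the failed device as shown by "zpool status rpool"
zpool replace -f rpool <old-part> /dev/nvme1n1p3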
If I just remove one SSD from my server, will my PVE boot automatically?

Linux doesn't require swap, but it's useful to have a little bit to prevent processes from getting killed by the OOM killer in case you run out of RAM. I usually set the swappiness very low so swap is really only used to prevent OOM, not to free up RAM in normal operation, so the SSDs live longer. If you only have 16GB of RAM, 2GB of swap would be totally sufficient for that.
If I increase RAM to 64GB, what swap size will be right?

So you can use fdisk or parted to create one (or two, one on each disk) yourself and then add it as a swap partition using fstab.
How do I use both disks for swap? Create a swap partition on each disk and add these two swap partitions to the system?
 
If I just remove one SSD from my server, will my PVE boot automatically?
If you set both SSDs in BIOS/UEFI as first+second boot priority, then yes.
If I increase RAM to 64GB, what swap size will be right?
That really depends on what you want to use your swap for. I would use 4 or 8GB swap for that.
How do I use both disks for swap? Create a swap partition on each disk and add these two swap partitions to the system?
Yep. I would put 4GB of swap on each disk so your system has 8GB.
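A rough sketch of how that could look (assuming the last ~4GB of each disk is still unallocated and the new partitions end up as p4; adjust device names, sizes and UUIDs to your layout):
Code:
# create a swap partition in the free space at the end of each disk
parted /dev/nvme0n1 -- mkpart swap linux-swap -4GiB 100%
parted /dev/nvme1n1 -- mkpart swap linux-swap -4GiB 100%
# initialize and enable them
mkswap /dev/nvme0n1p4
mkswap /dev/nvme1n1p4
swapon /dev/nvme0n1p4 /dev/nvme1n1p4
# make them persistent via /etc/fstab, using the UUIDs printed by mkswap
echo 'UUID=<uuid-of-nvme0n1p4> none swap sw 0 0' >> /etc/fstab
echo 'UUID=<uuid-of-nvme1n1p4> none swap sw 0 0' >> /etc/fstab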
 
Yep. I would put 4GB of swap on each disk so your system has 8GB.
1) Which is better: to make two 4GB swap partitions, one on each disk, or one 8GB swap partition on a single disk?
2) As I understand it, an 8GB swap would now be too big for my 16GB of RAM?

That really depends on what you want to use your swap for.
To prevent processes getting killed by the OOM killer in case of running out of RAM.
 
1) Which is better: to make two 4GB swap partitions, one on each disk, or one 8GB swap partition on a single disk?
I would put it on both disks so you always have some swap even if one disk isn't working.
2) As I understand it, an 8GB swap would now be too big for my 16GB of RAM?
There is no "too big". It's just wasted space if you assign more swap than you actually need. But unpartitioned space is wasted anyway, so you can also take all the space that's left unallocated on the disks and use it for swapping.
 
There is no "too big". It's just wasted space if you assign more swap than you actually need. But unpartitioned space is wasted anyway, so you can also take all the space that's left unallocated on the disks and use it for swapping.

I have read in the Proxmox manual ("Installing Proxmox VE / Advanced LVM Configuration Options"):
swapsize
Defines the size of the swap volume. The default is the size of the installed memory, minimum 4 GB and maximum 8 GB. The resulting value cannot be greater than hdsize/8.

I left about 8GB of unpartitioned space on the disks.
What's the best way to set up swap:
1) allocate 8 GB for swap on each disk, so the total swap will be 16 GB (which is more than the 8 GB recommended in the manual),
or
2) allocate 4 GB for swap on each disk, so the total swap will be 8 GB?
 
Like I already said, it's about not wasting space, not that the system would run worse with more swap. 8+8GB of swap will be fine. My PVE host has 64GB of swap and I have never seen it use more than 600MB of it.
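If you want to check how much swap is actually in use, something like this works:
Code:
# show RAM and swap usage
free -h
# show each active swap device and how much of it is used
swapon --show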
 
Like I already said, it's about not wasting space, not that the system would run worse with more swap. 8+8GB of swap will be fine. My PVE host has 64GB of swap and I have never seen it use more than 600MB of it.
Does it make sense to set the parameter vm.swappiness to less than 60?
For example,
vm.swappiness=10?
 
ZFS pools shouldn't be filled more than 80%, because the further you go over 80% the slower your pool will get, until it finally switches into panic mode at 90%, where it gets even slower, until the pool finally fails. This is because ZFS uses copy-on-write and therefore always needs a lot of empty space to operate.
I calculated with your 238GB. If your ZFS partition is only 219GB and 20% should be kept free, you only have 175GB for actual data. So if you want to reserve 20GB for PVE + ISOs/templates (here PVE uses 10GB right now without any ISOs or templates), there would be 155GB left for guests.

I see a strange thing: the reported disk sizes have decreased.

Now
Code:
root@pve:~# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev              7.8G     0  7.8G   0% /dev
tmpfs             1.6G  1.3M  1.6G   1% /run
rpool/ROOT/pve-1  212G  9.2G  203G   5% /
tmpfs             7.8G   46M  7.8G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
rpool             203G  128K  203G   1% /rpool
rpool/ROOT        203G  128K  203G   1% /rpool/ROOT
rpool/data        203G  128K  203G   1% /rpool/data
/dev/fuse         128M   16K  128M   1% /etc/pve
tmpfs             1.6G     0  1.6G   0% /run/user/0

two days ago
Code:
root@pve:~# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev              7.8G     0  7.8G   0% /dev
tmpfs             1.6G  1.3M  1.6G   1% /run
rpool/ROOT/pve-1  221G  2.8G  219G   2% /
tmpfs             7.8G   49M  7.8G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
rpool             219G  128K  219G   1% /rpool
rpool/ROOT        219G  128K  219G   1% /rpool/ROOT
rpool/data        219G  128K  219G   1% /rpool/data
/dev/fuse         128M   16K  128M   1% /etc/pve
tmpfs             1.6G     0  1.6G   0% /run/user/0

Why?

I have not set quotas yet.
 
Does it make sense to set the parameter vm.swappiness to less than 60?
For example,
vm.swappiness=10?
The lower you set it, the less the swap will be used. If you just want it as protection against the OOM killer, set it to 0 or 1.
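For example, to set it right away and make it persistent across reboots (the file name is just a suggestion):
Code:
# apply immediately
sysctl vm.swappiness=1
# persist the setting
echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf
sysctl --system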
I see a strange thing: the reported disk sizes have decreased.

Now
Code:
root@pve:~# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev              7.8G     0  7.8G   0% /dev
tmpfs             1.6G  1.3M  1.6G   1% /run
rpool/ROOT/pve-1  212G  9.2G  203G   5% /
tmpfs             7.8G   46M  7.8G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
rpool             203G  128K  203G   1% /rpool
rpool/ROOT        203G  128K  203G   1% /rpool/ROOT
rpool/data        203G  128K  203G   1% /rpool/data
/dev/fuse         128M   16K  128M   1% /etc/pve
tmpfs             1.6G     0  1.6G   0% /run/user/0

two days ago
Code:
root@pve:~# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev              7.8G     0  7.8G   0% /dev
tmpfs             1.6G  1.3M  1.6G   1% /run
rpool/ROOT/pve-1  221G  2.8G  219G   2% /
tmpfs             7.8G   49M  7.8G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
rpool             219G  128K  219G   1% /rpool
rpool/ROOT        219G  128K  219G   1% /rpool/ROOT
rpool/data        219G  128K  219G   1% /rpool/data
/dev/fuse         128M   16K  128M   1% /etc/pve
tmpfs             1.6G     0  1.6G   0% /run/user/0

Why?

I have not set quotas yet.
It's not that useful to use df -h with ZFS, as df can only show filesystems and your ZFS pool only partially consists of filesystems. If you want to see the usage of your pool, you should use ZFS commands like zfs list, which also take zvol block storages into account.
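For example (commands only, the output depends on your pool):
Code:
# overall pool size, allocation and health
zpool list rpool
zpool status rpool
# per-dataset usage, including the zvols that back your VM disks
zfs list -r rpool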
 
ZFS pools shouldn't be filled more than 80%, because the further you go over 80% the slower your pool will get, until it finally switches into panic mode at 90%, where it gets even slower, until the pool finally fails. This is because ZFS uses copy-on-write and therefore always needs a lot of empty space to operate.
I calculated with your 238GB. If your ZFS partition is only 219GB and 20% should be kept free, you only have 175GB for actual data. So if you want to reserve 20GB for PVE + ISOs/templates (here PVE uses 10GB right now without any ISOs or templates), there would be 155GB left for guests.


Code:
root@pve:~# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0 238.5G  0 disk
├─nvme0n1p1 259:1    0  1007K  0 part
├─nvme0n1p2 259:2    0   512M  0 part
└─nvme0n1p3 259:3    0 229.5G  0 part
nvme1n1     259:4    0 238.5G  0 disk
├─nvme1n1p1 259:5    0  1007K  0 part
├─nvme1n1p2 259:6    0   512M  0 part
└─nvme1n1p3 259:7    0 229.5G  0 part

Total nvme0n1 is 238.5GB. nvme0n1p1 is 1007K (about 1MB), nvme0n1p2 is 0.5GB, and nvme0n1p3 is 229.5GB.

Where did the remaining 238.5 - 0.5 - 229.5 ≈ 8.5GB go?
 
I would guess it's unallocated space, which should show up if you run fdisk -l /dev/nvme0n1.
 
Code:
root@pve:~# fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 238.47 GiB, 256060514304 bytes, 500118192 sectors
Disk model: NE-256                                 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F73F4ECD-7205-47DA-A77F-9E91E960F91F

Device             Start       End   Sectors   Size Type
/dev/nvme0n1p1        34      2047      2014  1007K BIOS boot
/dev/nvme0n1p2      2048   1050623   1048576   512M EFI System
/dev/nvme0n1p3   1050624 482344960 481294337 229.5G Solaris /usr & Apple ZFS
/dev/nvme0n1p4 482347008 500118158  17771151   8.5G Linux swap

/dev/nvme0n1p4 is the swap partition I already created in the unallocated space.
 
ZFS pools shouldn't be filled more than 80%, because the further you go over 80% the slower your pool will get, until it finally switches into panic mode at 90%, where it gets even slower, until the pool finally fails. This is because ZFS uses copy-on-write and therefore always needs a lot of empty space to operate.
I calculated with your 238GB. If your ZFS partition is only 219GB and 20% should be kept free, you only have 175GB for actual data. So if you want to reserve 20GB for PVE + ISOs/templates (here PVE uses 10GB right now without any ISOs or templates), there would be 155GB left for guests.
Where is the default "local" storage situated? On rpool/ROOT?
The backups of VMs go to the "local" storage.
I think 20GB is not enough for that.
 
Where is the default "local" storage situated? On rpool/ROOT?
The backups of VMs go to the "local" storage.
I think 20GB is not enough for that.
Yep, but it makes no sense at all to store backups on these two SSDs, because if your pool died you would lose the backups together with the VMs/LXCs, so the backups would be useless. Therefore backups should always be stored on dedicated internal/external disks that don't hold any guests, or even better on a NAS or a remote PBS server.
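For example, a dedicated backup target could be added roughly like this (storage names, paths and the NAS address are made up):
Code:
# a directory storage on a separate internal/external disk mounted at /mnt/backup
pvesm add dir backup-disk --path /mnt/backup --content backup
# or an NFS share exported by a NAS
pvesm add nfs backup-nas --server 192.168.1.50 --export /export/backups --content backup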
 
Yep, but it makes no sense at all to store backups on these two SSDs, because if your pool died you would lose the backups together with the VMs/LXCs, so the backups would be useless. Therefore backups should always be stored on dedicated internal/external disks that don't hold any guests, or even better on a NAS or a remote PBS server.
So maybe it is better not to set quotas for rpool/data and rpool/ROOT separately, but to set only one quota for the whole rpool instead (at 80% of the ZFS disk capacity)?
 
That would also be fine. Just keep in mind that without a quota on rpool/data nothing will prevent VMs from using all the available space, so your root might end up without any free space, forcing PVE to switch to read-only mode and making PVE unusable. So it's not a bad idea to at least also set a quota for the "data" dataset.
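Based on the numbers above, that could look something like this (the sizes are just the earlier estimates):
Code:
# cap the whole pool at roughly 80% of the 219GB partition
zfs set quota=175G rpool
# additionally cap the guest storage so the root dataset always keeps some room
zfs set quota=155G rpool/data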
 
I have to set up data storage for media files. The total data size will grow, and I will add new disks to the storage as necessary.

Right now I have only one physical server, so the disks for this storage will be plugged into the server running the PVE host.
The systems in the guest VM(s) need access to this storage.
I want to use NFS for it and will install an NFS client on the guest system(s).

I see two options for setting up an NFS server:
1) Use the PVE host system as the NFS server as well.
2) Make a new container and install the NFS server inside this container.

Which option looks better?
 
Both have advantages and disadvantages. I would prefer to bind-mount a folder from the host into an LXC and then run the NFS server inside the LXC to share it.
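A rough sketch of that setup (the dataset name, mount points and container ID 101 are made up; an NFS kernel server inside an LXC may also require a privileged container or extra container features):
Code:
# dataset on the host that will hold the media files
zfs create rpool/media
# bind-mount it into the LXC with ID 101
pct set 101 -mp0 /rpool/media,mp=/srv/media
# then install and configure the NFS server inside the container
pct exec 101 -- apt install -y nfs-kernel-server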
 
