Moving from ESXi to Proxmox - ironing out the kinks

xcj

Hi there,

I recently moved from an HP MicroServer Gen8 running ESXi, first to two Lenovo M900 Tiny PCs running ESXi and vCenter, and finally to the same two M900 Tinys running Proxmox in a cluster with replication and HA for some VMs (using a Raspberry Pi as a QDevice). This setup is a home lab, but it runs some services that are critical to me: pfSense and DNS.

Both M900s have identical hardware:
1× Core i7-6700T
24GB of RAM
1× 32GB USB 3.1 drive (operating at USB 3.0 speed) for Proxmox itself (boot drive)
1× 512GB SATA M.2 SSD (I tried NVMe, but they overheat in this computer; apparently it is a known issue)
1× 4TB 2.5" hard drive

Both the SSD and the HDD are running ZFS, with the pools named identically on both hosts.

I have a few issues to iron out and would like some opinions:

  1. Using a USB drive as the boot drive is common practice for ESXi, but I think I read that with Proxmox these USB drives will eventually "die" because the amount of writes performed is well above what USB drives are specced for - can you guys confirm this?
  2. Amount of RAM used: ESXi used about 900MB for itself, and the rest of the RAM in use roughly matched the sum of all the VMs' RAM. Proxmox, however, is using all the RAM all the time. Another user found this was due to the ZFS cache (https://forum.proxmox.com/threads/host-memory-usage.60124/); if I run the command mentioned in that post, my RAM usage drops to the amount the VMs use, but it grows back over time. Also, when I'm running backups, swap usage goes to 99% - is there any way to control the amount of RAM the ZFS cache can use? Are there real performance or reliability benefits to the ZFS cache? Can it be turned off / opted out of?
  3. IO delay rises to 30%, sometimes higher, when running backups to an NFS share (may be related to #2) - how could I troubleshoot why this happens? I run the backups in the middle of the night, but when I happen to be awake while they run, you can really feel the performance impact on the VMs.
  4. In ESXi I used Veeam Backup for the VMs; it did incremental backups that kept the VM backups current and used very little space beyond what the VMs themselves used. Proxmox backups make a full image of the VM's used disk space every time; I can only control how many backups I want to keep for each VM - is there any incremental backup solution that doesn't cost money?
  5. I am trying to add USB 2.5G Ethernet adapters, and when I connect them my syslog gets flooded (I'm asking for help here: https://forum.proxmox.com/threads/usb-ethernet-2-5gbit-problem.92545/) - any further input on that thread would be appreciated. Fixed, all details in the linked thread!
The list of issues may sound like complaining, but it isn't; these are just the things that need fixing. In general I am liking Proxmox a lot, and it already does much more than VMware can do without additional hardware and cost.
 
Using a USB drive as the boot drive is common practice for ESXi, but I think I read that with Proxmox these USB drives will eventually "die" because the amount of writes performed is well above what USB drives are specced for - can you guys confirm this?
Yes, normal USB flash drives will not live long, because they are used as a regular drive, not just as a disk to read the system from during boot. Alternatively, you can simply install Proxmox VE on the internal drive itself and also store the VMs there. If you select ZFS for it (as you did), you keep full flexibility in how the free space is used, since within a ZFS pool all datasets share the same free space.
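You can see that shared free space directly; with no quotas or reservations set, the AVAIL column that zfs list reports is the same pool-wide free space for every dataset:

Code:
# List all datasets with their usage; AVAIL is the shared pool-wide free space
zfs list -o name,used,avail,mountpoint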

If you do want to keep the OS install separate, consider getting an external SSD or HDD instead of a flash drive.

Amount of RAM used: ESXi used about 900MB for itself, and the rest of the RAM in use roughly matched the sum of all the VMs' RAM. Proxmox, however, is using all the RAM all the time. Another user found this was due to the ZFS cache (https://forum.proxmox.com/threads/host-memory-usage.60124/); if I run the command mentioned in that post, my RAM usage drops to the amount the VMs use, but it grows back over time. Also, when I'm running backups, swap usage goes to 99% - is there any way to control the amount of RAM the ZFS cache can use? Are there real performance or reliability benefits to the ZFS cache? Can it be turned off / opted out of?
Do you actually run into problems? ZFS needs a bit of RAM for itself. If there is free RAM, it will by itself use UP TO 50% of the total system RAM for its read cache (the ARC). It will also free up that RAM when something else needs it. Now, if you do run into issues, e.g. a VM will not boot due to a lack of RAM, it could be that ZFS is not releasing the RAM fast enough; in such situations you can consider limiting ZFS's RAM usage. Otherwise, using RAM that nothing else needs as a read cache is a perfectly fine use case.

To check the current size of the ZFS cache (ARC) you can run arcstat.
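If you do decide to cap it, a minimal sketch (the 8GiB limit below is only an example; size it to your workload):

Code:
# Show current ARC size and hit statistics
arcstat

# Limit the ARC at runtime (8 GiB = 8589934592 bytes)
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

# Make the limit persistent across reboots
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u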
IO delay rises to 30%, sometimes higher, when running backups to an NFS share (may be related to #2) - how could I troubleshoot why this happens? I run the backups in the middle of the night, but when I happen to be awake while they run, you can really feel the performance impact on the VMs.
How fast is the NFS share?
In order to back up a consistent state of the VM disk, the backup starts at the beginning of the disk, but it intercepts every write operation by the VM and backs up the affected block (out of order) first, before it lets the write operation through to disk. This definitely has an impact on performance, and the impact depends to some degree on how fast the backup target can be written to.
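If the backup is starving the VMs of I/O, one knob worth trying is vzdump's bandwidth limit; a sketch (the 51200 KiB/s, roughly 50MiB/s, is just an example value):

Code:
# /etc/vzdump.conf - default bandwidth limit for backup jobs, in KiB/s
bwlimit: 51200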
In ESXi I used Veeam Backup for the VMs; it did incremental backups that kept the VM backups current and used very little space beyond what the VMs themselves used. Proxmox backups make a full image of the VM's used disk space every time; I can only control how many backups I want to keep for each VM - is there any incremental backup solution that doesn't cost money?
Take a look at the Proxmox Backup Server (PBS), which stores backups in a deduplicated manner. If the VM hasn't been shut down between backups, it will only back up the changed parts of the disks and reuse the already existing chunks for the untouched areas. Thus you get fast incremental backups, while each backup on the PBS is a full backup that references the chunks needed for a full restore. Multiple backups of the same or even different VMs can reference the same chunk, resulting in a good deduplication rate.
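Once a PBS instance is up, attaching its datastore to PVE as a storage is a one-liner; all the values below are placeholders for your own setup:

Code:
# Attach a PBS datastore as a PVE storage (server, datastore, credentials
# and fingerprint are placeholders)
pvesm add pbs pbs-backup --server pbs.example.lan --datastore store1 \
  --username backup@pbs --password 'secret' --fingerprint '<server cert fingerprint>'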
 
Hi, thanks for all the input, and sorry for the lack of reply on my end!

I've been focused on #5, which is now fixed!


#1 Regarding the USB system drives: from what I understood, I can't back up the host system itself, just the VMs, right?

Is there any way to image the USB drive and scp the file to an NFS share from a cronjob? If the USB drive dies, I would like to replace it with a new one without having to reinstall and redo everything. Something like the sketch below is what I have in mind.
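A rough sketch, assuming the boot stick shows up as /dev/sdX (check with lsblk first!) and the NFS share is mounted at /mnt/nfs:

Code:
#!/bin/sh
# Image the boot stick to the NFS share, compressed and dated.
# Note: imaging a running system is not crash-consistent, but a boot
# stick that mostly sees reads should come out usable.
dd if=/dev/sdX bs=4M status=none | gzip > /mnt/nfs/pve-boot-$(hostname)-$(date +%F).img.gz

dropped into /etc/cron.weekly or run from a crontab entry.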

#2 For now I will leave zfs_arc_max as is. I've been thinking about adding more RAM to both nodes, and after I do that I will re-evaluate.

#3 The current NFS share is on an old D-Link NAS; it has a Gigabit NIC and a single mechanical hard drive. It can do about 70 to 80MB/s.
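A quick way to reproduce that kind of measurement (path and sizes are just examples):

Code:
# Rough sequential write test; oflag=direct avoids measuring the page cache
dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024 oflag=direct
rm /mnt/nfs/testfile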

#4 I will try Proxmox Backup Server running in a VM on the cluster.

#5 Fixed!
 
