Proxmox newbie: Migration from ESXi

starbetrayer

New Member
Oct 18, 2025
Dear all,

I am writing this post to ask for input from the community about my next step, migrating my current server from ESXi to Proxmox, and especially regarding hard drives. The main reason I am migrating is that I cannot pass through my NVIDIA GeForce GTX 1080 to a Windows 11 VM in ESXi; it just crashes.

I spun up a Proxmox VM on my current ESXi host and have been playing with it to understand the new system.

My motherboard is an Asrock B550M Pro4.

Based on everything that I read, you need a dedicated drive, preferably an NVMe drive, for the Proxmox system. You also need to disable the cluster services so that they don't hammer the NVMe drive where the Proxmox OS resides.

What are the recommendations for the Proxmox NVMe drive? Would a 500 GB Samsung 970 EVO be sufficient (paying attention to the TBW)? Is 300 TBW enough, or does the value need to be higher?
What other recommendations would you have for the NVMe drive? Other brands?
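For context, here is my rough back-of-the-envelope endurance math (the 50 GB/day write rate is just an assumption on my part, not a measurement):

```shell
# Rough endurance estimate: how long does a 300 TBW rating last
# at an assumed average write rate? 50 GB/day is a guess; adjust it
# for your actual workload.
tbw_gb=$((300 * 1000))        # 300 TBW expressed in GB
daily_gb=50                   # assumed average host writes per day
days=$((tbw_gb / daily_gb))
years=$((days / 365))
echo "${days} days (~${years} years)"
```

At a steady 50 GB/day this works out to 6000 days, roughly 16 years, but I understand ZFS and cluster-service write amplification can raise the real daily figure considerably.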
Concerning the file system for the Proxmox drive, is Btrfs OK or is ZFS a must?

I have an existing Samsung SATA 870 EVO, and I was thinking about buying a second one so that I can create a mirrored ZFS pool, as LVM-thin.
My understanding is that the recommendation is to have the VMs and ISOs stored on that pool.

Am I correct in saying this?

I also read that people are using spinners for backups.
What are the best-practice recommendations here?
How are the backups managed in Proxmox?

I saw that there is a tool in Proxmox that connects to ESXi to import the VMs directly.
Can the VMs be put temporarily on the NVMe drive of the Proxmox system and then moved to the LVM-thin pool?

I am looking for recommendations on how to proceed.
 
Based on everything that I read, you need a dedicated drive, preferably an NVMe drive, for the Proxmox system. You also need to disable the cluster services so that they don't hammer the NVMe drive where the Proxmox OS resides.
No, that's the wrong workaround for a well-known symptom. The correct (and possibly more expensive) solution is to use enterprise-class devices with PLP (power-loss protection) for ZFS. (And probably for all CoW filesystems.)

PLP will give you two things: endurance and performance in terms of IOPS for "sync writes".

Concerning the file system for the Proxmox drive, is Btrfs OK or is ZFS a must?
I can't answer that; I am using ZFS exclusively.

buying a second one so that I can create a mirrored ZFS pool, as LVM-thin.
"LVM-thin" is a different technology; it has nothing to do with ZFS. You probably mean "thin provisioned", which is the default for ZFS.

My understanding is that the recommendation is to have the VMs and ISOs stored on that pool.
You can have as many ZFS pools as you want. Usually the limitation in a home lab is the number of devices that can be physically connected. The recommendation is still to separate the OS from VM storage from user data. But it is absolutely fine to have only one single ZFS pool and to put everything into it. It works fine for me on mini PCs with only two NVMe drives or two SSDs. When I have two NVMe drives and two SSDs, I do separate things.

I also read that people are using spinners for backups.
Yes, and it is not recommended. But because of the cost of large solid-state devices, "everybody" does it.

What are the best-practice recommendations here?
Use PBS. My systems use ZFS only. Rotating rust should be organized in (multiple) mirrors, and, in my personal experience, it requires some supporting SSDs in the form of a (mirrored) "special device".

How are the backups managed in Proxmox?
Automatically, per "Backup Job", of course. What is your question?
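For reference, a backup job created in the GUI (Datacenter -> Backup) ends up in /etc/pve/jobs.cfg as something roughly like the following; the job id, storage name and schedule here are made up:

```
vzdump: backup-nightly
	schedule 02:00
	storage tank-backup
	mode snapshot
	compress zstd
	enabled 1
	all 1
```

With PBS as the storage target you get deduplicated, incremental backups on top of this; with a plain directory storage, each run produces a full archive.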

Have fun :-)