Proxmox 8.1 - Weird restore actions

Trumanp014

New Member
Apr 26, 2023
OK, here's something that is boggling my mind (not that that's hard to do).

So recently my VMs started getting really slow and laggy.

I am running a dual Xeon E5-2698 v3 server with 256GB of RAM; it's a Huawei server with a built-in Avago RAID controller. I configured the boot drive in hardware as a mirrored pair of 500GB Samsung SSDs, and then had four cheapo 2TB SSDs in a ZFS RAIDZ array. It ran really well for about 6 months that way until I noticed the latency.

zpool status showed everything OK. I had backups, so I just wiped the pool and set it back up. Now when I restore a backup to the pool, or even to a single-drive setup, it appears to copy the VM image to the drive, but it doesn't restore the config file to the /etc/pve/qemu-server folder properly; I just get a kind of minimal config file. When I check the config in the backup before restoring, it looks like it should.
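
Roughly what I'm doing from the CLI, in case it helps anyone reproduce it (the archive name, VMID 100, and the storage name "local-zfs" are just placeholders, not my exact setup):

Code:
# restore a vzdump backup from the CLI onto a local ZFS storage
# (archive name, VMID 100 and storage "local-zfs" are placeholders)
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst 100 --storage local-zfs
# then look at what actually landed in the cluster filesystem
cat /etc/pve/qemu-server/100.conf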

Now here is where my mind goes crazy.

If I restore the VM to an NFS share on my TrueNAS instead, the config file is restored properly and the VM boots fine.

The system boot drive is on the same controller, and I even reinstalled Proxmox to see if there was something there. I also tried different drives: a new Samsung 870 that I stuck into a bay I had never used before, and a 1TB WD HDD that had been running in another system.

At this point I tend to think hardware, but I can read/write to the drives on the controller and the system itself runs rock solid. General drive tests come back normal too.
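
For reference, these are the kinds of checks I ran (the device and pool names are just examples, not my exact ones):

Code:
# SMART health on each member disk (needs smartmontools; /dev/sda is a placeholder)
smartctl -a /dev/sda
# scrub the pool and look for read/write/checksum errors ("tank" is a placeholder)
zpool scrub tank
zpool status -v tank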

So has anyone else ever had this happen? I'm tempted to buy a new RAID controller, as that's the only thing I can think of at this point, but I thought I would throw this out there and see if anyone else has had this problem.
 
four cheapo 2TB SSDs in a ZFS RAIDZ array
Really bad idea.
RAIDZ is pretty terrible for storing VMs. You either have to increase the volblocksize, which makes running databases suck, or the padding overhead will waste tons of space. It also won't help with IOPS, since IOPS scales with the number of vdevs, not the number of disks.
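
If you want to see what you're actually getting, something along these lines (the dataset, storage, and disk names are placeholders for whatever yours are called):

Code:
# show the volblocksize a VM zvol was created with (dataset name is a placeholder)
zfs get volblocksize rpool/data/vm-100-disk-0
# the block size new zvols get is a per-storage setting in Proxmox
pvesm set local-zfs --blocksize 16k
# for VM workloads, striped mirrors add IOPS with every mirror pair
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

Two mirror vdevs give you roughly twice the random IOPS of a single RAIDZ vdev, at the cost of capacity.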
And consumer SSDs should be avoided with ZFS. They can't handle the load and wear well.

And don't run ZFS on top of RAID controllers: https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Hardware.html#hardware-raid-controllers
 
While I appreciate the insights, I had the 4 SSDs running in JBOD mode; the controller I have will do mixed modes.

It still doesn't explain why restores work fine when I restore to an NFS TrueNAS share (over a 10Gb connection), yet when I try to restore to any local drive, whether it's a RAIDZ pool or a single-drive configuration, it bombs when writing the conf file for the new location.
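
Next things I'm planning to check on the local side, for anyone following along (the storage names here are just examples, not necessarily what mine are called):

Code:
# confirm every storage Proxmox knows about is actually online
pvesm status
# /etc/pve lives on pmxcfs, not the boot disk, so check it's mounted and writable
df -h /etc/pve
# watch the cluster filesystem service while a local restore runs,
# to see if anything is logged when the conf file should be written
journalctl -u pve-cluster -f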
 
