Proxmox Backup/Migration mit Veeam

Thanks for the reply, 6equj5!

From my experience, I can tell you that the Veeam 12.3.2.x + Proxmox 8.4.x combination works great, while Veeam 12.3.2.x with Proxmox 9.1.x, on the other hand, sucks.
Basically, it's the worker that doesn't perform well: during migration it sets the machine version to the highest value Proxmox will accept by default, and that is a problem.
So, with both Linux and Windows VMs, I found many issues were resolved by using a Proxmox 8.4.x target (where the maximum machine version is 9.2+pve1).
Unfortunately, Veeam has discontinued support for the Proxmox plug-in on version 12; the downloadable plugin is only compatible with version 13. A real shame. Being able to modify the final configuration file before performing the restore would be enough to solve everything...
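On the Proxmox side there is at least a manual workaround: the machine version of a restored VM can be pinned before first boot, so the guest doesn't come up on a newer machine type than it can handle. A rough sketch, assuming VMID 100 and a q35 guest (the VMID and the exact version string are illustrative, not taken from this thread):

```shell
# Illustrative only: pin the QEMU machine version of VM 100 to a
# Proxmox 8.4-era type instead of the newest the host supports.
qm set 100 --machine pc-q35-9.2

# Equivalently, edit /etc/pve/qemu-server/100.conf and set:
#   machine: pc-q35-9.2
```

This has to be done after the restore finishes but before the VM is started, which is exactly the manual step the plugin currently gives you no hook for.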
I also had a setback: I took a snapshot with Veeam to make changes to a Windows Server VM, but when I went to consolidate it a few days later, it was no longer possible. I had to delete the VM and restore from a backup that, fortunately, I had made with PBS (which works very well).
I like Veeam B&R; it's robust, even if it's gigantic and requires a lot of resources, and it lets you selectively recover files. But it's too much of a hassle.
To be honest, over Christmas I ran a DR test with PBS on the same infrastructure: I shut down the server, used a spare server, and restored EVERYTHING from the last backup within a day (it always depends on the size of the VMs).
I'm doing the same thing for a customer with ESXi 7 and Veeam 12.3.2. Only with a proxy configured on the new server was it fast enough; otherwise it would take forever.
 
Now someone just has to tell an old geezer like me how to close this post as solved. I seem to be blind to the obvious here. :-)
Edit the thread title at the top and select "solved".
 
The cherry on top is that on a full VM restore, the Veeam plugin uses the number of sockets instead of the number of cores. This happened with Veeam v13.0.1 and Proxmox 9.1.1. Sloppiness cubed.
No, it usually takes over whatever was configured. With vSphere that's often the case, because they've already changed the way you configure it about 13 times.
P.S.: With 12 vCPUs, that must be one big DB VM. It's worth checking during migration whether that many cores are really needed.
 
12 cores are necessary if a job has many processes and streams, or if multiple jobs run simultaneously, for example 3 VMs with multiple disks each backed up in one job. Each core/vCPU is assigned a worker thread; if you have few, the VMs are queued and processed as soon as a thread becomes available.
I prefer having multiple single-VM jobs, also for more streamlined space management on the destination. Otherwise very large files are created, and in the case of a restore more resources are needed, as is more repository space, especially on removable media (RDX cartridges, external USB drives).
It's probably difficult to build a worker VM whose default configuration meets every customer's needs. However, when you create the worker from the Veeam management console, it asks how many cores, how much RAM, and how many streams it must handle.