[SOLVED] PBS backup and restore of Windows 2019 VM

fernet

Member
Dec 12, 2019
The virtual machine in question runs Windows Server 2019 Standard with all VirtIO drivers and the QEMU guest agent installed. It consists of one 120 GB system disk, one 500 GB disk for the MSSQL data files, and one 500 GB disk for MSSQL database dumps. The VM was installed very recently, so it does not contain much data. A backup test with PBS completes without problems, but after deleting the virtual machine and restoring it from PBS, the two 500 GB disks come back corrupted; as mentioned, they contain very little data.
After running chkdsk /F from Windows, the two disks are re-read by the operating system, but they are empty.
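For reference, the workflow I am testing looks roughly like this from the PVE CLI; the VM ID 100, the storage names pbs-store and ceph-pool, the backup timestamp and the guest drive letters are only placeholders for my setup:

# back up the VM to the PBS datastore
vzdump 100 --storage pbs-store --mode snapshot

# after deleting the VM, restore the backup onto the Ceph-backed storage
qmrestore pbs-store:backup/vm/100/2021-01-15T10:00:00Z 100 --storage ceph-pool

# inside the Windows guest, re-check the two data volumes
chkdsk /F D:
chkdsk /F E: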
 
To add some more details about the environment in which the problem occurs: it is a cluster of 7 Ceph nodes with a total of 42 Intel SSD OSDs, and the problem seems to occur only when the restore targets the Ceph datastore.
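In case it helps to narrow this down, this is roughly how I checked the pool and the restored RBD images from one of the nodes (the pool name and VM ID are again placeholders):

# overall cluster health and OSD layout
ceph -s
ceph osd tree

# list the RBD images created by the restore and inspect one of them
rbd ls ceph-pool
rbd info ceph-pool/vm-100-disk-1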
 
I ran further tests and came to the conclusion that the problem lies in restoring the VM via PBS onto the Ceph datastore. I added a ZFS pool, restored the same VM to it, and everything was restored correctly.
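For completeness, the ZFS cross-check was along these lines (the spare device, pool and storage names are placeholders):

# create a ZFS pool on a spare disk and register it as a PVE storage
zpool create tank /dev/sdX
pvesm add zfspool zfs-test --pool tank

# restore the same PBS backup onto the ZFS storage instead of Ceph
qmrestore pbs-store:backup/vm/100/2021-01-15T10:00:00Z 100 --storage zfs-test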
 
Thanks for pointing that out. I have seen this behavior as well: in rare cases the OS fails to boot if restored to Ceph, but works when restored to a local disk. PVE can then move the disk to Ceph with 'Move disk' and the VM still works, which I use as a workaround. I have no idea why it fails, but it is reproducible.
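As a sketch, the workaround boils down to restoring onto local storage first and then moving the disk to Ceph (IDs, disk and storage names are placeholders):

# restore to a local storage first
qmrestore pbs-store:backup/vm/100/2021-01-15T10:00:00Z 100 --storage local-lvm

# then move the disk onto the Ceph pool and drop the local copy
qm move_disk 100 scsi0 ceph-pool --delete 1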
 
Please send the output of:

> pveversion -v
 
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph: 14.2.16-pve1
ceph-fuse: 14.2.16-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-4
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-1
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-4
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 
Hi, after updating all the cluster nodes and repeating the backup and restore tests, everything works perfectly. A warm thank you for all the suggestions and for the valuable contribution of everyone working on the evolution of the Proxmox product.
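For anyone landing here with the same symptoms, the update that fixed it for me was roughly the usual per-node procedure:

# on each cluster node
apt update
apt dist-upgrade

# verify the installed versions afterwards
pveversion -v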
 
