Hello,
Now, with the latest update (which includes qemu-server), on a server formatted with XFS the backup file is created on the NAS, but I have a new error message:
Warning: unable to close filehandle GEN1566 properly: Input/output error at /usr/share/perl5/PVE/VZDump/QemuServer.pm line 598.
INFO: stopping...
Hello,
My problem concerns VMs stored on ZFS, or on XFS with raw disks, on a single server with no cluster. On my cluster with Ceph I have no problem with the new compression. On the two other servers with no cluster, zstd fails.
I just tried to back up a VM with the suspend option enabled and got the same error message:
ERROR: Backup of VM 400 failed - zstd --rsyncable --threads=1 failed - wrong exit status 1
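One way to narrow this down (my own suggestion, not something from this thread): run the exact zstd invocation vzdump uses on a small file, to see whether zstd itself fails or whether the error comes from the storage it writes to. Paths here are placeholders.

```shell
# Standalone check: compress a small file with the same flags vzdump uses.
# If this also exits non-zero, the problem is zstd or the target filesystem,
# not vzdump itself.
printf 'vzdump zstd check\n' > /tmp/zstd-check.txt
zstd --rsyncable --threads=1 -f -o /tmp/zstd-check.txt.zst /tmp/zstd-check.txt
echo "zstd exit status: $?"
```

If the standalone run succeeds, the next suspect would be the filesystem vzdump is writing the archive to.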
Hi,
I have the same problem: zstd --rsyncable --threads=1 failed - wrong exit status 1
It's a single server with:
pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.3...
Thanks for your answer, but I solved the problem. I reinitialised /etc/hosts and /etc/hostname following https://pve.proxmox.com/wiki/Renaming_a_PVE_node and after a reboot everything works normally.
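For anyone hitting the same missing-GUI problem: the key point in that wiki page is that /etc/hostname and /etc/hosts must agree, and the node's IP must resolve to its hostname. Roughly like this (the node name and IP below are made-up placeholders, not my real config):

```text
# /etc/hostname
pve1

# /etc/hosts -- the node's IP must resolve to its hostname
127.0.0.1 localhost.localdomain localhost
192.168.1.10 pve1.example.local pve1
```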
Hello,
I have a standalone Proxmox 6.1 server, and I installed the latest upgrade, but after reboot I have no web GUI.
pveversion -v gives:
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1...
I think I found something in the QEMU 4.0 documentation. I modified a cloned VM's conf file like this:
args: -rtc base=localtime,clock=vm,driftfix=none
And it seems to work normally.
I will keep you informed of further developments during the week.
Hello everybody.
Recently I migrated my production cluster from Proxmox 5 to 6.
My current version is:
pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
pve-kernel-5.0: 6.0-6
pve-kernel-helper: 6.0-6
pve-kernel-4.15: 5.4-6...
I made a test backup to the Proxmox local drive, and the results are nearly identical for Ceph-to-local vs Ceph-to-Rackstation. The average speed is 35 MB/s.
I think the problem comes from the Ceph storage, but I don't know where it is.
When I restore a vm to local and the backup...
The Proxmox servers are in production with many users running multiple KVM VMs; the only modification I could make easily was to put the Ceph LACP bond on its own VLAN, but vzdump is still slow.
We ran another test: restoring a VM to local-LVM and backing it up to the Rackstation, and the average speed...
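To separate the NAS/network write speed from the Ceph read speed, a crude sequential-write test of the backup target can help (my own suggestion; the TARGET path is an assumption, point it at the Rackstation mount, e.g. /mnt/pve/rackstation):

```shell
# Crude sequential-write benchmark of the backup target.
# TARGET defaults to /tmp here; set it to the NAS mount on a real host.
TARGET=${TARGET:-/tmp}
dd if=/dev/zero of="$TARGET/vzdump-speedtest" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TARGET/vzdump-speedtest"
```

If the target sustains far more than 35 MB/s here, the bottleneck is more likely the Ceph read side of the vzdump pipeline.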
Hello alexskysilk
First, my vzdump jobs are scheduled at night, when nobody is at work.
My vzdump.conf is untouched:
# vzdump default settings
#tmpdir: DIR
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: 0
#ionice: 1
#lockwait: MINUTES
#stopwait: MINUTES
#size: MB...
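For reference, the settings in /etc/vzdump.conf that most directly affect backup speed are bwlimit and ionice; uncommented they would look like this (illustrative values only, the commented defaults above are what I actually run):

```text
# /etc/vzdump.conf -- example overrides (illustrative, not my config)
bwlimit: 0        # KiB/s; 0 means unlimited
ionice: 7         # lowest best-effort I/O priority for the dump process
mode: snapshot
```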