Cure found: systemctl restart pvestatd
The storage icons, however, still show a '?' status.
root@flamme:~# systemctl status pvestatd
● pvestatd.service - PVE Status Daemon
Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled)
Active: active (running) since...
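If the icons stay on '?' even though pvestatd is running, it can help to compare with what the storage layer itself reports. A quick check, using the standard PVE storage CLI:

# list all configured storages with status, usage and active state
pvesm status

If pvesm status hangs or shows a storage as inactive, the '?' in the GUI often points at that storage rather than at the daemon.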
Thank you for your advice.
I tried to "migrate" by restoring from my Proxmox Backup Server. When I powered up the restored server, I got error messages from the /home XFS file system. I had similar problems when I "migrated" the server from ESXi to Proxmox.
My conclusion is that I must either...
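For the XFS errors themselves, a minimal repair sketch, assuming the /home file system lives on /dev/sdb1 inside the restored guest (a hypothetical device name; adjust to the real one):

# xfs_repair must run on an unmounted file system
umount /home
# check and repair the XFS metadata
xfs_repair /dev/sdb1
# remount and re-check the logs
mount /home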
I tried the command-line migrate with a reduced transfer rate, but with the same result:
qm migrate 10013 proxmox2 --bwlimit 50000 --migration_type insecure --targetstorage data2 --online yes --with-local-disks yes
...
drive-sata0: transferred 31.5 GiB of 100.0 GiB (31.52%) in 11m 4s...
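Note that --bwlimit is given in KiB/s, so 50000 is roughly 49 MiB/s. If an even slower copy is worth trying, the same command can be retried with a smaller value, e.g.:

qm migrate 10013 proxmox2 --bwlimit 20000 --migration_type insecure --targetstorage data2 --online yes --with-local-disks yes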
Hi
I get this kernel error during migration to another host. It stops the VM and aborts the migration.
I migrated the very same VM to this host a few days ago.
Any idea will be appreciated.
syslog:
Nov 14 10:30:21 proxmox3 kernel: [94205.987480] kvm[2288]: segfault at 68 ip...
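A kvm segfault during live migration can be caused by mismatched QEMU or kernel builds on the two nodes, so comparing versions is a reasonable first check. Run on both source and target:

# compare the QEMU and kernel package versions between the hosts
pveversion -v | grep -E 'qemu|kernel'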
It works like a charm. The only bump in the road was adding the existing storage to the new server, which had to be done manually in /etc/proxmox-backup/datastore.cfg.
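For reference, a minimal datastore.cfg entry of that kind, assuming a datastore named store1 at /mnt/datastore/store1 (name and path are placeholders):

datastore: store1
    path /mnt/datastore/store1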
The purpose is to be able to recover after pve1 with pbs1 has crashed.
Then I want to be able to restore on pve2 with pbs2.
Would that work? Can pbs2 recognize and understand the .lock and .chunk structure created by pbs1?
It works when using the built-in PVE backup, but I want to use the nice...
Is it possible for two or more PBS servers to share the same backup storage?
I have a PVE cluster with 2 servers and want to install PBS on both of these PVE servers, using the same NFS-mounted NAS as the backup store.
That way I want to establish a solution that tolerates a single failure.
Can two PBS share the...
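To make the question concrete, a sketch of the intended layout, assuming the NAS exports nas:/export/pbs and both PBS nodes mount it at /mnt/pbs-store (all names hypothetical):

# /etc/fstab entry on both PBS nodes
nas:/export/pbs  /mnt/pbs-store  nfs  defaults  0  0

Each node would then point a datastore at /mnt/pbs-store in its own /etc/proxmox-backup/datastore.cfg.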