We also have this problem; some changes we made had no effect:
virtio drivers installed from ISO 0.1.149 (hdd, network, balloon, qemu-guest-agent)
At the moment, every day our one Win2012R2 VM loses its network and has to be hard-stopped.
We also have a few other Windows Server and Win10/Win7 VMs with...
Yesterday I removed a node from our PVE/Ceph cluster, but I forgot something:
First I set all OSDs of this node to OUT and STOP. After that I shut the node down and removed it from the PVE cluster (pvecm delnode...).
But now the Ceph overview in the cluster still shows me the OSDs from the removed...
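What was missed here is removing the OSDs from the CRUSH map, the auth database, and the OSD map; setting them OUT and STOP is not enough. A possible cleanup, run once per leftover OSD (osd.12 is a made-up example ID):

```shell
# On Luminous and newer, a single command removes the OSD from the CRUSH map,
# deletes its auth key, and removes it from the OSD map:
ceph osd purge 12 --yes-i-really-mean-it

# Equivalent step-by-step variant on older releases:
ceph osd crush remove osd.12   # remove the OSD from the CRUSH map
ceph auth del osd.12           # delete its authentication key
ceph osd rm 12                 # remove it from the OSD map
```

After that the stale OSDs should disappear from the Ceph overview.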
Today I have the same issue:
I added the 4th node to our cluster and now the CephFS storage is unusable.
In syslog I see these mount errors:
pvestatd: A filesystem is already mounted on /mnt/pve/cephfs
pvestatd: mount error: exit code 16
All nodes became Ceph monitors and...
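In case it helps others: the "already mounted" message usually means a stale CephFS mount is still sitting on the mountpoint, so pvestatd cannot mount it again. A possible way to check and clear it (mountpoint path taken from the log lines above):

```shell
# Check what is currently mounted on the cephfs mountpoint
mount | grep /mnt/pve/cephfs

# Lazily unmount the stale mount so it can be remounted cleanly
umount -l /mnt/pve/cephfs

# pvestatd should remount it on its next cycle; to force it immediately:
systemctl restart pvestatd
```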
In my old clusters/servers the disks are on iSCSI-LVM storages.
For rbd import I have to mount the old storage in my new environment, right?
Is it possible to change the VM ID this way?
Do you have an example for me?
Thanks a lot
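As far as I understand it, no mounting is needed: qemu-img can read the LVM device directly and write into RBD, and the VM ID changes simply by naming the target image accordingly. A sketch with made-up volume group, pool, and VM IDs:

```shell
# Old disk: LV of VM 101 on the iSCSI-LVM storage (example names)
# New disk: RBD image for the new VM ID 201 in pool "rbd"
# -p shows progress, -O raw writes a raw image into the RBD target
qemu-img convert -p -O raw /dev/old-vg/vm-101-disk-0 rbd:rbd/vm-201-disk-0

# Then reference the new image in the config of VM 201, e.g.:
#   scsi0: <ceph-storage-name>:vm-201-disk-0
```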
I have a question about migrating:
We have to migrate about 200 VMs from our old PVE hosts/clusters to our new PVE/Ceph cluster.
Is it possible to copy the disk files via scp directly to the Ceph storage, and if so, how?
Thanks a lot
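Plain scp will not work here, since RBD is not a file-based storage; but the disk data can be streamed over ssh into rbd import without an intermediate file. A sketch with made-up host and volume names:

```shell
# Stream the raw LV from the old host straight into a new RBD image;
# "-" tells rbd import to read the image data from stdin.
ssh root@old-pve-host "dd if=/dev/old-vg/vm-100-disk-0 bs=4M" \
  | rbd import - rbd/vm-100-disk-0
```

With 200 VMs this is easy to loop over a list of LV names; qemu-img convert (see above answer style) is an alternative when the source is an image file rather than a block device.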
On our new 3-node cluster with a fresh Ceph installation we continuously get these messages in the Ceph log on all 3 nodes.
pveversion: pve-manager/5.4-4/97a96833 (running kernel: 4.15.18-12-pve)
The cluster contains these hosts:
pve-hp-01 (7 OSDs)
pve-hp-02 (7 OSDs)
pve-hp-03 (8 OSDs)...
I've also read the hint about RAM and/or HDD problems in the Proxmox forum.
And yes, I tried it on another host and got the same error there, so I've saved myself the memory testing.
Correct, the 3rd HDD is 2 TB and cannot be recovered. The backup drive is a local directory for Proxmox, but an...