Same story on 2 different PVE hosts with Windows Server 2016 Standard and Windows Server 2019 Standard
Windows Server 2016 Standard
root@pve-node2:~# cat /etc/pve/qemu-server/204.conf
agent: 1
boot: c
bootdisk: scsi0
cores: 12
cpu: host,flags=+pcid;+spec-ctrl;+pdpe1gb;+hv-tlbflush
ide2...
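For reference, the same CPU flag line can be applied from the CLI instead of editing the config file by hand (a sketch; VMID 204 is taken from the config above, adjust for your VM):

```shell
# Set CPU type "host" with the mitigation/paging/Hyper-V enlightenment flags
# shown in 204.conf; qm rewrites /etc/pve/qemu-server/204.conf for you
qm set 204 --cpu 'host,flags=+pcid;+spec-ctrl;+pdpe1gb;+hv-tlbflush'
```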
We have been seeing strange behavior with VLAN-tagged connections inside VMs when the ifupdown2 package is installed on the host. After removing the package from PVE and rebooting the host, everything started working as before (and as expected).
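For anyone hitting the same issue, this is roughly what we did (a sketch; note that removing ifupdown2 changes how /etc/network/interfaces is processed, so check your network config before rebooting):

```shell
# Check whether ifupdown2 is installed on the PVE host
dpkg -l ifupdown2

# Remove it and reboot so the host falls back to classic ifupdown
apt remove ifupdown2
systemctl reboot
```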
Shouldn't the following wiki tutorial be updated for Ceph 15.x? After upgrading the main and backup PVE/Ceph clusters to 6.3/15.2.6, mirroring stopped working.
https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring
Tomas, could you please check/confirm that with one-way mirroring the following command:
rbd mirror pool status rbd --verbose
gives normal output when run on the backup node:
root@pve-backup:~# rbd mirror pool status rbd
health: OK
images: 18 total
18 replaying
and a warning when run on the main cluster...
I had to lower the pool's size/min_size from 3/2 to 2/1 to get some "extra space". Any ideas why the journaling data is not wiped after being pulled by the backup node?
If I'm not mistaken, I set up one-way mirroring. How can I check that?
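One way to check the direction (a sketch, assuming the pool is named rbd as above): `rbd mirror pool info` lists the configured peers, and with one-way mirroring only the backup cluster should have the primary registered as a peer and an rbd-mirror daemon running.

```shell
# On each cluster: show the mirror mode and configured peers for the pool
rbd mirror pool info rbd

# With one-way mirroring, the rbd-mirror daemon runs only on the backup
# cluster; the instance name below is just an example, check your unit name
systemctl status ceph-rbd-mirror@rbd-mirror.service
```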
Thanks
After upgrading to PVE 6 and Ceph 14.2.4, I enabled pool mirroring to an independent node (following the PVE wiki).
Since then, my pool usage has been growing constantly, even though no changes are being made to the VM disks.
Could anybody help me figure out where my space is going?
Pool usage size is going to...
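To narrow down where the space is going, these commands may help (a sketch): compare overall pool usage with per-image usage, and check whether journal objects are accumulating (journal data grows until the remote rbd-mirror daemon replays and trims it).

```shell
# Overall cluster and per-pool usage
ceph df detail

# Provisioned vs actually used space per image in the pool
rbd du -p rbd

# Journal details for a single image; the image name here is just an example
rbd journal info --pool rbd --image vm-100-disk-0
```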
According to the Ceph docs (https://docs.ceph.com/docs/master/rados/configuration/network-config-ref/#id1), several public networks can be defined (useful for rbd mirroring when the slave Ceph cluster is located in a separate location and/or monitors need to be created on a different network...
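For reference, a minimal ceph.conf sketch with two public networks, as described in that doc (the addresses are made up):

```ini
[global]
    # Monitors and clients may live on either of these networks
    public network = 10.10.0.0/24, 192.168.10.0/24
    cluster network = 10.20.0.0/24
```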