Hi,
while moving a disk of a running VM on PVE 5.4, the move reproducibly got stuck when going from NFS to RBD. This is NOT the case when moving the disk the other way round.
If I move the disk of a shut-down VM, it works.
create full clone of drive scsi2 (bkp-1901:111/vm-111-disk-1.raw)
drive mirror is...
Hi,
after adding a second NFS storage to the cluster, this storage fails after exactly 30 minutes: the mountpoint /mnt/pve/nfs2 vanishes, but is still listed in the output of mount. This is reproducible. The first NFS storage isn't affected at all. The second NFS server got the same...
Hello Community,
after upgrading one node from 5.1 to 5.3 in a 5-node cluster, I can't do live migration of VMs anymore. Now I'm stuck and in a bad situation, because I want to upgrade the cluster node by node and I can't free up the other nodes...
Here is the log of a migration from...
Hello Community,
after backing up (Proxmox backup function) about 120 VMs in a 4-node cluster via NFS, HA shows a few (~20) VMs in error state. The affected VMs are still running fine, and there was no trouble at all while the backup ran. All VMs are managed by HA.
Here are the notifications...
Hi Community,
creating an RBD image of 1T with an object size of 16K is easy. I did it like this:
rbd create -s 1T --object-size 16K --image-feature layering --image-feature exclusive-lock --image-feature object-map --image-feature fast-diff --image-feature deep-flatten -p Poolname vm-222-disk-4...
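As a side note (my own arithmetic, not from the post): a 1T image at a 16K object size implies a very large number of RADOS objects, which is worth keeping in mind before choosing such a small object size. A minimal sketch of the calculation:

```python
# Objects implied by the rbd create command above (assumed sizes: 1T image, 16K objects).
image_size = 2 ** 40          # 1T
object_size = 16 * 2 ** 10    # 16K, i.e. rbd "order" 14 (2**14 bytes)
print(image_size // object_size)  # 67108864 objects (2**26)
```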
Hi,
there are plenty of posts about clock skew issues on this forum. I'm affected too.
So I've tried different measures to keep 4 nodes with identical hardware permanently in sync, without success.
Even this post...
Hi,
in an HA environment, a mass migration doesn't honor the parallel-jobs setting in the GUI.
This is really dangerous, because a parallel live migration of >40 VMs saturated the cluster network, which ended up in a dead cluster.
Is there a way to avoid this scenario, such as restricting the number of parallel jobs...
Hi,
I'm going to migrate our cluster from HDDs to SSDs and from filestore with SSD journal to bluestore. Not a big deal, with plenty of time...
Unfortunately, pg_num was set to 1024 with 18 OSDs. AFAIK this is not a good value, because if one node with 6 OSDs fails, the cluster will be...
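For reference, the common Ceph rule of thumb (my addition, not from the post) targets roughly 100 PGs per OSD: pg_num ≈ (OSDs × 100) / replica count, rounded to a power of two. A minimal sketch, assuming the usual 3 replicas:

```python
def suggest_pg_num(osds: int, replicas: int = 3, target_pgs_per_osd: int = 100) -> int:
    """Rule-of-thumb pg_num estimate: (osds * target) / replicas,
    rounded to the nearest power of two (hypothetical helper)."""
    raw = osds * target_pgs_per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    # Pick whichever surrounding power of two is closer to the raw value.
    return power if (power - raw) <= (raw - power // 2) else power // 2

print(suggest_pg_num(18))  # 18 OSDs, 3 replicas -> raw 600 -> 512
```

By this rule the 18-OSD cluster would land at 512 rather than 1024, which matches the post's suspicion that 1024 is on the high side.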