Search results

  1. [SOLVED] moving disk got stuck from nfs to rbd

    Hi, while moving a disk of a running VM on PVE 5.4, it reproducibly got stuck when moving from NFS to RBD. This is NOT the case when moving the disk the other way round. If I move the disk of a shut-down VM, it works. create full clone of drive scsi2 (bkp-1901:111/vm-111-disk-1.raw) drive mirror is...
  2. NFS Storage mountpoint vanished

    Hi, after adding a second NFS storage to the cluster, this storage fails after exactly 30 minutes, because the mountpoint /mnt/pve/nfs2 vanished although it is still listed in the output of mount. This is reproducible. The first NFS storage isn't affected at all. The second NFS server got the same...
  3. [SOLVED] live migration not working after upgrade 5.1->5.3

    Hello Community, after upgrading one node from 5.1 to 5.3 in a 5-node cluster, I can't live-migrate VMs. Now I'm stuck in a bad situation, because I want to do a node-by-node upgrade of the cluster and I can't get the other nodes free... Here is the migration log of the migration from...
  4. HA-Status in error state after backup

    Hello Community, after backing up (Proxmox backup function) about 120 VMs in a 4-node cluster via NFS, a few (~20) VMs show an HA error state. The affected VMs are still running fine and there was no trouble at all while the backup was running. All VMs are managed by HA. Here are the notifications... (a recovery sketch follows after this list)
  5. [SOLVED] ceph trouble with non-standard object size

    Hi Community, creating an RBD image of 1T with an object size of 16K is easy. I did it like this: rbd create -s 1T --object-size 16K --image-feature layering --image-feature exclusive-lock --image-feature object-map --image-feature fast-diff --image-feature deep-flatten -p Poolname vm-222-disk-4... (the complete command is sketched after this list)
  6. [SOLVED] ceph clock skew issue - no way out?

    Hi, there are plenty of posts about clock skew issues in this forum. I'm affected too. I've tried various measures to keep 4 nodes with identical hardware permanently in sync, with no success. Even this post... (a time-sync sketch follows after this list)
  7. [SOLVED] HA mass migration

    Hi, in an HA environment, a mass migration doesn't honor the parallel-jobs setting in the GUI. This is really dangerous, because a parallel live migration of >40 VMs saturated the cluster network, which ended up in a dead cluster. Is there a way to avoid this scenario, like a restriction of parallel jobs...
  8. Ceph PGs <75 or >200 per OSD is just a warning?

    Hi, I'm going to migrate our cluster from HDDs to SSDs and from filestore with SSD journal to bluestore. Not a big deal, with plenty of time... Unfortunately, pg_num was set to 1024 with 18 OSDs. AFAIK this is not a good value, because if one node with 6 OSDs fails, the cluster will be... (see the worked PG numbers after this list)
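
For thread 4: HA resources that went into the error state can usually be recovered by disabling and re-enabling them. A minimal sketch, assuming the stock ha-manager CLI; vm:100 is a placeholder for an affected resource:

    # placeholder resource ID; repeat for each affected VM
    ha-manager set vm:100 --state disabled
    ha-manager set vm:100 --state started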
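
For thread 5: the quoted command, completed into a runnable sketch. The pool and image names come from the (truncated) post and should be treated as placeholders; the --image-feature flag may be repeated as shown:

    rbd create -s 1T --object-size 16K \
        --image-feature layering \
        --image-feature exclusive-lock \
        --image-feature object-map \
        --image-feature fast-diff \
        --image-feature deep-flatten \
        -p Poolname vm-222-disk-4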
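
For thread 6: one measure that often helps against persistent clock skew is syncing all nodes against the same nearby source with short polling intervals. A sketch, assuming chrony is used and that one node, pve1 (a placeholder), serves time to the other three:

    # /etc/chrony/chrony.conf on the follower nodes
    server pve1 iburst minpoll 4 maxpoll 6

    # on pve1: allow the cluster subnet (example subnet) to query it
    allow 10.0.0.0/24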
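
For thread 8: the numbers can be checked with simple arithmetic, assuming the default replica count of 3 and the 75-200 PGs-per-OSD guideline from the title:

    1024 PGs * 3 replicas / 18 OSDs ≈ 171 PGs per OSD  (inside the 75-200 band)
    1024 PGs * 3 replicas / 12 OSDs = 256 PGs per OSD  (one node with 6 OSDs down: above 200)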
