Search results

  1. PVE crashes running VMs because it is not checking free host memory before starting VMs

    I'm coming from Hyper-V and am testing Proxmox for a production environment. I have found a big problem with how PVE manages host/guest memory, which leads to VMs crashing. Tested scenario 1: single host with 128 GB RAM, hosting two Win10 VMs with 96 GB memory each in stopped state. Starting VM1 -...
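
    Not a fix quoted from the thread, but a minimal sketch of the manual guard the poster wants PVE to perform: compare the host's MemAvailable against the VM's configured memory before starting it. The VMID 100 and the 96 GiB figure are illustrative placeholders taken from the scenario above; `qm start` is the standard PVE CLI command.

        #!/bin/bash
        # Hedged sketch: only start the VM if the host reports enough
        # available memory. VMID and size are illustrative placeholders.
        VMID=100
        REQUIRED_KB=$((96 * 1024 * 1024))   # 96 GiB expressed in KiB

        # MemAvailable: kernel estimate of memory usable without swapping
        AVAIL_KB=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)

        if [ "$AVAIL_KB" -ge "$REQUIRED_KB" ]; then
            qm start "$VMID"
        else
            echo "Not starting VM $VMID: ${AVAIL_KB} kB free, ${REQUIRED_KB} kB needed" >&2
            exit 1
        fi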
  2. [SOLVED] ceph - fail to create OSDs because the requested extent is too large

    3-node cluster with Ceph (v15.2.8 and v15.2.9 tested) - 3x Optane, 3x Micron 9300. When creating 2 OSDs per Micron using "lvm batch" I get an error: stderr: Volume group "ceph-38ac9489-b865-40cd-bae3-8f80cdaa556a" has insufficient free space (381544 extents): 381545 required. Works with...
  3. [SOLVED] ceph - fail to create multiple OSDs per drive because the requested extent is too large

    3-node cluster with Ceph v15.2.8, 3x Optane, 3x Micron 9300. When I create 2 or more OSDs per Micron using "lvm batch" I get an error: stderr: Volume group "ceph-38ac9489-b865-40cd-bae3-8f80cdaa556a" has insufficient free space (381544 extents): 381545 required. No issues on Optane. Error depends...
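
    For context on the command pattern both ceph threads above describe (the exact invocation is not quoted in the snippets): `ceph-volume lvm batch` accepts `--osds-per-device` to split one drive into several OSDs, and its `--report` flag previews the resulting layout without touching the disk, which makes it easy to spot a request that would exceed the volume group's free extents. The device path below is a placeholder:

        # Dry run: show how ceph-volume would carve the drive, changing nothing
        ceph-volume lvm batch --report --osds-per-device 2 /dev/nvme1n1

        # The same command without --report actually creates the two OSDs
        ceph-volume lvm batch --osds-per-device 2 /dev/nvme1n1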
  4. [SOLVED] VM live migration not working despite cluster being in good condition

    I'm testing Proxmox and was surprised by the following error: running a 3-node hyperconverged cluster with Ceph (pve-manager/6.3-4/0a38c56f, running kernel 5.4.101-1-pve; last updated on 04.03.2021, no-subscription repo). I can clone VMs to the other nodes. I can offline-migrate VMs to the other...
  5. [SOLVED] VM only working on the node it was originally installed on - VMs don't work on other nodes

    I'm testing Proxmox and was surprised by the following error: running a 3-node hyperconverged cluster with Ceph (pve-manager/6.3-4/0a38c56f, running kernel 5.4.101-1-pve; last updated on 04.03.2021, no-subscription repo). I can clone VMs to the other nodes. I can offline-migrate VMs to the other nodes...
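
    As a hedged illustration of the operation the last two threads describe (the command is not quoted in the snippets): migration can also be triggered from the PVE CLI with `qm migrate`, which tends to print a more detailed error than the GUI. The VMID and target node name below are placeholders:

        # Live-migrate a running VM; --online keeps the guest running
        qm migrate 100 pve-node2 --online

        # Offline migration of a stopped VM (reported as working in the threads)
        qm migrate 100 pve-node2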