Search results

  1.

    [SOLVED] ceph - fail to create multiple OSD per drive because the requested extent is too large

    I'm happy to report that installing PVE Ceph version 15.2.10 solved the issue.
  2.

    PVE crashes running VMs because it is not checking free host memory before starting VMs

    I was hoping for this problem's severity to be acknowledged. Your answers aren't exactly diplomatic, considering I am a potential paying customer. And you apparently expect this potential new customer to be familiar with the Proxmox source code and supply a patch. Very strange... since you don't...
  3.

    PVE crashes running VMs because it is not checking free host memory before starting VMs

    This is a test scenario for testing worst-case edge behaviour. I thought I made that clear. The use case is you have dozens or even hundreds of VMs running in an HA setup. In case of a node failure you want migration to work without putting running VMs in danger. (guest databases are thankful...
  4.

    PVE crashes running VMs because it is not checking free host memory before starting VMs

    I'm coming from Hyper-V and am testing Proxmox for a production environment. I have found a big problem with how PVE manages host/guest memory, which leads to VMs crashing. Tested scenario 1: single host with 128 GB RAM, with 2 Win10 VMs with 96 GB memory each in stopped state. Starting VM1 -...
  5.

    [SOLVED] ceph - fail to create multiple OSD per drive because the requested extent is too large

    Thanks for the tip with the namespaces. Are the Ceph packages in the Proxmox Ceph repo different from the "official" Ceph packages?
  6.

    [SOLVED] ceph - fail to create OSDs because the requested extent is too large

    3-node cluster with Ceph - 3x Optane, 3x Micron 9300. When creating 2 OSDs per Micron using "lvm batch" I get an error (Ceph v15.2.8 and v15.2.9 tested): stderr: Volume group "ceph-38ac9489-b865-40cd-bae3-8f80cdaa556a" has insufficient free space (381544 extents): 381545 required. Works with...
  7.

    [SOLVED] ceph - fail to create multiple OSD per drive because the requested extent is too large

    3-node cluster with Ceph v15.2.8 - 3x Optane, 3x Micron 9300. When I create 2 or more OSDs per Micron using "lvm batch" I get an error: stderr: Volume group "ceph-38ac9489-b865-40cd-bae3-8f80cdaa556a" has insufficient free space (381544 extents): 381545 required. No issues on Optane. Error depends...
  8.

    [SOLVED] VM live migration not working despite cluster being in good condition

    Just solved the problem: MTU size was set to 9000 on the migration network interface instead of the default 1500. What kind of program is used to copy the memory state over the SSH tunnel? dd? Is there a recommended MTU size for the migration traffic, given an issue like the one I...
  9.

    [SOLVED] VM live migration not working despite cluster being in good condition

    Just tested it again. Didn't work this time either. I can go into Datacenter/node1/shell and enter: ssh 172.16.11.62 -l root, and it connects to node2 without a problem. I can go into Datacenter/node2/shell and enter: ssh 172.16.11.61 -l root, and it connects to node1 without a problem...
  10.

    [SOLVED] VM live migration not working despite cluster being in good condition

    Thanks, Dominik. Yes, I know that about shared storage, but the config file is copied over SSH, isn't it? Firewall on all nodes is off. No external firewall should be able to interfere, because the traffic is not leaving the network, so no gateway is used. Since I can connect from my PC to each cluster...
  11.

    [SOLVED] VM live migration not working despite cluster being in good condition

    Thanks for your reply. Will look into it, but I don't think so. I installed Proxmox fresh and haven't removed or added another node afterwards. At the moment I don't understand why cloning and offline migration work (are those not done over SSH too?) but live migration isn't.
  12.

    Freeze issue with latest Proxmox 6.3-4 and AMD CPU

    Just started testing Proxmox and cannot recreate the error myself, since I'm haunted by even more serious Proxmox issues and have to try to resolve them first (see my threads if you want to help). But could this be connected to a power-state issue? Have you tried deactivating global C-states in...
  13.

    [SOLVED] VM live migration not working despite cluster being in good condition

    I'm testing Proxmox and was surprised by the following error: Running a 3-node hyperconverged cluster with Ceph (pve-manager/6.3-4/0a38c56f, running kernel 5.4.101-1-pve; last updated on 04.03.2021, nosub-repo). I can clone VMs to the other nodes. I can offline-migrate VMs to the other...
  14.

    [SOLVED] VM only working on the node it was originally installed on - VMs don't work on other nodes

    I'm testing Proxmox and was surprised by the following error: Running a 3-node hyperconverged cluster with Ceph (pve-manager/6.3-4/0a38c56f, running kernel 5.4.101-1-pve; last updated on 04.03.2021, nosub-repo). I can clone VMs to the other nodes. I can offline-migrate VMs to the other nodes...
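The MTU mismatch reported solved in item 8 (migration interface at 9000 while the path only carried 1500) can be sanity-checked from a node shell. A minimal sketch; the interface name `ens19` is a placeholder not taken from the threads, and the peer IP reuses the node address quoted in item 9:

```shell
# Largest ICMP payload that fits an unfragmented 1500-byte IPv4 frame:
MTU=1500
PAYLOAD=$((MTU - 20 - 8))          # minus IPv4 header (20) and ICMP header (8)
echo "probe payload: $PAYLOAD bytes"

# To run on an actual node (placeholder interface name):
# ip link show ens19 | grep -o 'mtu [0-9]*'   # check the configured MTU
# ping -M do -s "$PAYLOAD" 172.16.11.62       # fails if the path MTU is smaller
# ip link set dev ens19 mtu 1500              # reset from 9000 to the default
```

If the do-not-fragment ping fails at a 1472-byte payload, the path does not carry full 1500-byte frames end to end, matching the symptom described in the thread.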