Search results

  1.

    Proxmox 6.2-10 CEPH Cluster reboots if one node is shutdown or rebooting

    I found the issue: pve-03 had an MTU of 1500 while all the others are set to 9000. After setting pve-03's MTU to 9000 as well, everything works like before.
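
    An MTU mismatch like this can be verified from each node with a don't-fragment ping; a minimal sketch, assuming the Ceph network runs over bond1 and pve-03 answers at 10.10.10.3 (both names inferred from the other posts, not confirmed here):

    ```
    # Show the current MTU of the bond carrying Ceph traffic
    ip link show bond1 | grep -o 'mtu [0-9]*'

    # Verify that jumbo frames actually pass end to end:
    # max ICMP payload = 9000 (MTU) - 20 (IP header) - 8 (ICMP header) = 8972 bytes
    ping -M do -s 8972 -c 3 10.10.10.3
    ```

    If the ping fails with "message too long" on any hop, that link (or the switch port) is still limited to MTU 1500.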
  2.

    Proxmox 6.2-10 CEPH Cluster reboots if one node is shutdown or rebooting

    No, first restart pve-03 and then wait about 30 seconds; pve-01 and pve-02 then restart automatically... It is a little bit annoying and drives me round the bend... because this is a production system... If you only restart pve-01, everything is fine and all other nodes stay online. Same happens...
  3.

    Proxmox 6.2-10 CEPH Cluster reboots if one node is shutdown or rebooting

    The nodes are connected to a 10GbE switch with configured trunks, so the cabling shouldn't be the problem. Ceph is working without issues; only the monitors seem to trigger the node reboots.
  4.

    Proxmox 6.2-10 CEPH Cluster reboots if one node is shutdown or rebooting

    Mmmh, I can reproduce it: rebooting pve-01 or pve-02 is not a problem, everything is just fine, but rebooting pve-03 causes pve-01 and pve-02 to reboot.
  5.

    Proxmox 6.2-10 CEPH Cluster reboots if one node is shutdown or rebooting

    Changing from iface bond1 inet static address 10.10.10.1/24 to iface bond1 inet static address 10.10.10.1 netmask 255.255.255.0 has no effect; all nodes still reboot, same as before. Any other suggestions?
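
    For reference, on Debian Buster's ifupdown the two notations are equivalent, which would explain why switching between them changes nothing; a sketch of the relevant /etc/network/interfaces stanza, using the interface and address from the post above:

    ```
    # CIDR notation
    iface bond1 inet static
            address 10.10.10.1/24

    # ...is equivalent to the long form:
    iface bond1 inet static
            address 10.10.10.1
            netmask 255.255.255.0
    ```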
  6.

    Proxmox 6.2-10 CEPH Cluster reboots if one node is shutdown or rebooting

    Mhhh, really? It was working for more than a year until the upgrade. I do not think that is really the issue, because Proxmox uses Debian Buster under the hood.
  7.

    Proxmox 6.2-10 CEPH Cluster reboots if one node is shutdown or rebooting

    Here is the network config of pve-01 (the same for pve-02/pve-03): # network interface settings; autogenerated # Please do NOT modify this file directly, unless you know what # you're doing. # # If you want to manage parts of the network configuration manually, # please utilize the 'source' or...
  8.

    Proxmox 6.2-10 CEPH Cluster reboots if one node is shutdown or rebooting

    Hello, after updating the Proxmox nodes to the latest Ceph version a few days ago, something strange happens. If a node is rebooted, all HA cluster nodes reboot. In the log I saw something like this on a node that was not rebooted: Jul 23 13:36:28 hyperx-01 ceph-mon[2793]: 2020-07-23...
  9.

    ERROR: online migrate failure - VM xx qmp command 'migrate' failed - Nested VMX virtualization does not support live migration yet

    Hello, thanks for the hint, but the following line causes an error: cpu: host,flags=-vmx Error: flags: value does not match the regex pattern vm xxx - unable to parse value of 'cpu' - format error Also, the UI does not show all the CPU types that the command qemu-system-x86_64 -cpu ? lists. So I ended...
  10.

    ERROR: online migrate failure - VM xx qmp command 'migrate' failed - Nested VMX virtualization does not support live migration yet

    Thanks, but I just want to pass the Intel Xeon Gold 5138 through to the VMs. Is there a way to disable vmx for the VMs, such as an additional parameter like cpu=host -vmx or something else?
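
    One workaround sometimes suggested (an assumption here, not confirmed in this thread) is to bypass the whitelist of the cpu: option's flags field and hand the CPU model to QEMU directly via the args: line in the VM's configuration file:

    ```
    # /etc/pve/qemu-server/<vmid>.conf
    # Pass the CPU model straight to QEMU with the vmx flag masked.
    # Caution: args: is appended to the generated command line, so remove any
    # cpu: line first to avoid two conflicting -cpu arguments; test on a
    # non-production VM before relying on it.
    args: -cpu host,-vmx
    ```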
  11.

    Proxmox 6.x: Nested VMX virtualization does not support live migration yet

    Hello, on a 3-node Ceph cluster with Proxmox 6, I get the following error while live-migrating a VM with CPU type host and nested virtualization turned on for the physical nodes. On Proxmox 5.4.x we had no problems. All physical servers are the same. On Proxmox 6.x we get the following error: start migrate...
  12.

    [USB-Dongle] DATEV-SmartToken/SIM an VMs durchreichen

    Hello, I use a USB-to-IP converter and it works wonderfully. Silex devices are trouble-free for this kind of integration and ideally suited for the DATEV dongles. I use the SILEX 510 myself for the DATEV USB dongles.
  13.

    ERROR: online migrate failure - VM xx qmp command 'migrate' failed - Nested VMX virtualization does not support live migration yet

    Hello, we migrated a 3-node Ceph Proxmox cluster from 5.4.1 to 6.0. Everything works fine, but live migration no longer works with nested virtualization activated and VMs with CPU type host. The three nodes are identical physical machines: NVMe drives, CPUs, RAM and so on. When I migrate a vm...
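
    Whether nested virtualization is actually enabled on the hosts can be checked and changed through the KVM module option; a minimal sketch, assuming Intel hosts as the Xeon Gold mentioned above suggests:

    ```
    # /etc/modprobe.d/kvm-intel.conf - persistently disable nested virtualization
    # (reload the kvm_intel module or reboot afterwards; the current state is
    # visible in /sys/module/kvm_intel/parameters/nested)
    options kvm-intel nested=0
    ```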
  14.

    pve-firewall

    No, I think it is on your Proxmox host. Go to Datacenter in the left sidebar menu, select Firewall -> Options, then check that the following are set: Input Policy -> ACCEPT, Output Policy -> ACCEPT
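
    The same settings live in the datacenter firewall file, so they can also be inspected on the CLI; a sketch of the relevant section, per the pve-firewall configuration format:

    ```
    # /etc/pve/firewall/cluster.fw
    [OPTIONS]
    enable: 1
    policy_in: ACCEPT
    policy_out: ACCEPT
    ```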
  15.

    pve-firewall

    Did you set the Input Policy to ACCEPT in the Datacenter's firewall?