Recent content by Volker Lieder

  1. Volker Lieder

    Performance Issues with CEPH osds

Hi, sorry for the confusion. The main issue is that VM n1 freezes in the shell, has delays during database write operations, and has high ICMP latencies to VMs in the same subnet. Other VMs on the same Ceph cluster and node seem fine. While investigating this issue there was the comparison of two...
  2. Volker Lieder

    Performance Issues with CEPH osds

Here is some more information from debugging the setup: the cluster consists of 7 PVE nodes, each with 1 TB RAM. The VM with problems (ICMP delays, shell freezes several times per hour) has the following settings: agent: 1 balloon: 0 boot: order=scsi0;ide2;net0 cores: 16 cpu: host ide2...
  3. Volker Lieder

    Performance Issues with CEPH osds

Hi, we have a dedicated network for Ceph and for the VMs (2 x 10Gbit with LACP per segment). ICMP between the PVE nodes is fine; ICMP and performance of a VM are only bad on one PVE node, even after the VM migrates to another PVE node. Most other pings are fine. The VM is a big database server with 700GB...
  4. Volker Lieder

    Performance Issues with CEPH osds

Hi, we have two clusters, each with an NVMe Ceph. On one cluster, when we run a performance test like `ceph daemon osd.X bench`, we receive the following results: Good cluster: { "bytes_written": 1073741824, "blocksize": 4194304, "elapsed_sec": 0.34241843399999999...
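The bench output quoted above can be turned into a throughput figure by dividing `bytes_written` by `elapsed_sec`. A minimal sketch using the numbers from the "good" cluster (only those two fields from the quoted JSON are used; the conversion to MiB/s is ours):

```shell
# Throughput from a `ceph daemon osd.X bench` result:
# bytes_written / elapsed_sec, converted to MiB/s.
# Values taken from the "good" cluster output quoted above.
bytes_written=1073741824
elapsed_sec=0.342418434
awk -v b="$bytes_written" -v t="$elapsed_sec" \
    'BEGIN { printf "%.0f MiB/s\n", b / t / 1048576 }'
```

For the good cluster this works out to roughly 2990 MiB/s (~3.1 GB/s), which is the baseline the second cluster can be compared against.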
  5. Volker Lieder

    no vlan transport

Hi, thank you for your reply. We had already tested these options without success. We found the main reason yesterday: the master unit of our Juniper Virtual Chassis seems to have had an issue with some ports. We could "fix" this by switching the routing engine to the 2nd member and back again...
  6. Volker Lieder

    no vlan transport

Hi, we have installed a fresh Proxmox 8.1 server with 4 x 10Gbit NICs and 2 x 1Gbit NICs: 2 x 10Gbit as a bond for storage, 2 x 10Gbit as a bond for the VM IPs, and 2 x 1Gbit as a bond for Proxmox management. cat /etc/network/interfaces # network interface settings; autogenerated # Please do NOT modify this file...
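For reference, an LACP bond of two 10Gbit ports with a VLAN-aware bridge on top typically looks like the `/etc/network/interfaces` sketch below. The interface names (`enp65s0f0`, `enp65s0f1`) and VLAN range are illustrative assumptions, not taken from the actual config in the post:

```
auto bond0
iface bond0 inet manual
        bond-slaves enp65s0f0 enp65s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

For 802.3ad to come up, the switch ports must be configured as an LACP LAG with matching settings; a mismatch there is a common cause of VLANs not being transported.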
  7. Volker Lieder

    Problem with bond after Upgrade to 7.4

Hi, after upgrading our 7.2 node to 7.4 with kernel pve-kernel-5.15.102-1-pve, the node is no longer able to boot. It hangs while initializing the network on bond0; the messages on screen show problems with the bonding links. The node's management IP is pingable, but no access via SSH or the GUI is...
  8. Volker Lieder

    Install 2nd drive for OS

Is it possible to create ZFS on the 2nd new drive, copy the content to ZFS, and extend this to RAID1 in a further step?
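In outline, yes: create a single-disk pool on the new drive, copy the data over, and later attach the old disk to turn the vdev into a mirror. A rough sketch, assuming hypothetical device names `/dev/sdb` (new disk) and `/dev/sda` (old OS disk); making the pool bootable additionally requires partitioning and boot-loader/ESP steps that are not shown:

```
# 1) Create a single-disk pool on the new drive (hypothetical /dev/sdb).
zpool create rpool /dev/sdb

# 2) Copy the OS data onto the pool (rsync shown as one option).
rsync -aAXH --one-file-system / /rpool/

# 3) Later, attach the old disk to the same vdev: `zpool attach`
#    turns the single disk into a two-way mirror (RAID1).
zpool attach rpool /dev/sdb /dev/sda
```

Note that the third step must be `zpool attach` (mirror an existing device), not `zpool add`, which would stripe the disks instead of mirroring them.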
  9. Volker Lieder

    Install 2nd drive for OS

Hi, I have one question. We have a Proxmox cluster installed on servers with one disk for the OS (we didn't notice that one disk was missing on delivery). Now we have installed a second disk in the Proxmox PVE host and want to add it as a RAID1. Current situation: Device Start End...
  10. Volker Lieder

    Hyper-converged PVE and CEPH, Single PVE cluster with multiple CEPH clusters?

Hi Robert, it's possible, even if you already have the other two Ceph clusters configured. If you need some assistance with this, don't hesitate to contact me with more details. Best regards, Volker
  11. Volker Lieder

    PVE Node "death"

We could shut down the VMs, migrate the configs, and restart them. A power cycle of the node also works; we will investigate the issue and observe the behaviour over the next few days. Best regards, Volker
  12. Volker Lieder

    PVE Node "death"

Yes, I know about moving the configs. The hope was that there is a way to do a live migration in such a state. The other plan is to shut down the instances and restart them on another node after moving the configs.
  13. Volker Lieder

    PVE Node "death"

The physical console is also "dead": no further output, and no messages in the IPMI log.