Search results

  1. Backup failed with: unable to connect to VM 105 qmp socket; VM 105 is broken now

    Hello Forum, last night the backup of our mail server failed, and even after an unlock and a reboot the VM is now in status "running", but no console connection can be established (CPU is 0 / disk IO is 0) and the VM is non-functional. It is an older...
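    A minimal first diagnostic for this kind of symptom, assuming the VMID 105 from the title; the socket path and the qm options below are standard QEMU-server locations and flags, not details taken from the post:

        # check whether the QMP socket for VM 105 still exists and whether qm can query it
        ls -l /var/run/qemu-server/105.qmp
        qm status 105 --verbose

        # if the guest process is hung, clear the stale backup lock and force a stop/start
        qm unlock 105
        qm stop 105 --skiplock
        qm start 105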
  2. update ceph-common package on debian client reasonable?

    Hello Forum. I would like to access a CephFS on our hyperconverged PVE cluster running the latest Ceph version (19.2.0 squid (stable)) with a Debian client using the kernel driver. The latest Debian 12.8 ceph-common package is version 16.2.15 (618f440892089921c3e944a991122ddc44e60516) pacific...
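    With the kernel driver the filesystem access is done by the kernel module, while ceph-common mainly provides the mount.ceph helper; a quick sketch of the classic mount syntax may help, with the monitor addresses, client name and secret file below being placeholders, not values from the post:

        # ad-hoc mount via the kernel client
        mount -t ceph 10.0.0.1,10.0.0.2,10.0.0.3:/ /mnt/cephfs \
              -o name=cephfs-client,secretfile=/etc/ceph/cephfs-client.secret

        # or persistently via /etc/fstab
        10.0.0.1,10.0.0.2,10.0.0.3:/  /mnt/cephfs  ceph  name=cephfs-client,secretfile=/etc/ceph/cephfs-client.secret,_netdev  0  0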
  3. appropriate configuration for a dedicated migration network

    Hi Forum, we want to migrate our running pve-cluster to newer hardware and thereby improve our cluster network setup. The current status of the new PVE is: all 3 nodes have PVE 8.3.0 installed and most networks are configured...
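    A dedicated migration network is usually declared cluster-wide in /etc/pve/datacenter.cfg; a minimal sketch, assuming a 10.10.10.0/24 subnet that is only an example, not the poster's addressing:

        # /etc/pve/datacenter.cfg
        migration: secure,network=10.10.10.0/24

    Type secure tunnels migration traffic through SSH; insecure skips the encryption overhead and is an option on a trusted, isolated link.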
  4. How to install HP-Linux Management Component Pack (MCP) on Proxmox 7.4

    Hello Forum, due to a recent memory failure I wanted to install the HP-MCP on an HP DL380 G7 in order to use hp-health for debugging. This failed because the install procedure on the MCP project page is outdated or rather deprecated (https://downloads.linux.hpe.com/SDR/project/mcp/). The reasons...
  5. Combine a hyper-converged proxmox cluster with a separate ceph cluster - anybody done that?

    Hi Forum, we run a hyper-converged 3 node proxmox cluster with a meshed ceph storage. Apart from using a switch instead of our meshed setup, we would like to add a connected ceph cluster to expand storage capacity. Has anybody done that? Is such a setup feasible if we want proxmox-vms to use...
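    Proxmox can consume a second, external Ceph cluster as an ordinary RBD storage next to the hyper-converged one; a minimal sketch of such a storage definition, with the storage ID, monitor addresses and pool purely as placeholders:

        # /etc/pve/storage.cfg -- external cluster added alongside the local one
        rbd: external-ceph
                monhost 192.168.50.1 192.168.50.2 192.168.50.3
                pool vm-pool
                content images,rootdir
                username admin

    The matching keyring would then go to /etc/pve/priv/ceph/external-ceph.keyring.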
  6. [SOLVED] ceph bonded network one port failed cannot access vms via vnc

    Hello Forum, we are running a hyperconverged 3 node proxmox ceph cluster (PVE 7.4.16, ceph 16.3). We use a meshed 10GBit bonded network for ceph traffic with 3 x dual port Intel NICs. It seems that one port on node 3 has failed, because node 1 cannot ping node 3 and vice versa, but node 2 can ping node 3...
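    To narrow down which leg of the mesh is dead, the bond and its physical ports can be checked on each node; bond0 and ens2 are only example names, not taken from the post:

        # per-slave state of the bond (MII status, link failure count)
        cat /proc/net/bonding/bond0

        # carrier state and error counters of an individual port
        ip -s link show ens2
        ethtool ens2 | grep -i 'link detected'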
  7. ceph cluster-network outage

    Hello Forum, lately we had an outage of one network port in our meshed ceph cluster network, presumably due to a kernel problem (please see the syslog below), which finally led to unresponsive OSDs. The cluster network is implemented as a dedicated 10GBit bond in...
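    If only the dedicated cluster network is affected, it can help to confirm which OSDs stopped responding and which subnets Ceph is actually bound to; the subnets below are placeholders, not the poster's values:

        # which OSDs are reported down and why
        ceph -s
        ceph health detail

        # public/cluster split carried by the dedicated bond
        # (global section of /etc/pve/ceph.conf)
        #   public_network  = 192.168.10.0/24
        #   cluster_network = 192.168.20.0/24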
  8. pve update6to7 issue: changing MACs on bond and slaves causes ceph-cluster to degrade

    Hello Forum, below I describe severe degradation issues of our 3-node hyperconverged meshed ceph-cluster (based on 15.2.13), which happened after the update6to7, and how we were able to resolve them. After updating and rebooting, the slave NICs (ens2 and ens3) of our meshed bond0 showed newly assigned...
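    A common remedy when ifupdown2 on PVE 7 starts deriving new MAC addresses for a bond is to pin the address explicitly in /etc/network/interfaces; a minimal sketch reusing the bond0/ens2/ens3 names from the post, with the bond mode, address and MAC being assumptions rather than the poster's actual values:

        auto bond0
        iface bond0 inet static
                bond-slaves ens2 ens3
                bond-mode broadcast
                bond-miimon 100
                # pin the MAC so it no longer changes across reboots/upgrades
                hwaddress aa:bb:cc:dd:ee:ff
                address 10.15.15.1/24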
  9. pve6to7 update repository confusion caused ceph to go down on updated node

    Hi Forum, we run a hyperconverged 3 node pve-cluster, still in eval mode (latest 6.4 pve and ceph 15.2.13). I started a pve6to7 update on the first node of a healthy ceph-cluster with a faulty repository configuration, which I can't exactly recall anymore. In addition I set the noout flag for...
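    The noout flag mentioned at the end is the standard way to keep Ceph from rebalancing while one node is being upgraded; for completeness:

        # before taking the node down for the upgrade
        ceph osd set noout

        # once the node and its OSDs are back up and healthy
        ceph osd unset noout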
  10. various ceph-octopus issues

    Hello Forum! I run a 3-node hyper-converged meshed 10GbE ceph cluster, currently updated to the latest version, on 3 identical HP servers as a test environment (pve 6.3.3 and ceph octopus 15.28, no HA), with 3 x 16 SAS HDs connected via HBA, 3x pve-os + 45 osds, rbd only, activated...