Latest activity

  • S
    Indeed, thank you so much @Lukas <3
  • fiona
    I answered in the bug report.
  • P
    Hi everyone, I've recently started facing issues with a large MySQL test database (20TB VM disk) and MySQL replication. Fuller details on the DB side of the issue can be found in another thread, but they should be mostly irrelevant to the questions...
  • aaron
    aaron replied to the thread /run/pve ??.
    A regular Proxmox VE cluster needs the majority of the nodes to be online to be fully functional. With 2/3 nodes down, that is not the case.
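    Illustrative sketch: on any node, the quorum state (expected vs. actual votes) can be checked with the standard tools; output fields vary by version.
      # Show cluster membership and quorum information
      pvecm status
      # Corosync's own quorum view
      corosync-quorumtool -s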
  • news
    news reacted to Eduardo Taboada's post in the thread proxmox configuration with Like.
    If you use Open vSwitch with MLAG then you can use these settings:
      auto bond0
      iface bond0 inet manual
          ovs_bonds eno1 eno2
          ovs_type OVSBond
          ovs_bridge vmbr0
          ovs_options lacp=active bond_mode=balance-tcp
      auto vmbr0
      iface...
  • D
    DocMAX replied to the thread /run/pve ??.
    Also, having PVE nodes offline shouldn't be an "edge case" at all, since you have a WOL option built in
  • A
    Question 2 solved now as well: added the Proxmox pve-root-ca.pem to the CheckMK Trusted Anchor Storage (CheckMK Global Settings -> Trusted certificate authorities for SSL -> copy the .pem cert there). Then re-schedule the Check_MK Agent inventory service as...
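    A minimal sketch of obtaining the certificate, assuming a default PVE install where the cluster CA lives at /etc/pve/pve-root-ca.pem:
      # Print the PVE root CA so it can be pasted into CheckMK's
      # trusted certificate authorities setting
      cat /etc/pve/pve-root-ca.pem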
  • D
    DocMAX replied to the thread /run/pve ??.
    I create /run/pve as root and the CT startup works, but if I reboot now, the /run/pve dir will disappear; I can guarantee that. And yes, it's an edge case, but a good one to have: a single image for X number of hosts. Maybe someone can come up with...
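    A possible workaround sketch: /run is a tmpfs, so anything created there is lost on reboot; a systemd-tmpfiles rule (hypothetical file name) can recreate the directory at boot.
      # /etc/tmpfiles.d/pve-run.conf (hypothetical file name)
      # Recreate /run/pve as root with mode 0755 on every boot
      d /run/pve 0755 root root -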
  • aaron
    aaron replied to the thread /run/pve ??.
    Well, that is a setup that is not officially supported, so there might be some edge cases, and we don't have a lot of experience with it. Regarding the /run/pve directory: is there a chance that the permissions are not as expected on some level...
  • D
    DocMAX replied to the thread /run/pve ??.
    My use case is to share a VM with my other PCs. This way I don't have to update 3 OSes, just one. When the VM is loaded on the other host, it uses the disk image from the "main" node. Well, that's another chapter, but I never had issues with that. The...
  • R
    Two full backup cycles done. It seems that 6.17.4-2 resolved the issue, and that fleecing disks prevented VMs from becoming unresponsive. Thanks to everyone reporting and resolving this issue. I will be a bit more conservative with new updates...
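    For reference, a minimal sketch of enabling fleecing, assuming PVE 8.2+ where vzdump gained the option; VM ID 100 and storage ID local-lvm are placeholders:
      # Back up VM 100 using a fleecing disk on the given storage
      vzdump 100 --fleecing enabled=1,storage=local-lvm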
  • E
    Eduardo Taboada replied to the thread /run/pve ??.
    In my opinion, it makes no sense to have a configured cluster with most of the nodes down.
  • CarstenMartens
    This also works with the Proxmox Datacenter Manager. I updated the Gist file.
  • D
    DocMAX replied to the thread /run/pve ??.
    This is intended. I have 3 PVE hosts and 2 of them are down most of the time. I updated the quorum value accordingly. But does this have something to do with the missing /run/pve dir?
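    For reference, a sketch of making a lone node quorate on purpose (a temporary measure, not a supported long-term setup):
      # Tell corosync to expect only 1 vote so this node regains quorum
      pvecm expected 1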
  • aaron
    aaron replied to the thread /run/pve ??.
    The problem is that this node is part of a cluster but cannot connect to the remaining cluster on either of the two networks. The result is that the service/protocol for the cluster communication is not working, and as a result of this, the...
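    A diagnostic sketch for this situation; the exact output differs between corosync versions:
      # Show the status of each configured corosync link
      corosync-cfgtool -s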
  • D
    DocMAX replied to the thread /run/pve ??.
    OK, thanks. So the bottom line is that /run/pve comes from some "service". I will check it.
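    A minimal sketch for that check; pve-cluster runs pmxcfs, which is the usual first suspect when cluster-dependent paths are missing (which unit actually creates /run/pve is not stated in the thread):
      # State and recent boot logs of the cluster filesystem service
      systemctl status pve-cluster
      journalctl -u pve-cluster -b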
  • L
    Hello, I have a 6-node Proxmox VE cluster (version 8.4.14) connected to two Dell ME4024 storage arrays. Each array has two controllers and is connected to all nodes. Storage configuration in Proxmox:
      SAS1A – Array 1, controller A
      SAS1B – Array...
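    A minimal sketch for inspecting such a dual-controller setup, assuming the multipath-tools package is installed:
      # List multipath devices and the state of each path per controller
      multipath -ll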
  • C
    cklahn replied to the thread Terminalserver unter PVE langsam.
    PVE itself runs on an HPE boot device. That is a plug-in card carrying two NVMes that are mirrored 1:1. The file system is ext4. The storage consists of SSDs set up as RAID10 under ZFS. Regarding the upgrade to 9.x...
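    For illustration, RAID10 in ZFS is a stripe of mirrors; a creation sketch with placeholder pool and disk names:
      # Stripe across two mirrored pairs ("tank" and disk names are placeholders)
      zpool create tank mirror sda sdb mirror sdc sdd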
  • E
    Eduardo Taboada replied to the thread proxmox configuration.
    If you use Open vSwitch with MLAG then you can use these settings:
      auto bond0
      iface bond0 inet manual
          ovs_bonds eno1 eno2
          ovs_type OVSBond
          ovs_bridge vmbr0
          ovs_options lacp=active bond_mode=balance-tcp
      auto vmbr0
      iface...
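    A complete /etc/network/interfaces sketch along these lines; interface names and the address are placeholders, and the truncated original may differ:
      auto bond0
      iface bond0 inet manual
          ovs_bonds eno1 eno2
          ovs_type OVSBond
          ovs_bridge vmbr0
          ovs_options lacp=active bond_mode=balance-tcp

      auto vmbr0
      iface vmbr0 inet static
          address 192.0.2.10/24
          ovs_type OVSBridge
          ovs_ports bond0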
  • P
    proxuser77 replied to the thread Verteilung von Kernen.
    Two main reasons: performance and software licenses. If the CPU cores are spread across two physical CPUs, performance can suffer, since the connection (interconnect) between the two CPUs can become the limiting factor...
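    As an illustration (VM ID 100 and the core count are placeholders), keeping a guest on a single virtual socket with NUMA awareness enabled can look like:
      # One virtual socket, 8 cores, NUMA topology exposed to the guest
      qm set 100 --sockets 1 --cores 8 --numa 1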