Hi everyone,
I've recently started facing issues with a large MySQL test database (20 TB VM disk) and MySQL replication. Fuller details on the DB side of the issue can be found in another thread, but they should be mostly irrelevant to the questions...
If you use Open vSwitch with MLAG, then you can use these settings:
auto bond0
iface bond0 inet manual
    ovs_bonds eno1 eno2
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_options lacp=active bond_mode=balance-tcp

auto vmbr0
iface...
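Since the vmbr0 stanza above got cut off, a completed version might look roughly like the sketch below; the address and gateway are placeholders for your management IP and need to be adapted.

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    ovs_type OVSBridge
    ovs_ports bond0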
Question 2 is solved now as well:
Added the Proxmox pve-root-ca.pem to the CheckMK Trusted Anchor Storage (CheckMK Global Settings -> Trusted certificate authorities for SSL -> copy the .pem cert there). Then re-schedule the Check_MK Agent inventory service as...
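For anyone doing the same, a rough sketch of how to grab the CA from a node first (the path is the standard Proxmox location, the hostname is a placeholder):

# copy the Proxmox cluster CA to the machine you manage CheckMK from (adjust the hostname)
scp root@pve-node:/etc/pve/pve-root-ca.pem .
# then paste the file's contents into Global Settings -> Trusted certificate authorities for SSL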
I create /run/pve as root, and the CT startup works. But if I reboot now, the /run/pve dir will disappear, I can guarantee that.
And yes, it's an edge case, but a good one to have: a single image for X number of hosts. Maybe someone can come up with...
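One possible workaround sketch, not an official solution: a systemd-tmpfiles drop-in that recreates the directory at every boot (the mode and ownership below are assumptions and may need adjusting):

# recreate /run/pve on every boot via tmpfiles.d (mode and owner are assumptions)
echo 'd /run/pve 0700 root root -' > /etc/tmpfiles.d/pve-run.conf
# apply it immediately without a reboot
systemd-tmpfiles --create /etc/tmpfiles.d/pve-run.conf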
Well, that is a setup that is not officially supported, so there might be some edge cases, and we don't have a lot of experience with it.
Regarding the /run/pve directory: is there a chance that the permissions are not as expected on some level...
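To rule that out, something like the following could be run on the host (plain coreutils, nothing Proxmox-specific assumed):

# check how /run is mounted and what the permissions on /run and /run/pve look like
findmnt /run
ls -ld /run /run/pve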
My use case is to share a VM with my other PCs. This way I don't have to update 3 OSes, just one.
When the VM is loaded on the other host, it uses the disk image from the "main" node.
Well, that's another chapter, but I never had issues with that.
The...
Two full backup cycles are done. It seems that 6.17.4-2 resolved the issue, and fleecing disks seem to have prevented VMs from becoming unresponsive. Thanks to everyone reporting and resolving this issue.
I will be a bit more conservative with new updates...
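For anyone wanting to try the same, fleecing can be enabled for a manual backup roughly like this (PVE 8.2 or newer assumed; the VMID, target storage and fleecing storage are placeholders):

# back up VM 100 with a fleecing image placed on local-lvm (adjust VMID and storage names)
vzdump 100 --storage backup-store --fleecing enabled=1,storage=local-lvm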
This is intended. I have 3 PVE hosts, and 2 of them are down most of the time. I updated the quorum value accordingly.
But does this have something to do with a missing /run/pve dir?
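For context, lowering the expected votes on the remaining node can be done roughly like this (the value 1 is just an example for a single node left running):

# temporarily tell corosync to expect only one vote so the remaining node keeps quorum
pvecm expected 1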
The problem is that this node is part of a cluster but cannot reach the remaining cluster on either of the two networks:
The result is that the service/protocol for cluster communication is not working, and as a result, the...
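A few things that could be checked on the affected node (standard Proxmox/corosync tooling):

# quorum and membership as seen from this node
pvecm status
# per-link state of the corosync/knet links
corosync-cfgtool -s
# service state and recent corosync log entries
systemctl status corosync pve-cluster
journalctl -u corosync -b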
Hello,
I have a 6-node Proxmox VE cluster (version 8.4.14) connected to two Dell ME4024 storage arrays.
Each array has two controllers and is connected to all nodes.
Storage configuration in Proxmox:
SAS1A – Array 1, controller A
SAS1B – Array...
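With two controllers per array and every node attached, each LUN typically shows up over several paths; a quick sanity check on one node could look like this (dm-multipath is assumed to be installed and configured):

# list multipath devices and their active/passive path groups
multipath -ll
# show the SAS block devices with size and WWN
lsblk -o NAME,SIZE,WWN,TYPE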
PVE itself runs on an HPE boot device. That is a plug-in card holding two NVMe drives that are mirrored 1:1. The file system is ext4. The storage consists of SSDs set up as RAID10 under ZFS.
Regarding the upgrade to 9.x...
Two main reasons: performance and software licenses.
If the CPU cores are spread across two physical CPUs, there can be a performance penalty, since the connection (interconnect) between the two CPUs can become the limiting factor...
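Translated into a concrete VM config, that usually means keeping the vCPUs on one virtual socket and enabling NUMA, roughly like this (the VMID and core count are placeholders):

# give VM 100 a single virtual socket with 8 cores and enable NUMA awareness
qm set 100 --sockets 1 --cores 8 --numa 1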