Do you have numbers on what the performance should be? Without them, you can't decide which VMs are "non-latency-intensive". Ceph isn't slow by any means, but of course you have the added latency and capacity limit of the network. How much that...
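If you don't have such numbers yet, a quick way to get a baseline is a short fio run against the storage you want to compare (purely a sketch; the file path, size and runtime are placeholders you'd adjust):

    fio --name=latency-test --filename=/mnt/test/fio.dat --size=1G \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
        --ioengine=libaio --direct=1 --runtime=60 --time_based

The average and percentile completion latencies it reports give you something concrete to compare local disks against Ceph with.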
The original issue with QEMU Agent fsfreeze was that it notified VSS about the backup and all applications subscribed to VSS would prepare for it. In the case of SQL Server, it wrongly understood that it had to trim the log and thus broke the...
Space will be preallocated (that is, thin provisioning will be lost) on any non-shared storage if you live migrate the VM, because QEMU needs to put the source disk into a "mirror" state so every write done to the source disk is written...
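For illustration only (VM ID and target node name are placeholders), an online migration of a VM with local disks looks roughly like this, and it is during that drive-mirror phase that the target image gets fully allocated:

    qm migrate 100 pve2 --online --with-local-disks

Depending on the target storage and whether discard is enabled on the virtual disk, you may be able to reclaim the space afterwards with fstrim inside the guest.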
The Ceph docs' recommendations are based on simplicity of deployment and the fact that in a pure Ceph cluster you will have dozens or more servers contributing to the overall cluster network capacity. In a typical PVE+Ceph cluster you usually have a...
Hello,
as you already noted, Ceph in a small homelab opens a whole can of worms: https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/
With your current hardware you basically have the following options:
- Build a...
I suggest you open a new thread and provide as much information as possible (pveversion -v, qm config VMID, etc.). Even if your problem shows similar symptoms, it probably isn't related, as this one was solved in a 6.2 kernel released long ago. The...
We're excited to welcome partimus, a hosting provider from Germany, as our newest official Proxmox Hosting Partner. Partimus is part of the primeline group, together with the primeLine Solutions GmbH, a longstanding Proxmox Gold Partner.
Proxmox...
If you want a supported configuration, use /etc/network/interfaces, as it is currently the only supported way to configure the network, not just for the GUI but for other functionality like cluster deployment.
IMHO you should adapt the tool...
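As a rough sketch of what the supported file looks like (interface names and addresses are placeholders), a minimal /etc/network/interfaces with one bridge is along these lines:

    auto lo
    iface lo inet loopback

    auto eno1
    iface eno1 inet manual

    auto vmbr0
    iface vmbr0 inet static
            address 192.168.1.10/24
            gateway 192.168.1.1
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0

If ifupdown2 is installed (the default on current PVE), changes can usually be applied live with ifreload -a, which is also what the GUI's "Apply Configuration" button does.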
Appreciate the effort, but handing out this kind of script has its risks. You can get similar apt errors about "attempting to remove proxmox-ve package" for many different reasons. As mentioned, this will never happen if you use the correct PVE...
Slightly off topic (and I might be missing something): if you use a single switch, your cluster will have very reduced availability, as the switch becomes a SPOF. Same with that breakout cable (SFP cables can fail too).
That drive is dying in quite a peculiar way, although I've seen other weird behaviors like that. Simply back up all data, buy a new drive and ditch the old one. I wouldn't use it for anything besides practicing with broken drives in a lab.
At the...
Keep in mind that in a 2-node cluster, if one node goes down, the other will lose quorum too, as it won't have a majority of votes (it will have just 1 vote, which is exactly 50% of the 2 votes total). A 2-node cluster + HA will not provide any...
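If a third full node isn't an option, the usual workaround is a QDevice on some external machine, which contributes a tie-breaking vote (sketch only, the IP is a placeholder; check the docs for your version):

    # on the external machine
    apt install corosync-qnetd
    # on every cluster node
    apt install corosync-qdevice
    # then, from one cluster node
    pvecm qdevice setup 192.168.1.50

With 3 votes in total, the surviving node plus the QDevice still hold a majority when the other node fails.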
To me it seems that the drive that ends up in a DEGRADED state is dying in some funky way that causes the behavior you see. I would make sure you have a backup, remove the failing drive, connect a new one and use zpool replace to resilver it. You could...
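A sketch of that replacement, with pool and device names as placeholders:

    # see which device is misbehaving
    zpool status tank
    # swap in the new disk, then let ZFS resilver onto it
    zpool replace tank /dev/disk/by-id/old-disk /dev/disk/by-id/new-disk
    # watch resilver progress
    zpool status -v tank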
I would use QEMU Agent hook scripts instead, so you can run whichever time-sync command you need inside the VM once the filesystem is thawed (see the sketch below). Some details on [1] and [2].
Out of curiosity: which DB is it? Using Percona, MySQL GTID replication...
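A minimal sketch of such a hook, assuming a Linux guest whose qemu-guest-agent runs the fsfreeze-hook scripts and chrony as the time-sync tool (the path and the tool are assumptions, adjust to your distro):

    #!/bin/sh
    # e.g. /etc/qemu/fsfreeze-hook.d/10-resync-time
    # the agent calls hooks with "freeze" before the snapshot and "thaw" after it
    case "$1" in
        thaw)
            # step the clock once the filesystem is writable again
            chronyc makestep
            ;;
    esac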
If you're going with a routed setup via Openfabric / OSPF, then no bonds should be required - they're probably even detrimental to the whole setup. FRR supports ECMP, so just adding multiple interfaces to the same Openfabric router should already...
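As a non-authoritative sketch (interface names and the NET are placeholders), the relevant part of frr.conf for two uplinks in the same OpenFabric instance would look roughly like this; FRR can then install equal-cost routes over both:

    router openfabric fabric1
     net 49.0001.1111.1111.1111.00
    exit
    !
    interface en05
     ip router openfabric fabric1
    exit
    !
    interface en06
     ip router openfabric fabric1
    exit
    !
    interface lo
     ip router openfabric fabric1
     openfabric passive
    exit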
Feeling that I'm going to repeat myself a bit too much :), but... that is showing the configured RAM in the VM, not the used RAM. The green area will be drawn regardless of the power state of the VM or if it has ever been powered on. If you power...
HA acts locally on each host and will fence a host if it loses quorum. To lose quorum, corosync on that host has to decide that neither link0 nor link1 is operating properly (NIC link down, switch down, too much jitter, too much...
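For reference, a hedged sketch of how the two links show up in /etc/pve/corosync.conf (names and addresses are placeholders); knet monitors both, and only when corosync considers both unusable does the node drop out of the quorate partition:

    nodelist {
      node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.10.1
        ring1_addr: 10.20.20.1
      }
      node {
        name: pve2
        nodeid: 2
        quorum_votes: 1
        ring0_addr: 10.10.10.2
        ring1_addr: 10.20.20.2
      }
    }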