The failure domain must never be the OSD.
With failure domain = host you have at most one copy, or one chunk of an erasure-coded object, per host. All the other copies or chunks live on other hosts.
That is why you need at least three hosts for...
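For reference, a rough sketch of how the failure domain is set in Ceph; the rule and profile names here are just examples:

```shell
# Replicated pools: create a CRUSH rule with "host" as the failure domain
# ("replicated_host" is an arbitrary name, "default" is the usual CRUSH root).
ceph osd crush rule create-replicated replicated_host default host

# Erasure-coded pools: the failure domain lives in the EC profile.
ceph osd erasure-code-profile set ec_k4m2 k=4 m=2 crush-failure-domain=host
```

With `crush-failure-domain=host`, CRUSH places each of the k+m chunks on a different host, which is why you need at least k+m hosts for such a pool.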
As a new user to proxmox (coming from TrueNAS Scale), I made a bunch of mistakes that cost me an aggravating week of time/effort. The two biggest issues I had were:
Not realizing I had to export my TrueNAS ZFS pool before importing it into...
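For anyone following along, the move looks roughly like this (pool name "tank" is just an example):

```shell
# On TrueNAS, before pulling the disks: cleanly export the pool
zpool export tank

# On the Proxmox host: list importable pools, then import by name
zpool import
zpool import tank
```

Skipping the export is what forces you into `zpool import -f` later, since the pool still looks "in use" by the old system.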
Maybe use the Datacenter Manager: https://forum.proxmox.com/forums/datacenter-manager-installation-and-configuration.28/ ? Or use backup and restore (for a central/shared location).
EDIT: If you don't have the same storage on multiple nodes then maybe...
First let me say this: I have zero experience with such large-scale setups.
It depends on "k+m", of course. It seems to be a good idea to have fewer than m OSDs on each single node: if you lose one node, you should not lose m OSDs, but one less...
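To make the arithmetic concrete, here is a small sketch with assumed values k=4, m=2:

```shell
# Erasure coding k+m: k data chunks + m coding chunks per object;
# the object survives the loss of any m chunks.
k=4; m=2
total=$((k + m))            # chunks stored per object: 6
max_lost=$m                 # chunks you can lose and still recover: 2
# If a single node holds m or more chunks, one node failure already
# consumes the entire redundancy margin, so cap it at m-1 per node:
per_node=$((m - 1))
echo "store $total chunks, survive losing $max_lost, at most $per_node per node"
```

So with 4+2 you can afford to lose two chunks, and keeping at most one chunk per node means a single node failure leaves you one spare loss before data is at risk.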
That makes sense. For some reason, I was expecting it to set aside at least the minimum ARC amount and reserve it, but it makes sense that it doesn't do that on boot.
I've since seen that system use the full allotment of ARC after running...
Late reply, but one option might be to mark the file as something that Proxmox VE shouldn't touch.
I do this for /etc/resolv.conf inside a PiHole LXC container, but I guess with the right name it would work for anything.
File...
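If the poster means the immutable attribute (an assumption on my part, but it is the usual trick for /etc/resolv.conf), it looks like this:

```shell
# Make the file immutable so nothing can overwrite it, not even root
chattr +i /etc/resolv.conf

# Verify: an 'i' should appear in the attribute list
lsattr /etc/resolv.conf

# Undo later when you actually want to edit it
chattr -i /etc/resolv.conf
```

Note this requires the filesystem to support extended attributes, and inside an unprivileged container it may need extra capabilities.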
In my understanding the failure domain is usually "host". I need to be able to shut down/reboot one node for maintenance. And I want everything to stay alive when (not: if) one node has any kind of problem.
You will lose three or four OSDs if any...
Without concrete quotes of the error messages, an answer is difficult. What exactly did you want to do? What did you do to that end? What did you expect? What happened instead? Please include, in each case, the complete command and the complete response...
Both. PVE also counts cache. Htop shows this in yellow, but it's hard to see/interpret. See free -h and google something like "proxmox memory usage"; there are decade-old discussions about this.
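For reading the numbers, the key point is which column you look at:

```shell
# "free" alone understates usable RAM because buffers/cache are reclaimable.
# The "available" column is the realistic estimate of memory for new workloads.
free -h
```

The gap between "used" and "available" is mostly page cache, which the kernel drops on demand; that is what PVE's graph is counting as used.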
By default PVE Notifications have a single target called mail-to-root and a single matcher called default-matcher which sends all notifications to the email address defined for root@pam. Go to Datacenter -> Permissions -> Users and open up the...
I think you need to carefully consider what your end goal is. PCIe passthrough is not a good citizen in a PVE cluster: VMs with passed-through PCIe devices not only cannot migrate anywhere, but are also liable to hang the host. If you MUST use PCIe passthrough...
Yes, they are; unfortunately, with a RAIDZ (unlike a mirror) a disk cannot be removed again once it has been added. The only options then are swapping a disk via replace, or completely destroying the pool and rebuilding it. That...
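The replace path mentioned above looks roughly like this (pool and device names are placeholders):

```shell
# Swap a disk in a raidz vdev: old device out, new device in
zpool replace tank /dev/sdb /dev/sdd

# Watch the resilver progress until the new disk is fully populated
zpool status tank
```

This only swaps like-for-like; it does not shrink the vdev or remove a disk from the RAIDZ.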
The usual boot process uses the BIOS firmware to read the very first blocks of the operating system. This happens before even the "initrd"/"initramfs" is available, hence "pre-boot".
While boot devices may be local hardware and network devices with some...
I would expect this behavior.
And yes, the ARC only gets actively used once ZFS recognizes relevant read patterns by watching the MRU/MFU counters (--> "warm-up". Newer systems may reload the ARC from disk on boot, though...)
Since your nodes are in a Proxmox cluster, SSH keys are already exchanged between them.
That makes this pretty painless.
SSH shutdown from node 1's NUT script: node 1 already has a working NUT client, so you just add a script that SSHs into node...
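A minimal sketch of such a hook, assuming node 2 is reachable as "node2" and the script is wired into node 1's NUT shutdown sequence (both names and the delay are assumptions to adapt):

```shell
#!/bin/sh
# Called by NUT on node 1 when the UPS reports low battery:
# shut down node 2 over SSH first, then power off this node.
ssh -o ConnectTimeout=10 root@node2 "shutdown -h now" \
  || logger "NUT: could not reach node2 for shutdown"

sleep 30            # give node 2 time to finish powering off
shutdown -h now     # finally shut down node 1 itself
```

Since the cluster already exchanged SSH keys, the `ssh` call needs no password prompt, which is what makes this approach painless.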
There are of course numerous valid approaches. I use (in my homelab) "Zamba": https://github.com/bashclub/zamba-lxc-toolbox. It gives you a mature AD-compatible file server which, for the Windows users, for example...
Yes, that's the culprit.
I've been there once. The lesson I learned was to only modify the structure of a cluster when all nodes are online :-)
The workaround is to make corosync.conf editable. As that node does not have quorum, you need to mount the...
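One common way to do this (a sketch; double-check against your situation before running anything on a production node) is to run the cluster filesystem in local mode so /etc/pve becomes writable without quorum:

```shell
# Stop the cluster filesystem service
systemctl stop pve-cluster

# Start pmxcfs in local (forced) mode; /etc/pve is now writable
pmxcfs -l

# ... edit /etc/pve/corosync.conf (remember to bump config_version) ...

# Return to normal operation
killall pmxcfs
systemctl start pve-cluster
```

Alternatively, `pvecm expected 1` can temporarily lower the expected vote count on a single surviving node.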
Cool, thanks! I guess that's exactly what we'll need. And when it gets integrated into PDM, even better. HA is not something that we're after, so that's not a limitation. The only thing that we'll need to keep in mind is to update the replication...
You could use pve-zsync but it doesn't allow auto-failover aka HA:
https://pve.proxmox.com/wiki/PVE-zsync
So you would need to launch the VMs on your offsite cluster manually in case of a failover event.
The Datacenter-Manager doesn't have it...
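A rough example of setting up such a job (host and dataset names are placeholders, so adjust them):

```shell
# Create a recurring sync job for VM 100 to the offsite node,
# keeping the last 7 snapshots on the destination.
pve-zsync create --source 100 --dest offsite-node:rpool/replica \
  --name vm100 --maxsnap 7

# One-off manual sync with the same parameters:
pve-zsync sync --source 100 --dest offsite-node:rpool/replica --verbose
```

On failover you would then recreate the VM config on the offsite node (pve-zsync copies it alongside the disks) and start the VM by hand.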