The number of OSDs isn't relevant to a pool as long as it is larger than the minimum required by the CRUSH rule. For example, if you have an EC profile of k=8, m=2, you need a minimum of 10 OSDs DISTRIBUTED ACROSS 10 NODES. So 1 OSD per node...
@Johannes S has explained the most important points very well. To add to that, regarding fixing the IDs after the fact:
The cleanest way without a temporary disk is zpool replace: you essentially replace the disk with itself, but using the by-id path:
# First the...
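The commands are cut off above; here is a sketch of the idea, with a placeholder pool name (tank) and a placeholder device ID, neither of which is from the original post:

```shell
# Find the stable by-id path for the disk currently addressed as /dev/sdb
ls -l /dev/disk/by-id/ | grep sdb

# Replace the disk "with itself", but addressed via its ID path
# (ata-EXAMPLE_SERIAL is a placeholder; use the path found above)
zpool replace tank /dev/sdb /dev/disk/by-id/ata-EXAMPLE_SERIAL

# Verify that the pool now lists the by-id path
zpool status tank
```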
Yep, a shelf, and then just lay them in sideways. I did the same in my test lab with 3x Lenovo P720. Screwed on 4 small corner brackets with a flange, then nothing slides around either.
That depends on your definition of "readily". For Dell workstations there are often readily available rack/rail mounting kits, which of course have to be explicitly ordered and paid for as accessories.
For the hobbyist, it may be...
@UdoB Thank you for your suggestion!
I used https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvecm_separate_node_without_reinstall as a reference and was able to make the corosync.conf files editable. Then I copied the conf file from a...
The failure domain must never be the OSD.
With failure domain = host you only have one copy or one chunk of the erasure-coded object on any one host. All the other copies or chunks live on other hosts.
That is why you need at least three hosts for...
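To make this concrete, here is a minimal sketch of creating such a profile; the profile name ec42 and the k/m values are made up for illustration, and crush-failure-domain=host is what puts at most one chunk per host:

```shell
# k=4 data chunks + m=2 coding chunks = 6 chunks per object,
# each placed on a different host (failure domain "host")
ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host

# Inspect the resulting profile
ceph osd erasure-code-profile get ec42
```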
As a new user to proxmox (coming from TrueNAS Scale), I made a bunch of mistakes that cost me an aggravating week of time/effort. The two biggest issues I had were:
Not realizing I had to export my TrueNAS ZFS pool before importing it into...
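For anyone hitting the same wall, the move is roughly this (the pool name tank is a placeholder, not from the original post):

```shell
# On TrueNAS, before moving the disks: cleanly export the pool
zpool export tank

# On the Proxmox host: list importable pools, then import via stable IDs
zpool import
zpool import -d /dev/disk/by-id tank
```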
Maybe use Datacenter Manager: https://forum.proxmox.com/forums/datacenter-manager-installation-and-configuration.28/ ? Or use backup and restore (for a central/shared location).
EDIT: If you don't have the same storage on multiple nodes then maybe...
First let me say this: I have zero experience with setups at that scale.
It depends on "k+m", of course. It seems to be a good idea to have fewer than m OSDs on each single node: if you lose one node, you should not lose m OSDs, but one less...
That makes sense. For some reason, I was expecting it to set aside at least the minimum ARC amount and reserve it, but it makes sense that it doesn't do that on boot.
I've since seen that system use the full allotment of ARC after running...
Late reply, but one option might be to mark the file as something that Proxmox VE shouldn't touch.
I do this for /etc/resolv.conf inside a PiHole LXC container, but I guess with the right name it would work for anything.
File...
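The post is cut off there, but one common way to mark a file untouchable is the filesystem's immutable attribute; assuming that is the trick meant here:

```shell
# Set the immutable flag: the file cannot be modified, deleted,
# or replaced (not even by root) until the flag is cleared
chattr +i /etc/resolv.conf
lsattr /etc/resolv.conf   # shows an 'i' in the attribute list

# Clear it again when you actually need to edit the file
chattr -i /etc/resolv.conf
```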
In my understanding, the failure domain is usually "host". I need to be able to shut down/reboot one node for maintenance. And I want everything to stay alive when (not: if) one node has any kind of problem.
You will lose three or four OSDs if any...
Without concrete quotes of the error messages, an answer is difficult. What exactly did you want to do? What did you do accordingly? What did you expect? What happened instead? Please give the complete command and the complete response for each...
Both. PVE also counts cache. htop shows this in yellow, but it's hard to see/interpret. See free -h and google something like "proxmox memory usage"; there are decade-old discussions about this.
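A quick way to see the difference on any Linux host (nothing PVE-specific assumed):

```shell
# "used" includes page cache, which the kernel drops on demand;
# the "available" column is the realistic figure
free -h

# The kernel's own estimate of memory that is free plus
# reclaimable-on-demand
grep MemAvailable /proc/meminfo
```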
By default PVE Notifications have a single target called mail-to-root and a single matcher called default-matcher which sends all notifications to the email address defined for root@pam. Go to Datacenter -> Permissions -> Users and open up the...
I think you need to carefully consider what your end goal is. PCIe passthrough is not a good citizen in a PVE cluster, since VMs with PCIe devices pinned not only cannot move anywhere, but are also liable to hang the host. If you MUST use PCIe passthrough...
Yes, they are. Unfortunately, with a RAIDZ (unlike a mirror), a disk cannot be removed again once it has been added. The only options then are swapping a disk with replace, or completely destroying the pool and recreating it. That...
The usual boot process uses the BIOS firmware to read the very first blocks of the operating system. This happens before even the "initrd"/"initramfs" is available, i.e. "pre-boot".
While boot devices may be local hardware and network devices with some...
I would expect this behavior.
And yes, the ARC only gets actually used when ZFS recognizes relevant read patterns by watching the MRU/MFU counters (--> "warm up". Newer systems may reload the ARC from disk on boot, though...)
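On ZFS-on-Linux hosts you can watch that warm-up happen, assuming the standard kstat interface is present under /proc/spl:

```shell
# ARC size plus MRU/MFU hit counters; re-run after repeated reads
# of the same data and watch the numbers climb
awk '/^(size|mru_hits|mfu_hits) / {print $1, $3}' \
    /proc/spl/kstat/zfs/arcstats
```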