I've just had to restore some VMs on PVE after a power outage corrupted part of my local-lvm array.
When I restored the PBS VM from a recent backup, I noticed a lot of breakage (since fixed), with AppArmor not allowing dhclient, python...
One more small thing, since you're already running old server hardware:
Be very careful with older server-/enterprise-grade SAS and/or SATA SSDs. Those parts often contain fairly large (electrolytic) buffer capacitors. Against...
Subject: Migration failed: Storage 'local-lvm' (LVM-Thin) not available on target node
Message:
Hi everyone,
I am trying to migrate an LXC container from my main node (pve) to a second node (minipve).
Source Node (pve): Has local-lvm (LVM-Thin...
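A migration like this usually fails because `local-lvm` simply doesn't exist on the target node. One common workaround is to remap the container's volumes onto a storage the target does have. A sketch, assuming the container ID is 101 and the target offers a storage called `local-zfs` (both are made-up example names; check `pvesm status` on each node first):

```shell
# Map volumes from the source's local-lvm onto the target's storage
# while migrating (restart mode is required for running containers):
pct migrate 101 minipve --restart --target-storage local-lvm:local-zfs

# Separately, it often helps to restrict the local-lvm storage
# definition to the nodes that actually have it:
pvesm set local-lvm --nodes pve
```

The `--target-storage` mapping depends on your PVE version supporting it for container migrations; if it doesn't, backing up and restoring the container onto the other node's storage is the fallback.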
Then how do people end up with > 300 OSDs???
Surely they don't have 300+ nodes too.
Gotcha.
Therefore, with four nodes, the most I would be able to do is a (3,1) EC, correct?
(I've only been running a very tiny (2,1) EC with...
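That reasoning checks out: with the failure domain set to "host", each of the k+m chunks must land on a different node, so four nodes caps you at k+m=4, i.e. (3,1). A sketch of what that profile looks like in Ceph (the profile and pool names `ec-3-1` / `ecpool` and the PG count are example values, not recommendations):

```shell
# k=3 data chunks + m=1 coding chunk, one chunk per host:
ceph osd erasure-code-profile set ec-3-1 k=3 m=1 crush-failure-domain=host

# Create an erasure-coded pool using that profile:
ceph osd pool create ecpool 32 32 erasure ec-3-1
```

Keep in mind that m=1 means the pool only survives a single host failure, and with all four hosts in use there is nowhere for Ceph to rebuild to while a node is down.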
Hi there!
Yes, I know the hardware is quite old, and that's exactly how my colleague handed it over to me (his exact words: "If you don't need the server anymore, please take it to the recycling yard!")....
At the moment it's just - I find...
That makes sense. For some reason, I was expecting it to set aside at least the minimum ARC amount and reserve it, but it makes sense that it doesn't do that on boot.
I've since seen that system use the full allotment of ARC after running...
I would expect this behavior.
And yes, the ARC only actually gets used once ZFS recognizes relevant read patterns via the MRU/MFU counters (--> "warm-up". Newer systems may reload the ARC from disk on boot, though...)
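For anyone wanting to watch that warm-up happen, the ARC counters are exposed under `/proc` on OpenZFS-on-Linux, and a floor/ceiling can be pinned via module parameters. A sketch; the 4 GiB / 16 GiB values are example numbers, not recommendations:

```shell
# Current ARC size, MRU/MFU split, and configured min/max (bytes):
awk '/^(size|mru_size|mfu_size|c_min|c_max) / {print $1, $3}' \
    /proc/spl/kstat/zfs/arcstats

# Pin the ARC between 4 GiB and 16 GiB across reboots:
cat > /etc/modprobe.d/zfs.conf <<'EOF'
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=17179869184
EOF
update-initramfs -u    # then reboot for the module options to apply
</imports>```

Note that even with `zfs_arc_min` set, the ARC still starts small after boot and only grows as reads come in; the minimum is a floor for eviction, not a pre-allocation.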
Not really. If you use the WebUI or follow the documentation, the no-subscription repository gets added with http instead of https, so no manual intervention is needed:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_no_subscription_repo...
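For reference, this is the repository line the docs describe, with plain http (the `bookworm` codename below matches PVE 8; substitute your Debian release):

```
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
```

It goes into a file under `/etc/apt/sources.list.d/`, followed by an `apt update`.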
In my understanding, the failure domain is usually "host". I need to be able to shut down/reboot one node for maintenance, and I want everything to stay alive when (not: if) one node has any kind of problem.
You will lose three or four OSDs if any...
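A quick way to sanity-check this on your own cluster is to look at which failure domain your CRUSH rules actually select, and how many OSDs sit on each host (they all go offline together when that host reboots):

```shell
# Which bucket type each rule chooses leaves from (look for "host"):
ceph osd crush rule dump

# OSD-to-host layout at a glance:
ceph osd tree
```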
...from understanding failure domains. Damn, @UdoB beat me to the punch. I won't "professor" you on this. You can either read and understand, or deploy your preconceived notions and learn on your own flesh and blood. I would also note that if your...
Even SAS SSDs (given the mention of "10K", these will be HDDs here) draw a fair amount of power. But that's only one more sub-problem. From today's perspective, the system in question is simply wasteful in terms of energy efficiency...
Replacing a Failed Disk in Proxmox (BTRFS RAID1 – Simple Explanation)
If one disk fails in a Proxmox BTRFS RAID1 setup, your server will still boot from the other disk. That’s normal and expected.
Sometimes it boots in “read-only mode.” This is...
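The actual replacement then boils down to `btrfs replace`. A sketch, assuming the filesystem is mounted at `/`, the missing disk had devid 2, and the new disk is `/dev/sdc` (all three are example values; check yours first):

```shell
# If the filesystem came up read-only/degraded, remount it writable:
mount -o remount,rw,degraded /

# Find the devid of the missing disk:
btrfs filesystem show /

# Rebuild its data onto the new disk, then watch progress:
btrfs replace start 2 /dev/sdc /
btrfs replace status /
```

On a Proxmox boot disk you'd also need to reinstall the bootloader on the new disk afterwards so the system can still boot if the surviving disk fails later.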
… As long as I've been using Proxmox, and as often as I've had to tinker with the /etc/network/interfaces file to configure bonds and VLANs and everything else, you'd think I'd have realized it can actually be used to configure all of this sort...
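For anyone in the same boat, a minimal bond-plus-VLAN-aware-bridge stanza in /etc/network/interfaces looks roughly like this (interface names `eno1`/`eno2` and the addresses are examples for your own values):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```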