The number of OSDs isn't relevant to a pool as long as it is larger than the minimum required by the CRUSH rule. For example, if you have an EC profile with k=8, m=2, you need a minimum of 10 OSDs DISTRIBUTED ACROSS 10 NODES. So 1 OSD per node...
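As a minimal sketch of that setup (profile name `ec-8-2`, pool name `ecpool`, and the PG count are placeholders, not from the original post):

```shell
# Create an erasure-code profile with k=8 data chunks and m=2 coding chunks;
# crush-failure-domain=host places each of the 10 chunks on a different node,
# which is why 10 nodes is the minimum here.
ceph osd erasure-code-profile set ec-8-2 k=8 m=2 crush-failure-domain=host

# Create a pool using that profile (128 PGs is just an example value).
ceph osd pool create ecpool 128 128 erasure ec-8-2
```

This only runs against a live Ceph cluster; with fewer than 10 hosts the pool's PGs would stay undersized/inactive.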
Not really. If one uses the WebUI or follows the documentation, the no-subscription repository gets added with http instead of https, so no manual intervention is needed:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_no_subscription_repo...
I vaguely remember discussions from 5-8 years ago about setting this up the other way around, even more elegantly:
namely, being able to write raw files from a PBS, i.e. that "intermediate server", to offline media.
I'd even do that without any pain via script...
Kernel is 6.17.9 or something like that, the latest from pve-no-subscription.
bcachefs is DKMS from Kent's repository, updated daily.
Quote from Kent's Assistant (ProofOfConcept):
I have the same issue and am still looking for a solution. It doesn't happen in my VMware environment with the same setup, so it is Proxmox-related. I'll keep looking and I'll post here if I get it resolved.
Typical setup is
INTERNET/WAN <-----> Firewall <----> PMG <----> Mail server
So on your firewall, just create a NAT rule that routes port 587 to your mail server, not your PMG. This makes sense anyway, because PMG just checks incoming and...
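If the firewall happens to be Linux-based, such a rule could look like the following sketch (the WAN interface `eth0` and the mail server address `192.0.2.10` are placeholders):

```shell
# DNAT submission traffic (TCP 587) arriving on the WAN interface
# directly to the internal mail server, bypassing PMG.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 587 \
    -j DNAT --to-destination 192.0.2.10:587
```

On appliance firewalls (pfSense, OPNsense, etc.) the equivalent is a port-forward rule in the GUI.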
Sorry for the late reply.
I would like to have a total of 2TB of usable space, and performance isn't crucial since most services I run are for personal use.
Regarding the RAID controller, I checked and unfortunately the Dell PERC 6/i doesn't...
This topic cost me quite a few hours as well. Luckily I then came across this hint, and now network boot finally works again.
THANKS!
Thanks, that worked. I had tried to edit the machine while the restore was still running. The restore completed successfully!
That's also why pct unlock didn't work: the restore was still in progress.
Super, that's a weight off my...
Ah, I misunderstood then, as you said "This is happening also on normal 2K pages.".
I don't see any patches in the kernel around this area between the latest versions, so would be surprised that this is fixed by a kernel upgrade. Oh well, good...
@pmvemf did you ever find out / fix the issue with your setup? We have a very similar setup (MSA 2060, multiple nodes, 25Gb links) and are facing the same issues, including the response from HP (unsupported setup) and others (network related...
Many thanks @tomsie1000 for the detailed write-ups! Sent a new revision of the patch series to also enroll the 2023 KEK: https://lore.proxmox.com/pve-devel/20260223152556.197761-1-f.ebner@proxmox.com/T/
Only 1G hugepages.
But recently I do not see those anymore. I upgraded the kernel and changed a few settings, e.g. async IO: threads (previously with io_uring I had these issues).
The allocated pages confirm the picture: 1,673,851 pages × 4 MiB (the default page size on the MSA 2060) = ~6.39 TiB, which matches the ~7.0 TB at pool level.
OCFS2 reports 3.8T used → roughly 2.5 TiB are stale allocations that the filesystem...
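The arithmetic above is easy to double-check in a shell (page count taken from the post; the 4 MiB page size is the MSA 2060 default as stated):

```shell
pages=1673851
page_mib=4
# Total allocated space in TiB: pages * 4 MiB / 1024^2
awk -v p="$pages" -v s="$page_mib" \
    'BEGIN { printf "%.2f TiB\n", p * s / 1024 / 1024 }'
# prints: 6.39 TiB
```

Subtracting the 3.8T that OCFS2 reports as used from those ~6.39 TiB gives the ~2.5 TiB of stale allocations mentioned above.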