Whether you do a split or not is basically a matter of preference and constraints. For example, my homelab has two mini-PCs which only have two storage slots (one NVMe, one SATA). To split system and VMs I would have to sacrifice redundancy. I...
Hmm, just set the policy to DROP maybe, and disallow everything? But at that point, why even connect the NIC?
EDIT: Ah, because you want IPv4 only, heh. Well, I'm not sure you need ipv6-icmp if you block IPv6; it's enough for me (and do not...
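To make the "policy DROP, disallow everything" idea concrete, here is a minimal nftables sketch. It is an assumption about the setup, not the poster's actual ruleset: it drops all inbound traffic by default and only permits loopback, established connections, and SSH over IPv4, so IPv6 never gets in at all.

```shell
#!/usr/sbin/nft -f
# Hypothetical ruleset: default-drop input, IPv4 SSH only.
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept                          # loopback
        ct state established,related accept      # replies to our own traffic
        meta nfproto ipv4 tcp dport 22 accept    # SSH over IPv4 only
        # no IPv6 accept rules -> all inbound IPv6 is dropped by the policy
    }
}
```

Load it with `nft -f /etc/nftables.conf` and verify with `nft list ruleset`.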
I spent the day tinkering with tcpdump. But how to tell from the output where the packets get dropped eludes me. Maybe someone can enlighten me. I might need to run tcpdump with different parameters. I also saved the output to a...
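One way to narrow down where packets disappear is to capture on two points of the path at once and compare. A sketch, with placeholder interface names (`vmbr0`, `enp1s0`) that will differ on your host:

```shell
# Capture the same traffic on the bridge and on the physical NIC.
# If a packet appears on vmbr0 but never on enp1s0 (or vice versa),
# it is being dropped somewhere between the two, e.g. by the firewall.
tcpdump -ni vmbr0  icmp &
tcpdump -ni enp1s0 icmp
```

`-n` skips DNS lookups so output lines arrive immediately; replace `icmp` with a filter like `host 192.0.2.10 and port 22` to follow a specific flow.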
This is the community forum, if you're not content with the replies you get for free of volunteers to your thread, you might want to check out the paid support offerings.
Yes, and I would rather put the secondary backup of your master on a dedicated server; I was referring to the following part:
I find it not very convincing to reserve a lot of extra space in the master as a secondary backup when, for the...
How do you then make sure the backups don't break? I do understand that the potential duration of the verify jobs makes people shy away from them, but without a working restore the backups are worthless.
Then you really have some serious problem there.
Even the PBS instances with many slow HDDs and 200TB+ of data on them need at most half an hour for the prune. GC can easily take 4-18 hours, though. And that is already the slowest setup I manage...
Thank you for the clarification! I think I got it now.
One last follow-up question to
So you are saying that the 2x2 striped mirror, as in UdoB's example, will be performant enough that I can skip splitting system and VMs?
The ones I installed in my 2 servers do not natively appear anywhere in Proxmox, whether installed into an existing PVE 8.3 or present before a fresh install of PVE 9.1. I wish it were as simple as plug-and-play and let the native kernel do...
I read the changelog and now there's an option in pmg-gui for these cases. I'll update pmg-api to test it.
mail proxy: add checkbox for new 'accept-broken-mime' option.
You can also configure in the verify job that already-verified backups only get re-verified after a certain time, e.g. after 7/14/30/more days. This way new backups will get verified, but you don't need to re-verify any old data in any verify job...
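The same settings can be applied from the CLI. A hedged sketch: the job ID `myjob` is a placeholder, and you should check `proxmox-backup-manager verify-job update --help` on your version for the exact option names.

```shell
# Skip snapshots that already passed verification, and only re-verify
# those whose last verification is older than 30 days.
proxmox-backup-manager verify-job update myjob \
    --ignore-verified true \
    --outdated-after 30
```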
Nope, you won't buy the two small SSDs, and you will need at least one more 4TB SSD, since a striped mirror needs at least four drives.
It's the ZFS equivalent of RAID10: basically you build two mirrors, then stripe them together. This gives...
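Creating such a pool takes one command. A sketch with placeholder device names (`tank`, `sda`-`sdd` are assumptions; in practice use stable `/dev/disk/by-id/` paths):

```shell
# Two mirror vdevs, striped together into one pool (RAID10 equivalent):
# reads/writes are spread across both mirrors, doubling IOPS vs one mirror.
zpool create tank mirror sda sdb mirror sdc sdd

# The layout shows up as two mirror-N vdevs under the pool.
zpool status tank
```

With four 4TB drives you get roughly 8TB usable, and the pool survives the loss of one drive per mirror.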
The native kernel driver should work; there is no need to install drivers from Mellanox.
But the ConnectX-3 cards are pretty old and use a different driver than ConnectX-4/5/6, so I don't know if they still work fine. (I haven't used them since...
Since the NIC is up, the server should be reachable. Can you ping its IP? Have you tried connecting via SSH?
Otherwise, try pinging something external from inside the VM.
P.S. Your /20 subnet is already quite large for a playground.
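The suggested checks in order, with placeholder addresses (192.0.2.10 stands in for the server's IP):

```shell
# From another machine on the LAN: is the server's IP reachable at all?
ping -c 3 192.0.2.10

# Does SSH answer on it?
ssh root@192.0.2.10 true

# From inside the VM: does outbound traffic leave the network?
ping -c 3 1.1.1.1
```

If the external ping works but the inbound one doesn't, the problem is most likely on the host's firewall or bridge rather than inside the VM.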
You can also run ZFS on top of a HW RAID volume.
In that case the ZFS pool should always stay a single vdev; never add a second vdev to the pool. Then it runs stably and without problems.
Of course, you then lose features like self-healing...
A hardware RAID controller's Battery Backup Unit (BBU) can cache sync writes, and that might indeed help. (Or replace the SSDs with ones that have PLP.)
ZFS RAIDz1 is a poor choice for VMs, as the IOPS are much lower than using a ZFS stripe of...