I do not know how many nodes you utilize. If you have only three of them: https://forum.proxmox.com/threads/proxmox-ceph-performance-with-consumer-grade-samsung-ssd.179948/post-834586
Supplementary: the prompt for the emergency root login during boot is shown when an earlier failure makes it unlikely that the server will complete the boot process without administrator intervention. Logging in will give you a shell with enough...
That's the first mistake right there. RaidZ with slow HDDs and "performant" is a complete contradiction. With 4 HDDs in a mirror you will get OK performance, but like this it will be more of a snail's pace.
OK
If you build an LVM thin pool, that is a block...
That's the worst design possible. Besides the devices being too cheap...
Look at one node: when one OSD fails, the other one on the same node has to take over the data from the dead one. It cannot be sent to another node because there are already copies...
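To see how much headroom each OSD has for that scenario, a quick look at the per-OSD fill levels helps (standard Ceph commands; the warning thresholds depend on your configuration):

# fill level and PG count per OSD, grouped by host
ceph osd df tree
# overall state plus any nearfull/backfillfull warnings
ceph health detail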
For once, AI is right :)
Any consumer drive will have low Ceph performance due to RocksDB and sync writes, but those drives in particular are terrible for anything but PC archiving purposes due to their small SLC cache and very slow QLC NAND...
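If you want to see the sync-write weakness for yourself before putting such a drive into Ceph, a small fio run makes it obvious. This is only a sketch - adjust the target path and size for your system; the pattern (4k, queue depth 1, O_SYNC) roughly resembles the small synchronous writes Ceph/RocksDB produces:

# single-threaded 4k sync writes against a scratch file
fio --name=synctest --filename=/path/to/testfile --size=1G \
    --ioengine=libaio --direct=1 --sync=1 \
    --rw=write --bs=4k --iodepth=1 --numjobs=1 \
    --runtime=60 --time_based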
While it's true that 3 nodes is the bare minimum for Ceph, losing a node and depending on the other two to pick up the slack would make me nervous. As a best practice, start with 5 nodes. With Ceph, more nodes/OSDs = more IOPS.
As has been...
Nice approach :-)
Even though you have redundant switches, I would recommend preparing a separate wire (not a virtual LAN; 1 GBit/s is sufficient) for one of your multiple corosync rings.
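If it helps, this is roughly what an additional link looks like in /etc/pve/corosync.conf - the addresses below are placeholders for a dedicated 1 GBit/s network, and config_version has to be incremented when you edit the file:

nodelist {
  node {
    name: pve1
    nodeid: 1
    ring0_addr: 192.168.10.11   # existing network
    ring1_addr: 10.99.99.11     # dedicated corosync wire
  }
  # same pattern for the remaining nodes
}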
Disclaimer: your approach is multiple levels above my...
It's not clear if you know the following page: https://pve.proxmox.com/wiki/Windows_2025_guest_best_practices - maybe some of those hints are helpful for you...
There is no routing within a single IP network. All traffic from VMa to VMb is forwarded inside the bridge in exactly the same way as in a physical switch.
To force a VM/LXC to send all packets to the router you would need to set up a /30...
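As a rough sketch of that idea (addresses invented for illustration): give the VM a /30 in which the router is the only other usable address, so every other destination is off-subnet and must go via the gateway. Inside the VM, e.g. in /etc/network/interfaces:

auto eth0
iface eth0 inet static
    address 192.0.2.2/30      # only .1 and .2 are usable in this subnet
    gateway 192.0.2.1         # the router/firewall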
Why not virtualize that idea?
Create some VMs. Choose a "simple" OS without many bells and whistles. Update them and install relevant software - while being connected to the "normal" LAN. Make snapshots/backups. Set up some bridges without a...
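Assuming the idea is bridges without any physical uplink (my reading of the truncated sentence), such an isolated test bridge on the PVE host would look roughly like this in /etc/network/interfaces:

# isolated bridge with no physical port - VMs attached to it can only reach each other
auto vmbr99
iface vmbr99 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0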
Have a read through https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/. One challenge with 3 nodes and 3/2 replication is that if you reboot or shut down one node you’re already at the minimum.
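You can check what your pools are actually set to with standard Ceph commands (the pool name below is just an example):

# list all pools with their replication settings
ceph osd pool ls detail
# or query one pool explicitly
ceph osd pool get mypool size
ceph osd pool get mypool min_size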
Proxmox...
Probably. Whether this is a good idea is quite a different story, though. First, a NAS and a workstation/gaming PC are actually quite different things, and some of their requirements contradict each other. For example, on a gaming PC or high-end...
My very first PBS used only rotating rust. At first it worked great. After putting a few TB of actual data on it I had exactly that experience: the listing of backups failed with a timeout. Trying it again immediately would succeed - because most part...
The first thing you could try is checking your BIOS version and updating it.
Is intel-microcode installed? If not: apt install intel-microcode
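A quick way to check and install it (on Debian-based systems the package may require the non-free-firmware component to be enabled; a reboot is needed so the new microcode is loaded early during boot):

# is the package already there?
dpkg -s intel-microcode
# if not, install it
apt update
apt install intel-microcode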
The logs would also be helpful - refer to UdoB's comment for that.
Is it? The relevant drivers for that virtual NIC are already baked in by default?
This very old page is irrelevant nowadays? https://pve.proxmox.com/wiki/Paravirtualized_Network_Drivers_for_Windows
Some minutes later: I've just started a...