No no no. You got your math wrong.
To match the failure tolerance of EC with k=6 and m=2 (two device failures) you need triple replication (three copies), which means a storage efficiency of 33% versus 75% for that EC profile. It is rarely necessary to go beyond 4 copies.
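To make the arithmetic concrete, here is a quick sketch (plain Python, nothing Ceph-specific) comparing usable-capacity ratios of erasure coding and replication:

```python
def ec_efficiency(k: int, m: int) -> float:
    """Usable fraction of raw capacity for an EC pool: k data chunks out of k+m total."""
    return k / (k + m)

def replica_efficiency(copies: int) -> float:
    """Usable fraction of raw capacity with n-way replication."""
    return 1 / copies

print(f"EC k=6,m=2: {ec_efficiency(6, 2):.0%}")    # 75%
print(f"3x replica: {replica_efficiency(3):.0%}")  # 33%
```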
"lower" and "higher" are subjective. Ceph achieves HA using raw capacity.
Suit yourself. This is not a recommended deployment. You are far better served by just having two SEPARATE VMs, each serving all those functions, without any Ceph at all...
The number of OSDs isn't relevant to a pool as long as it is larger than the minimum required by the CRUSH rule. For example, if you have an EC profile with k=8, m=2, you need a minimum of 10 OSDs DISTRIBUTED ACROSS 10 NODES, so 1 OSD per node...
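A minimal sketch of that sizing rule (plain Python; the one-chunk-per-node requirement corresponds to Ceph's default `crush-failure-domain=host`):

```python
def min_osds_for_ec(k: int, m: int, host_failure_domain: bool = True) -> tuple[int, int]:
    """An EC pool places k + m chunks; with a host failure domain each chunk
    must land on a different node, so you need at least k + m OSDs on k + m hosts."""
    chunks = k + m
    hosts = chunks if host_failure_domain else 1
    return chunks, hosts

osds, hosts = min_osds_for_ec(8, 2)
print(f"k=8, m=2 needs at least {osds} OSDs across {hosts} nodes")
```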
You can limit ARC: https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_limit_memory_usage
(While the ARC can shrink on demand, this mechanism is often too slow for a sudden memory request --> OOM)
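As a sketch of what the linked wiki page describes: the cap is set via the `zfs_arc_max` module option, given in bytes, in `/etc/modprobe.d/zfs.conf`. The 8 GiB figure below is just an example value, not a recommendation:

```python
# Compute the byte value for an example 8 GiB ARC cap and print the
# line that would go into /etc/modprobe.d/zfs.conf (see the PVE wiki).
arc_max_gib = 8  # example cap; size this to your workload
arc_max_bytes = arc_max_gib * 1024**3
print(f"options zfs zfs_arc_max={arc_max_bytes}")
# -> options zfs zfs_arc_max=8589934592
```

Remember to run `update-initramfs -u` afterwards if your root filesystem is on ZFS, as the wiki notes.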
@garfield2008 ,
Good, that confirms the cause.
What changed: presumably, during the reinstall/update, the OVS bridges or bonds inherited MTU 9000 from the NICs (OVS negotiates the MTU automatically based on the...
Very informative, thanks. Unfortunately no progress yet; as of earlier, Sunday's scrub was still running.
I cancelled it, since nothing was moving in the usage display, not even after fstrim (this time about 200 GB were freed).
Now...
Yes correct... I configured my SDN underlay network via ‘Fabrics’ using OSPF, and everything works perfectly. The traffic runs exactly over my dedicated VLAN into my switch fabric.
But my problem now is: when I attach a guest VM (with SNAT or...
3 x AMD EPYC 7713, 512GB RAM, 2x1TB SSD (RAID 1, OS), 5 x 3.84TB NVMe (Ceph)
Networking:
- 2 port embedded 1G NIC: 1 port for public internet access, 1 port for private network - both connected to switch ports acting as access ports to different...
Hey there,
I am currently running a WIN2019 Server on Proxmox.
On my tasklist is the UEFI certificate stuff ;-)
In one of the previous posts I see that for new instances the setting is applied.
But what about the existing ones?
Fiona stated that...
My Proxmox node reboots during backup once every month or two.
There are no errors in the kernel log or journal.
Sometimes there is a warning about memory pressure, but RAM is mostly occupied by ZFS.
Unfortunately, that usually doesn't happen with CTs. At least not before they finish.
Please don't. Snapshots are no more a replacement for a proper backup than RAID is. I always recommend both. Also have a look at cv4pve-autosnap.
Also take a look at the...
I think I have the solution.
I just limited the MTU on the VM to 1500 and, lo and behold, I can reach all PVE/PBS hosts with every protocol. So the question remains: what changed half a year ago?
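For what it's worth, a generic illustration (plain Python, fixed-size IPv4/TCP headers assumed, no options) of why an MTU mismatch often shows up as "some protocols work, others hang": small packets such as TCP handshakes fit through every hop, while bulk transfers fill the full segment size and get dropped at a hop that only passes 1500 bytes.

```python
IP_HDR, TCP_HDR = 20, 20  # IPv4 + TCP header sizes, no options

def tcp_mss(mtu: int) -> int:
    """Maximum TCP payload per segment for a given interface MTU."""
    return mtu - IP_HDR - TCP_HDR

print(tcp_mss(9000))  # 8960 -> too big for a 1500-byte hop, dropped
print(tcp_mss(1500))  # 1460 -> fits everywhere
```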
Hi there,
we are running a Proxmox PVE Ceph cluster. The current configuration is:
3 nodes, each with:
2 x 10 GBit/s LACP to 4 switches (Ceph traffic only)
2 x Intel Xeon E5-2640, 2.6 GHz
192 GB RAM
5 x Crucial SATA SSD CT2000MX500SSD1 on an HBA
But currently...
"lower" and "higher" are subjective. Ceph achieves HA using raw capacity.
suit yourself. this is not a recommended deployment. You are far better served by just having two SEPERATE VMs each serving all those functions without any ceph at all-...