Why are 8 TB of data stored in device_health_metrics?
Do you store RBD data in this pool? You should create a separate pool for RBD data.
https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy#Guest_images_are_stored_on_pool_device_health_metrics
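A hedged sketch of how you could check and split this (the pool name rbd-vms is just an example; --add_storages should also register the pool as a PVE storage, but verify the flags with pveceph help pool create on your version):

# Show per-pool usage to see where the 8 TB actually lives
ceph df

# Create a dedicated pool for guest images and register it as PVE storage
pveceph pool create rbd-vms --add_storages

After that you would move the disk images to the new storage and leave device_health_metrics to the MGR, as described in the wiki article.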
I am not a Ceph specialist, but: in an unmodified setup the "failure domain" is "host", so one node may fail without the cluster getting into trouble.
This leads to the conclusion that all OSDs of one node may be replaced at once.
Probably I would test...
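If you want to verify the failure domain before pulling disks, something like this should show it (assuming the default rule name replicated_rule):

# "type": "host" in the chooseleaf step means replicas never share a node,
# so all OSDs of one node can go down together
ceph osd crush rule dump replicated_rule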
Hello @parker0909
for each OSD you can use a separate device for "Block and WAL"
https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_osd_create
But I would never do that in a configuration like you described, as the shared device becomes a single point of failure...
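For completeness, a sketch of the syntax (device names and the 60 GiB DB size are placeholders; check pveceph help osd create for the exact options of your version):

# Data on /dev/sdb, RocksDB (and implicitly the WAL) on a shared NVMe
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_dev_size 60

# Or with an explicitly separate WAL device
pveceph osd create /dev/sdc --db_dev /dev/nvme0n1 --wal_dev /dev/nvme1n1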
This one in particular doesn't really make sense: Incus is the same kind of software as PVE (a management tool for containers and QEMU), but with far fewer features...
If you care about your data, buy second hand enterprise drives instead of consumer ones. The performance you are seeing is expected with those drives: once the drive SLC cache is full, writes are very slow.
Also, not sure if the...
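If you want to see the SLC-cache cliff yourself, a sustained write test with fio usually shows it (path and size are examples; never point fio at a device or file that holds data):

# Write far past the cache size; throughput collapses once the SLC cache is full
fio --name=sustained-write --filename=/mnt/test/fio.bin --size=64G \
    --rw=write --bs=1M --ioengine=libaio --iodepth=16 --direct=1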
AFAIK, there is no way to mirror NVMe at the hardware level, because there is no NVMe controller (no "RAID controller" like for SATA/SAS).
Each NVMe drive is its own PCI device.
So you need to pass through the two disks and do software RAID inside Windows...
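Roughly like this (the VM ID and PCI addresses are examples, look yours up with lspci):

# Find the PCI addresses of the two NVMe drives
lspci -nn | grep -i nvme

# Pass both drives through to VM 100
qm set 100 -hostpci0 0000:01:00.0
qm set 100 -hostpci1 0000:02:00.0

Inside Windows you can then mirror them with Storage Spaces or dynamic disks.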
Stick with ZFS!
Ceph is great if you have several nodes. If you use it "local only", it probably makes no sense. https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/
Pro ZFS...
Hello elelayan! Ceph 17.2.8 is now available for PVE 8 in the no-subscription repo.
You probably know already, but keep in mind that Ceph 17 has not been maintained since January 2025, so please consider upgrading to Ceph 18 or 19. The wiki...
/etc/ceph/ceph.conf is a symlink to /etc/pve/ceph.conf on Proxmox nodes.
/etc/pve is the FUSE-mounted clustered Proxmox config database.
You should check why ceph.conf is not available.
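A quick way to check, assuming a standard setup:

# The symlink should point into the clustered config filesystem
ls -l /etc/ceph/ceph.conf
ls -l /etc/pve/ceph.conf

# If /etc/pve is empty, pmxcfs is not mounted; check the cluster service
systemctl status pve-cluster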
BTW: do not run with an even number of MONs. Add one (less...
Good morning!
I went through all the Ethernet links again. And lo and behold, one is running at 1000 Mb/s instead of 25000 Mb/s.
Unfortunately the cable swap can only happen tomorrow. But I am confident, this is the most promising lead so far...
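For anyone finding this later, this is how you can spot such a link quickly (interface names are examples):

# Print the negotiated speed of every Ceph-facing interface
for nic in ens1f0 ens1f1; do
    echo -n "$nic: "; ethtool "$nic" | grep Speed
done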
Hi, that doesn't look unusual at all.
When you write locally to your NVMe, that is of course always fast.
When you use Ceph, there is a daemon that has to accept the write and then send it over the network to the other NVMe...
The target ratio of a pool is for the balancer and does not influence the total capacity calculation.
You should lower the nearfull_ratio from the default 0.85 to 0.67:
ceph osd set-nearfull-ratio 0.67
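You can check the currently active values with:

# Shows full_ratio, backfillfull_ratio and nearfull_ratio
ceph osd dump | grep ratio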
You only have to get a product certified when nobody can look into its source code (closed source).
With open source you don't need certification, and it would not be easy anyway, since many projects do not have a single large company behind them...