I'm a little bit lost again. I've set up a ThinkCentre Tiny M900 with a WD SN570 NVMe SSD and installed Proxmox.
It used to run without any problems for days, sometimes weeks, but then suddenly the LXCs malfunction, and so does the host system. SSH access is no longer possible, nor is the...
Hi community,
I noticed that over the last few months, Ceph has been filling the /tmp partition on the "local" storage more and more. Currently there is more than 1.1 TB of data in there, all in ceph.XXXXXX folders containing cephfs_data subfolders. It hasn't been this bad before. Is that a problem with...
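Before deleting anything, it may help to quantify which of those ceph.XXXXXX scratch folders are actually eating the space. A minimal sketch (the function name `ceph_tmp_usage` and the `/tmp` default are assumptions based on the post, not an official tool):

```python
# Sketch: report how much space ceph.* scratch folders occupy under /tmp.
# The "ceph.*" glob pattern follows the folder names described in the post.
from pathlib import Path


def ceph_tmp_usage(tmp_dir="/tmp"):
    """Return {folder_path: total_bytes} for every ceph.* directory in tmp_dir."""
    usage = {}
    for entry in Path(tmp_dir).glob("ceph.*"):
        if entry.is_dir():
            # Sum the sizes of all regular files below this directory.
            total = sum(f.stat().st_size for f in entry.rglob("*") if f.is_file())
            usage[str(entry)] = total
    return usage


if __name__ == "__main__":
    for folder, size in sorted(ceph_tmp_usage().items()):
        print(f"{size / 1024**3:8.2f} GiB  {folder}")
```

This is roughly equivalent to `du -sh /tmp/ceph.*`, but returns the numbers as a dict so they can be sorted or thresholded before deciding what is safe to remove.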
I played around with a nested Proxmox instance and set up a Ceph cluster there with 3 nodes and 3 OSDs.
ceph df shows 50% usage although all the pools are empty.
Can I clean that up somehow?
# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 600 GiB...
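To compare that raw figure against what the pools actually hold, one option is to parse the RAW STORAGE section of `ceph df` programmatically. A sketch under assumptions: the sample text below stands in for real output (the original is truncated), and `parse_raw_storage` is a hypothetical helper, not a Ceph API:

```python
# Sketch: parse the RAW STORAGE table of `ceph df` into bytes, so raw
# usage can be compared against per-pool usage. Sample values assumed.
UNITS = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}


def _to_bytes(num, unit):
    return float(num) * UNITS[unit]


def parse_raw_storage(text):
    """Parse rows like 'hdd  600 GiB  300 GiB  300 GiB  300 GiB  50.00'.

    Returns {class_or_TOTAL: {"size", "avail", "used", "raw_used", "pct"}}.
    """
    rows = {}
    for line in text.splitlines():
        t = line.split()
        # Data rows have 10 tokens: class, then four "<num> <unit>" pairs, then pct.
        if len(t) == 10 and t[1].replace(".", "").isdigit():
            rows[t[0]] = {
                "size": _to_bytes(t[1], t[2]),
                "avail": _to_bytes(t[3], t[4]),
                "used": _to_bytes(t[5], t[6]),
                "raw_used": _to_bytes(t[7], t[8]),
                "pct": float(t[9]),
            }
    return rows


sample = """\
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    600 GiB  300 GiB  300 GiB  300 GiB       50.00
TOTAL  600 GiB  300 GiB  300 GiB  300 GiB       50.00
"""

if __name__ == "__main__":
    print(parse_raw_storage(sample)["hdd"]["pct"])  # prints 50.0
```

If the per-pool STORED/USED columns are near zero while RAW USED is large, the discrepancy lives below the pool layer (e.g. OSD-level allocation), which narrows down where to look for cleanup.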
Hello,
I'm not sure whether the underlying Debian has trouble reading the SMART values of the Crucial P2 SSD, or whether something is actually wrong here. Since installing the M.2 SSD, I've been getting regular mails from Proxmox smartd:
The following warning/error was...
We set up Proxmox with ZFS for the system disk, and until now everything worked smoothly, including reboots. After the last kernel update, the node rebooted automatically and now always ends up at the GRUB prompt. Attempts to boot into rescue mode fail...
Hi community,
we have a server cluster consisting of 3 nodes, each with an EPYC 7402P 24-core CPU, 6 Intel enterprise SSDs (4620), and 256 GB RAM. We also have a 10 Gbit/s NIC for Ceph.
SSD performance alone is fine, jumbo frames are enabled, and iperf also gives reasonable results in terms of...
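One thing worth checking before blaming Ceph itself is what the 10 Gbit/s link can deliver at all under replication. A back-of-the-envelope sketch (the replication-forwarding model and the 0.9 efficiency factor are rough assumptions, not measurements):

```python
# Rough upper bound on client write throughput per node: with size=3
# replication, the primary OSD must forward (replicas - 1) copies over
# the cluster network, so the 10 Gbit/s link caps throughput well below
# raw line rate. All parameters here are illustrative assumptions.
def max_client_write_mb_s(link_gbit=10, replicas=3, protocol_efficiency=0.9):
    """Crude upper bound on sustained client write throughput in MB/s."""
    link_mb_s = link_gbit * 1000 / 8          # 10 Gbit/s ~= 1250 MB/s
    usable = link_mb_s * protocol_efficiency  # headers, acks, overhead
    # Outbound replication traffic dominates on the primary's link.
    return usable / (replicas - 1)


if __name__ == "__main__":
    print(f"~{max_client_write_mb_s():.0f} MB/s upper bound")
```

If the measured Ceph write throughput is far below even this pessimistic bound, the bottleneck is likely elsewhere (CPU, OSD config, or latency) rather than the NIC.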