No, I can't uninstall them, because they are not installed ;).
Maybe they were installed in the past, but as mentioned above, I cannot even find an "rc" entry with dpkg -l for those kernel versions.
apt autoremove does not remove those directories either.
# apt purge pve-kernel-5.3.18-3-pve...
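A minimal manual cleanup sketch, assuming the leftovers are the module directories under /lib/modules; since no package owns them, apt/dpkg won't touch them, but double-check with dpkg -S before deleting anything:
# dpkg -S /lib/modules/5.3.18-3-pve
(should report that no package owns the path)
# rm -rf /lib/modules/5.3.18-3-pve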
Hi,
I'm encountering errors on one node in the Proxmox cluster because every time there is a kernel update, it fails: the ESP partition is full. I have to manually mount it and delete one old kernel to make the update work.
I was analyzing why this only happens on one of the nodes and...
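If the node boots via proxmox-boot-tool, the manual workaround looks roughly like this (the kernel version is a placeholder; pick an old entry from the list):
# proxmox-boot-tool kernel list
(shows which kernels are synced to the ESPs)
# proxmox-boot-tool kernel remove 5.3.18-3-pve
# proxmox-boot-tool refresh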
Hi,
I always use RescueZilla. Under the hood it also uses Clonezilla, but it has a nice wizard and is a bit easier to use. Depending on where your Proxmox is installed, RZ should show the corresponding disk, which you can then back up.
Hi,
what always works, if you can tolerate a small downtime, are tools like Rescuezilla. Simply boot from it, copy a complete system image somewhere (an external HDD, or over the network to a share somewhere), and the whole thing can then, at any time, be...
I'm a little bit lost again, I've set up a ThinkCentre Tiny M900 with a WD SN570 NVMe SSD and installed Proxmox.
It usually runs without any problems for days, sometimes for weeks, and then suddenly the LXCs malfunction, as does the host system. SSH access is not possible anymore, nor is the...
Hi community,
I noticed that over the last few months, Ceph has been filling the /tmp partition on the "local" storage more and more. Currently there is more than 1.1 TB of data in there, all in ceph.XXXXXX folders containing cephfs_data subfolders. It hasn't been this bad before. Is that a problem with...
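A quick way to see which of those scratch folders are growing (assuming they sit directly in /tmp, as the names above suggest):
# du -sh /tmp/ceph.* | sort -h
(per-folder usage, largest last)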
Okay, found the solution here: https://stackoverflow.com/questions/68884564/how-to-expand-ceph-osd-on-lvm-volume
ceph-bluestore-tool bluefs-bdev-expand --path <osd path>
It seems that needs to be executed as well.
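For reference, the whole resize then looks roughly like this; a sketch assuming an LVM-backed OSD with id 0 (the VG/LV name and target size are placeholders, check lvs for yours):
# systemctl stop ceph-osd@0
# lvextend -L 200G /dev/ceph-vg/osd-0
# ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
# systemctl start ceph-osd@0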
I resized the OSDs from 100G to 200G each. Maybe it has to do with that? The cluster state actually looks healthy.
# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 0.29306 - 600 GiB...
Have you checked that the destination host is reachable from the source host over the network? Maybe a firewall issue?
I also did a remote migration already and retrieved the fingerprint from the destination host with
pvenode cert info --output-format json | jq -r '.[1]["fingerprint"]'
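That fingerprint then goes into the target endpoint of the migration call; a sketch along these lines (VMID, host, token, bridge and storage are placeholders):
# qm remote-migrate 100 100 'host=192.0.2.10,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<fingerprint>' --target-bridge vmbr0 --target-storage local-lvm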
I played around with a nested Proxmox instance and set up a Ceph cluster there with 3 nodes and 3 OSDs.
ceph df shows 50% usage although all the pools are empty.
Can I clean that up somehow?
# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 600 GiB...
Just out of curiosity: does this only affect the no-subscription repo or also the pve-enterprise repo?
The weird thing is, we have 2 clusters of 3 nodes each; for one of them the problem also showed up in the logs with the older (.3) kernel, while for the other cluster it only appeared after...
Hi, and thanks for the reply. Yes, I suspected this is probably a false positive. The firmware update is a bit odd: according to the e-mail from smartd I have P2CR045, while on Crucial's site the latest advertised firmware for the P2 is P2CR033. I've asked them about it...
Hello,
I'm not sure whether the underlying Debian might have problems reading the SMART values of the Crucial P2 SSD, or whether something is actually wrong here. Since installing the M.2 SSD, I have been regularly getting mails from Proxmox smartd:
The following warning/error was...
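To cross-check what smartd reports, reading the drive directly with smartmontools helps (the device node is a placeholder):
# smartctl -i /dev/nvme0
(the "Firmware Version" line should show the P2CR045 mentioned above)
# smartctl -a /dev/nvme0
(full SMART/health output to compare against the mails)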