Recent content by G0ldmember

  1. [SOLVED] Can't mount Hetzner Storage Box CIFS

    I would also recommend not using CIFS/SMB over the internet; rely on SSH instead. You can also SSH-mount the storage (sshfs).
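    A minimal sshfs mount might look like this (username, hostname, and mount point are placeholders; Hetzner Storage Boxes usually expose SSH on port 23, but check the docs):
    # sshfs -p 23 u123456@u123456.your-storagebox.de:/ /mnt/storagebox -o reconnect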
  2. ESP folder on one Proxmox node quite full

    No, I can't uninstall them, because they are not installed ;). Maybe they were installed in the past, but as mentioned above, I cannot even find an "rc" entry with dpkg -l for those kernel versions. apt autoremove does not remove those directories either. # apt purge pve-kernel-5.3.18-3-pve...
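    For reference, packages that were removed but not purged show up with an "rc" state, so a quick check could be:
    # dpkg -l | grep pve-kernel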
  3. ESP folder on one Proxmox node quite full

    Hi, I'm encountering errors on one node in the Proxmox cluster because every time there is a kernel update, it fails because the ESP partition is full. I have to mount it manually and delete an old kernel to make the update work. I was analyzing why this only occurs on one of the nodes and...
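    As a sketch of that manual cleanup (the device name and mount point are assumptions; on systemd-boot installs, proxmox-boot-tool clean can also remove stale kernels):
    # mount /dev/nvme0n1p2 /mnt/esp
    # ls /mnt/esp/EFI/proxmox
    # rm -r /mnt/esp/EFI/proxmox/5.3.18-3-pve
    # umount /mnt/esp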
  4. In case Proxmox breaks - making it work again.

    Hi, I always use RescueZilla. It does use CloneZilla "under the hood", but it has a nice wizard and is a bit easier to use. Depending on where your Proxmox is installed, RZ should show the corresponding disk, which you can then back up.
  5. In case Proxmox breaks - making it work again.

    Hi, well, what always works, if you can cope with a little downtime, are tools like Rescuezilla. Just boot from it, copy a complete system image somewhere (an external HDD, or over the network to a share somewhere), and then the whole thing can at any time be...
  6. Proxmox with WD Blue SN570 1TB NVMe SSD randomly causes I/O errors

    I'm a little bit lost again. I've set up a ThinkCentre Tiny M900 with a WD SN570 NVMe SSD and installed Proxmox. It runs without any problems for days, sometimes for weeks, and then suddenly the LXCs malfunction, as does the host system. SSH access is not possible anymore, nor is the...
  7. [SOLVED] Ceph tmp folders filling up /tmp on local storage

    Hi community, I noticed that over the last few months, Ceph has been filling the /tmp partition on storage "local" more and more. Currently there is more than 1.1T of data in there, all in ceph.XXXXXX folders containing cephfs_data subfolders. It wasn't this "bad" before. Is that a problem with...
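    To see how much each of those directories holds (path pattern taken from above):
    # du -sh /tmp/ceph.*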
  8. pverados segfault

    On 6.2.16-10-pve too - but indeed no functional consequences. Still, monitoring keeps complaining about all the "segfaults" in kern.log.
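    A quick way to count those entries, assuming the default log location:
    # grep -c segfault /var/log/kern.log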
  9. [SOLVED] Ceph showing 50% usage but all pools are empty

    Okay, found the solution here: https://stackoverflow.com/questions/68884564/how-to-expand-ceph-osd-on-lvm-volume
    # ceph-bluestore-tool bluefs-bdev-expand --path <osd path>
    That needs to be executed as well, it seems.
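    A concrete invocation might look like this (OSD id 0 and the default data path are assumptions; the OSD has to be stopped while the tool runs):
    # systemctl stop ceph-osd@0
    # ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
    # systemctl start ceph-osd@0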
  10. [SOLVED] Ceph showing 50% usage but all pools are empty

    I resized the OSDs from 100G to 200G each. Maybe it has to do with that? The cluster state actually looks healthy.
    # ceph osd df tree
    ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
    -1 0.29306 - 600 GiB...
  11. move vm to another host

    Have you checked that the destination host is reachable from the source host over the network? Maybe a firewall issue? I have also done a remote migration already and retrieved the fingerprint from the destination host with:
    # pvenode cert info --output-format json | jq -r '.[1]["fingerprint"]'
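    For a quick reachability test (assuming the default Proxmox API port 8006; the hostname is a placeholder):
    # nc -zv destination-host 8006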
  12. [SOLVED] Ceph showing 50% usage but all pools are empty

    I played around with a nested Proxmox instance and set up a Ceph cluster there with 3 nodes and 3 OSDs. ceph df shows 50% usage although all the pools are empty. Can I clean that up somehow?
    # ceph df
    --- RAW STORAGE ---
    CLASS SIZE AVAIL USED RAW USED %RAW USED
    hdd 600 GiB...
  13. pverados segfault

    Just out of curiosity: does this only affect the no-subscription repo, or also the pve-enterprise repo? The weird thing is, we have two clusters of 3 nodes each; for one of them the problem already existed in the logs with the older (.3) kernel, while for the other cluster it only appeared after...
  14. SMART errors with a new Crucial P2 M.2 SSD under Proxmox

    Hi, and thanks for the reply. Yes, I already figured this is probably a false positive. The firmware update is a bit odd - according to the smartd e-mail I have P2CR045, while the newest firmware advertised on Crucial's site for the P2 is P2CR033. I've asked them about it...
  15. SMART errors with a new Crucial P2 M.2 SSD under Proxmox

    Hello, I'm not sure whether the underlying Debian perhaps has problems reading the SMART values of the Crucial P2 SSD, or whether something is actually wrong here. Since installing the M.2 SSD, I've regularly been getting mails from Proxmox smartd: The following warning/error was...
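    To read the values directly and compare them with what smartd reports (the device name /dev/nvme0 is an assumption):
    # smartctl -a /dev/nvme0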
