Search results

  1.

    How to recover from HW failure on a 3 nodes cluster - VMs are hidden

    Thanks for the heads up. It works. Is there a way to handle the loss of a node from the UI?
  2.

    How to recover from HW failure on a 3 nodes cluster - VMs are hidden

    Hi, How should we recover from a lost node due to HW failure on a 3-node cluster? I can't see the VMs anymore, and have no direct way to restart them on the remaining nodes. My best guess is that I should remove the failed node from the cluster, but unfortunately the cluster view does not really help...
  3.

    Grub Menu not working over serial console

    GRUB over serial0: My console is of type Serial, which is great. But when I try to access the GRUB menu, it's just frozen, and I can't do anything other than boot the default entry. Is it possible to select a GRUB entry other than the default when Display is set to Serial terminal 0 (serial0)? Thanks,
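
    (Not an answer from this thread, but for reference: the usual way to make the GRUB menu usable over a serial console is to enable GRUB's serial terminal in /etc/default/grub; the unit and speed below are typical values, adjust to your setup.)
    GRUB_TERMINAL="console serial"
    GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
    # then regenerate the boot config
    update-grub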
  4.

    NVME SSD durability under Proxmox

    Hi, I'm very satisfied with the SN850x, definitely a good value. But it's probably not the only one. Check around, check comparisons online in videos or articles... The SN850x 4TB has 4 GB of DRAM (for the read cache), a pSLC write cache (its size is usually not announced), and is TLC based. It also exists in...
  5.

    Slow 10Gb network

    Install iperf3 on all the nodes: apt update apt install iperf3 On one of the nodes - pve2 in this case: root@pve2:~# iperf3 -s From another node - pve1 in this case: root@pve1:~# iperf3 -c pve2 Connecting to host pve2, port 5201 [ 5] local 192.168.68.5 port 57662 connected to 192.168.68.130...
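
    (A minimal sketch of the same test, using the node names from the snippet; the -R and -P flags are standard iperf3 options, not from the original post.)
    # on the receiving node, start the server
    root@pve2:~# iperf3 -s
    # from the sending node, test the forward direction
    root@pve1:~# iperf3 -c pve2
    # -R reverses the roles to measure the other direction
    root@pve1:~# iperf3 -c pve2 -R
    # -P 4 runs four parallel streams, useful when a single stream can't fill a 10Gb link
    root@pve1:~# iperf3 -c pve2 -P 4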
  6.

    Slow 10Gb network

    MTU won't limit you that badly. I even abandoned playing with it, and stick to the default 1500 bytes because it generated terrible situations, difficult to debug, for close to no benefit. However, as @jamiemoles proposed, definitely check your network interfaces, and make sure the "Errors"...
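
    (Not from the post: one way to read those error counters from the shell, assuming the standard iproute2 and ethtool tools; the interface name is illustrative.)
    # per-interface RX/TX errors and drops
    ip -s link show enp1s0
    # detailed NIC statistics (CRC errors, etc., depending on the driver)
    ethtool -S enp1s0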
  7.

    NVME SSD durability under Proxmox

    Hi, You have to look at the endurance specs of your SSDs, especially DWPD ((full) Disk (capacity) Writes Per Day) or TBW (TeraBytes Written). On a low budget, I preferred to take only one SSD for everything, but opted for a quite high-end consumer one with 2400 TBW ...
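
    (A quick worked example of how the two specs relate, assuming the usual 5-year warranty period:
    DWPD ≈ TBW / (capacity in TB × 365 × warranty years),
    so a 4 TB drive rated at 2400 TBW gives 2400 / (4 × 365 × 5) ≈ 0.33 DWPD.)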
  8.

    7.4-17 with memtest86 to 8.1.4 upgrade hangs at configure

    Another hang a bit later makes me think something might be wrong with my system: Setting up proxmox-kernel-6.5.11-8-pve-signed (6.5.11-8) ... Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 6.5.11-8-pve /boot/vmlinuz-6.5.11-8-pve update-initramfs...
  9.

    7.4-17 with memtest86 to 8.1.4 upgrade hangs at configure

    In-place upgrade from 7.4-17 (with memtest86 installed) to 8.1.4 hangs at: Unpacking zfs-zed (2.2.2-pve1) over (2.1.14-pve1) ... Setting up memtest86+ (6.10-4) ... Installing new version of config file /etc/grub.d/20_memtest86+ ... root@pve3:~# ps -ef | rg '\bmemtest' root 412929 412928 0...
  10.

    What is the best way to mount a CephFS inside LXC

    shared=1 might be the magic word. I'll try when I'm back from holidays. Thanks.
  11.

    What is the best way to mount a CephFS inside LXC

    This seems to be a recurring problem, and I did not find a helpful solution so far. There are hundreds of cases where you might need to mount CephFS inside your LXCs. In my case, I need to store PostgreSQL WAL archives for PITR. Performance is not important. I have a 3-node PVE cluster. I tested: Standard...
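
    (Tying this to the shared=1 hint in the previous result: a hedged sketch of a host bind mount pushed into a container. The container ID and paths are illustrative, and the CephFS must already be mounted on every node, e.g. as a PVE storage.)
    # bind-mount a host path into container 101 and mark it as available on all nodes
    pct set 101 -mp0 /mnt/pve/cephfs/wal-archive,mp=/var/lib/postgresql/wal-archive,shared=1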
  12.

    Timeout loading datastore content

    Hi, Same problem: I have an (rclone-encrypted) datastore on S3 cloud storage. The dumps are working, but I get a timeout when listing the backups on that datastore through the GUI. The only way is to list them via a command-line ls on the host. The initial listing can take up to a minute, and...
  13.

    [Fixed][ceph][mgr][snap_schedule] : sqlite3.OperationalError: unable to open database file

    Marking this thread as "Fixed", since probably related problems have been discussed here: - https://www.spinics.net/lists/ceph-users/msg74696.html - https://tracker.ceph.com/issues/57851 And the proposed fix also fixes the above problem ...
  14.

    [Fixed][ceph][mgr][snap_schedule] : sqlite3.OperationalError: unable to open database file

    On every reboot or power loss, my Ceph managers crash, and the CephFS snap_schedule has not been working since 2023-02-05-18. The ceph mgr starts anyway, and generates a crash report, turning the Ceph cluster into HEALTH_WARN status. I have the issue on every node (3-node cluster). Probably since...
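
    (Not from the original post: the crash reports that keep the cluster in HEALTH_WARN can be listed and acknowledged with Ceph's crash module.)
    ceph crash ls
    ceph crash archive-all    # acknowledge all reports; HEALTH_OK returns if nothing else is wrong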
  15.

    [pve 7.3-4] invalid privilege 'Sys.audit' (500)

    Thanks for the quick reply Fiona. While waiting for the bugfix in 7.3-5, the workaround is easy: use the root user with PAM authentication (maybe another user with PAM auth works too?), then use the command line ceph osd status to get the OSD id, and use systemctl, e.g. for OSD 0: systemctl restart ceph-osd@0
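
    (The same workaround spelled out, with an illustrative OSD id; stop/start cover the GUI action that triggers the error, restart is what the post shows.)
    ceph osd status
    systemctl stop ceph-osd@2
    systemctl start ceph-osd@2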
  16.

    [pve 7.3-4] invalid privilege 'Sys.audit' (500)

    Hi, I'm getting the invalid privilege 'Sys.audit' (500) popup error message when I want to Stop an OSD with a "Proxmox VE authentication server" user that has Administrator as its only role. The problem does not show when using the root user with PAM authentication. What is strange is that...
  17.

    cephfs mount inside LXC

    Did you find any solution? Using a mount point (mp0) (only via the command line) does the job, BUT it's then impossible to migrate the VM to another host. Using the kernel driver or the FUSE driver both fail because of missing modules... 1. Did you find any solution to mount a CephFS inside an...
  18.

    [Solved] HTML table broken for vzdump backup status email reports (PVE 7.2-14)

    Thanks. Same conclusion. I'll mark it as solved for now. I'll try sending them to another mail client or to Gmail.
  19.

    [Solved] HTML table broken for vzdump backup status email reports (PVE 7.2-14)

    Sure ... email headers ... This is a multi-part message in MIME format. ------_=_NextPart_001_16685657933653369 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 8bit VMID NAME STATUS TIME SIZE FILENAME 120 dns1...