Search results

  1.

    Lost GRUB menu on Serial Console after PBS 4.0 / Trixie Upgrade

    After upgrading PBS from 3 to 4 via in-place upgrade, I no longer see the GRUB boot menu in my serial console. This is my grub config: $ cat /etc/default/grub # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this...
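The snippet is cut off, but for reference, a minimal sketch of the /etc/default/grub settings that expose the GRUB menu on a serial console (the port, unit, and speed here are assumptions; adjust to your hardware and run update-grub afterwards):

```shell
# /etc/default/grub (excerpt) -- assumed serial parameters: ttyS0 at 115200 baud
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
# Show the menu on both the VGA console and the serial port
GRUB_TERMINAL="console serial"
# GRUB's own serial setup; --unit=0 corresponds to ttyS0
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
```

If the in-place upgrade to Trixie replaced or reset this file, these lines may simply have been lost, which would explain the missing menu.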
  2.

    GRUB Hangs When No Monitor Connected or Serial Console Active

    I have a peculiar issue when my headless PBS boots. I do not run the PBS 24x7 as I only need to back up/verify data in small time windows, so I use a vzdump hook script on my PVE to start up and shut down my PBS when needed. If I have a monitor connected the PBS boots without issue. If I have a...
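A vzdump hook script like the one described might look roughly like this (the MAC address, hostname, and the wakeonlan/ssh mechanism are all assumptions; PVE calls the script configured via `script:` in /etc/vzdump.conf with the phase as its first argument):

```shell
#!/bin/sh
# Hypothetical vzdump hook: wake the PBS box before the backup job
# and power it off afterwards. All names/addresses are assumptions.
PBS_MAC="aa:bb:cc:dd:ee:ff"   # assumed MAC of the PBS machine
PBS_HOST="pbs.example.lan"    # assumed hostname of the PBS machine

pbs_hook() {
    case "$1" in
        job-start)
            echo "waking PBS"
            # Setting DRY_RUN=1 prints the command instead of running it
            ${DRY_RUN:+echo} wakeonlan "$PBS_MAC"
            ;;
        job-end|job-abort)
            echo "shutting down PBS"
            ${DRY_RUN:+echo} ssh "root@$PBS_HOST" poweroff
            ;;
    esac
}

pbs_hook "$1"
```

In practice you would also want to wait for the PBS API to answer after the wake-up before the job proceeds; that part depends on your setup and is omitted here.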
  3.

    ZFS pool disk unavailable on every reboot

    For me, it turned out that I had a failing SSD. Once I replaced it I have had no more issues.
  4.

    ZFS pool disk unavailable on every reboot

    Unfortunately, I don't have an answer for you on this, but I wanted to put it out there that I have this exact same issue with my Proxmox Backup Server. My ZFS pool is 12 x 1TB SSD (mirrored) and I get this same email/errors on each boot.
  5.

    Collapsing the Disks Tree

    Is there a setting somewhere that tells the 'Disks' section to show the drive tree collapsed? I rarely need to see every disk expanded and it would be nice to start with the tree collapsed. Whenever I open the 'Disks' section, I see this... But I would like to see it like this without...
  6.

    Extend LVM of Ceph DB/WAL Disk

    This was my "gut" thinking as well. If you lose one drive in a DB LVM, you lose the entire DB and hence, you lose all the data on the OSD drives.
  7.

    Extend LVM of Ceph DB/WAL Disk

    I have a 3 node Ceph cluster running Proxmox 7.2. Each node has 4 x HDD OSDs and the 4 OSDs share an Intel Enterprise SSD for the Ceph OSD database (DB/WAL) on each node. I am going to be adding a 5th OSD HDD to each node and also add an additional Intel Enterprise SSD on each node for use with...
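Extending an OSD's DB LV onto a new SSD and letting BlueStore use the space typically looks like the following sketch (device names, VG/LV names, sizes, and the OSD id are all assumptions; stop the OSD first):

```shell
# Add the new SSD to the existing DB volume group (names assumed)
pvcreate /dev/sdf
vgextend ceph-db-vg /dev/sdf

# Grow the DB LV for OSD 12, then let BlueFS claim the new space
systemctl stop ceph-osd@12
lvextend -L +60G /dev/ceph-db-vg/osd-db-12
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-12
systemctl start ceph-osd@12
```

Do this one OSD at a time and let the cluster return to HEALTH_OK in between.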
  8.

    FEATURE REQUEST: Please consider adding CONFIG_USB_SERIAL_CONSOLE as a built in to the kernel

    Unfortunately netconsole appears to suffer the same issue as USB serial console: it's built as a module in the kernel (CONFIG_NETCONSOLE=m), so you would not see anything until after the kernel modules are loaded, missing GRUB and any issues early in the boot process.
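For anyone who still wants kernel messages over the network once modules are available, a netconsole sketch (all addresses, ports, the interface name, and the receiver MAC are assumptions):

```shell
# Load netconsole manually. Parameter syntax:
#   src-port@src-ip/dev,dst-port@dst-ip/dst-mac
modprobe netconsole netconsole=6665@192.168.1.10/ens18,6666@192.168.1.20/aa:bb:cc:dd:ee:ff

# On the receiving machine, listen with netcat:
nc -u -l 6666
```

As the post says, this only starts once kernel modules load, so GRUB and early-boot output are still lost.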
  9.

    [SOLVED] Cluster-Wide Firewall Rules Not Working on One Node

    I just figured this issue out. For some reason, on the node where the firewall rules were set but not working, pve-firewall was enabled but not running. I don't know how it got into this enabled-but-not-running state, but once I started it from the CLI, all is well and the rules are working...
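If you hit the same state, a quick sketch of the checks and the fix (standard systemd and pve-firewall commands):

```shell
# Is the unit enabled but dead?
systemctl status pve-firewall

# PVE's own view of the firewall
pve-firewall status

# Start it now and make sure it comes up on boot
systemctl start pve-firewall
systemctl enable pve-firewall

# The generated PVEFW chains should now appear
iptables-save | grep -c PVEFW
```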
  10.

    FEATURE REQUEST: Please consider adding CONFIG_USB_SERIAL_CONSOLE as a built in to the kernel

    I'd like to humbly ask that CONFIG_USB_SERIAL_CONSOLE=y be added to the kernel config so that those who have hardware that lacks a real serial port can attach a USB serial device and see startup messages. Debian denied this request a few years back as "too unusual" a use case, but I feel that...
  11.

    [SOLVED] Cluster-Wide Firewall Rules Not Working on One Node

    pve01: here (This is the node that doesn't block the 70, 80 subnet) pve02: here pve03: here mon01: here mon02: here mon03: here To my uneducated eye, it seems that iptables-save is empty on pve01, but it is not on the other nodes.
  12.

    [SOLVED] Cluster-Wide Firewall Rules Not Working on One Node

    Thank you! It looks like I hit the limit on characters, so I've attached the output as txt files.
  13.

    [SOLVED] Cluster-Wide Firewall Rules Not Working on One Node

    I have an odd issue with my cluster-wide firewall that I can't seem to figure out. I want to limit Proxmox GUI access to one network (192.168.10.0/24) on all nodes. I've been able to accomplish this on 5 of my 6 nodes, but don't understand why I can't make it work on the final node. I have 6...
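The setup described would look something like the following in /etc/pve/firewall/cluster.fw (a sketch only; the thread does not show the actual rule set, and the subnet is taken from the post):

```shell
# /etc/pve/firewall/cluster.fw (sketch)
[OPTIONS]
enable: 1

[RULES]
# Allow the GUI (8006) and SSH only from the management network;
# the default input policy is DROP once the firewall is enabled
IN ACCEPT -source 192.168.10.0/24 -p tcp -dport 8006
IN ACCEPT -source 192.168.10.0/24 -p tcp -dport 22
```

A cluster-wide rule like this only takes effect on a node if that node's pve-firewall service is actually running, which turned out to be the problem in this thread.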
  14.

    Empty graphs

    EDIT: I see there is a bug report on this. Patiently waiting for it to be resolved. :p I have a fresh install of 2.1 on real hardware. After install, I updated packages and added an old PBS VM as a remote and performed a sync job to keep its data. I am actively running a backup job now and I'm...
  15.

    dmesg shows many: fuse: Bad value for 'source'

    I have had an issue with this as well for some time. It seems harmless, but quite annoying: [ 102.228065] No source specified [ 102.237489] fuse: Bad value for 'source' Happens until all Ceph volumes come online for me. Usually occurs for about 150-180 seconds after power up/reboot. Here are my...
  16.

    [SOLVED] Dedicated Ceph Monitor Nodes

    @r.jochum Which part? Joined all 6 boxes into one cluster, or separated the monitors from the 3 computers that will host VMs, containers and Ceph storage? I joined all 6 together because I can then administer from a single interface. I chose to keep everything on Proxmox to ease administration. It...
  17.

    [SOLVED] Dedicated Ceph Monitor Nodes

    @Alwin Thanks for that. What I ended up doing was installing Proxmox 6.2 on the 3 new computers, joined them to my Proxmox cluster and set up Ceph monitors on each of the 3 new computers. I then destroyed the monitor instances on my other cluster computers and allowed the new computers to take...
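The migration described above maps to a couple of standard pveceph commands (node names here are assumptions):

```shell
# On each new monitor-only node, after joining the PVE cluster
# and installing Ceph (pveceph install):
pveceph mon create

# On the old combined nodes, remove their monitors one at a time,
# waiting for quorum to recover in between:
pveceph mon destroy pve01
ceph -s   # confirm quorum is held by the new monitors
```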
  18.

    [SOLVED] Dedicated Ceph Monitor Nodes

    Ok, so I'm a little confused on how to proceed. I have installed, updated and configured Proxmox 6.2 on 3 new computers. Currently, each is its own standalone node. If I want these 3 new nodes to only be Ceph monitors for my cluster (cluster "A"), how do I proceed? Do these 3 new nodes need to...
  19.

    [SOLVED] Dedicated Ceph Monitor Nodes

    Thank you @Alwin. I was thinking it would be "overkill" because I would not be using any other functionality of Proxmox other than Ceph monitoring. For the ease of setup I'll give it a shot though. :) Thank you.
  20.

    [SOLVED] Dedicated Ceph Monitor Nodes

    Hello all, I'm running a 3 node Proxmox 6.2 cluster at home with Ceph Nautilus (14.2.10). Currently, each Proxmox node also acts as a Ceph Monitor node as well. I would like to separate the Ceph monitors from my 3 Proxmox nodes, running the 3 Ceph monitors on 3 dedicated computers in my...