Search results

  1. Collapsing the Disks Tree

    Is there a setting somewhere that tells the 'Disks' section to show the drive tree collapsed? I rarely need to see every disk expanded and it would be nice to start with the tree collapsed. Whenever I open the 'Disks' section, I see this... But I would like to see it like this without...
  2. Extend LVM of Ceph DB/WAL Disk

    This was my "gut" thinking as well. If you lose one drive in a DB LVM, you lose the entire DB and hence, you lose all the data on the OSD drives.
  3. Extend LVM of Ceph DB/WAL Disk

    I have a 3 node Ceph cluster running Proxmox 7.2. Each node has 4 x HDD OSDs and the 4 OSDs share an Intel Enterprise SSD for the Ceph OSD database (DB/WAL) on each node. I am going to be adding a 5th OSD HDD to each node and also add an additional Intel Enterprise SSD on each node for use with...
  4. FEATURE REQUEST: Please consider adding CONFIG_USB_SERIAL_CONSOLE as a built in to the kernel

    Unfortunately, netconsole appears to suffer the same issue as USB SERIAL CONSOLE. It's built as a module in the kernel (CONFIG_NETCONSOLE=m), so you would not see anything until after the kernel modules are loaded, missing GRUB and any issues early in the boot process.
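
The snippet above hinges on the difference between a kernel option built as a module (=m) and built in (=y). A minimal sketch of how to check this, assuming a Debian-derived system such as Proxmox VE that ships the running kernel's config in /boot (that path is an assumption and may differ on other distros):

```shell
# Look up how the running kernel was configured for these options.
cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ]; then
    grep -E '^CONFIG_(NETCONSOLE|USB_SERIAL)' "$cfg" || echo "options not set in $cfg"
else
    echo "no kernel config found at $cfg"
fi
# "=m" means built as a module: the driver only exists after modules load,
# so GRUB and early-boot messages are lost. "=y" builds it into the kernel
# image so it is available from the very start of boot.
```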
  5. [SOLVED] Cluster-Wide Firewall Rules Not Working on One Node

    I just figured this issue out. For some reason, on the node where the firewall rules were set but not working, the pve-firewall service was enabled, but not running. I don't know how it got into this enabled-but-not-running state, but once I started it on the CLI, all is well and the rules are working...
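
The distinction in this snippet is between a systemd unit being enabled (will start at boot) and active (running now); the two states are independent. A hedged sketch of checking both, using pve-firewall as the unit name from the post:

```shell
# "enabled" and "active" are independent systemd states; a unit can be
# enabled at boot yet not currently running, as described above.
unit="pve-firewall"
enabled=$(systemctl is-enabled "$unit" 2>/dev/null || echo "unknown")
active=$(systemctl is-active "$unit" 2>/dev/null || echo "unknown")
echo "$unit: enabled=$enabled, active=$active"
# If it reports enabled but inactive, start it (as the poster did):
#   systemctl start pve-firewall
# and confirm the rules are loaded:
#   pve-firewall status
```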
  6. FEATURE REQUEST: Please consider adding CONFIG_USB_SERIAL_CONSOLE as a built in to the kernel

    I'd like to humbly ask that CONFIG_USB_SERIAL_CONSOLE=y be added to the kernel config so that those who have hardware that lacks a real serial port can attach a USB serial device and see startup messages. Debian denied this request a few years back as "too unusual" of a use case, but I feel that...
  7. [SOLVED] Cluster-Wide Firewall Rules Not Working on One Node

    pve01: here (This is the node that doesn't block the 70, 80 subnet) pve02: here pve03: here mon01: here mon02: here mon03: here To my uneducated eye, it seems that iptables-save is empty on pve01, but it is not on the other nodes.
  8. [SOLVED] Cluster-Wide Firewall Rules Not Working on One Node

    Thank you! It looks like I hit the limit on characters, so I've attached the output as txt files.
  9. [SOLVED] Cluster-Wide Firewall Rules Not Working on One Node

    I have an odd issue with my cluster-wide firewall that I can't seem to figure out. I want to limit Proxmox GUI access to one network (192.168.10.0/24) on all nodes. I've been able to accomplish this on 5 of my 6 nodes, but don't understand why I can't make it work on the final node. I have 6...
  10. Empty graphs

    EDIT: I see there is a bug report on this. Patiently waiting for it to be resolved. :p I have a fresh install of 2.1 on real hardware. After install, I updated packages and added an old PBS VM as a remote and performed a sync job to keep its data. I am actively running a backup job now and I'm...
  11. dmesg shows many: fuse: Bad value for 'source'

    I have had an issue with this as well for some time. It seems harmless, but quite annoying. [ 102.228065] No source specified [ 102.237489] fuse: Bad value for 'source' Happens until all Ceph volumes come online for me. Usually occurs for about 150-180 seconds after power up/reboot. Here are my...
  12. [SOLVED] Dedicated Ceph Monitor Nodes

    @r.jochum Which part? Joined all 6 boxes into one cluster or separated the monitors from the 3 computers that will host VMs, containers and Ceph storage? I joined all 6 together because I can then administer from a single interface. I chose to keep everything on Proxmox to ease in admin. It...
  13. [SOLVED] Dedicated Ceph Monitor Nodes

    @Alwin Thanks for that. What I ended up doing was installing Proxmox 6.2 on the 3 new computers, joined them to my Proxmox cluster and set up Ceph monitors on each of the 3 new computers. I then destroyed the monitor instances on my other cluster computers and allowed the new computers to take...
  14. [SOLVED] Dedicated Ceph Monitor Nodes

    Ok, so I'm a little confused on how to proceed. I have installed, updated and configured Proxmox 6.2 on 3 new computers. Currently, each is its own standalone node. If I want these 3 new nodes to only be Ceph Monitors for my cluster (cluster "A"), how do I proceed? Do these 3 new nodes need to...
  15. [SOLVED] Dedicated Ceph Monitor Nodes

    Thank you @Alwin. I was thinking it would be "overkill" because I would not be using any other functionality of Proxmox other than Ceph monitoring. For the ease of setup I'll give it a shot though. :) Thank you.
  16. [SOLVED] Dedicated Ceph Monitor Nodes

    Hello all, I'm running a 3 node Proxmox 6.2 cluster at home with Ceph Nautilus (14.2.10). Currently, each Proxmox node also acts as a Ceph Monitor node as well. I would like to separate the Ceph monitors from my 3 Proxmox nodes, running the 3 Ceph monitors on 3 dedicated computers in my...
  17. [SOLVED] Proxmox VE 6.0: Ceph Nautilus Extraneous Monitors?

    I removed these old directories and my problem has been solved. Thank you!
  18. [SOLVED] Proxmox VE 6.0: Ceph Nautilus Extraneous Monitors?

    mihanson@pve01:~$ ls -lah /etc/systemd/system/ceph-mon.target.wants/ total 0 lrwxrwxrwx 1 root root 37 May 3 19:26 ceph-mon@a.service -> /lib/systemd/system/ceph-mon@.service mihanson@pve01:~$ sudo ls -lah /var/lib/ceph/mon/ [sudo] password for mihanson: total 16K drwxr-xr-x 4 ceph ceph 4.0K...
  19. [SOLVED] Proxmox VE 6.0: Ceph Nautilus Extraneous Monitors?

    I just double checked all 3 nodes and I only have the correct systemd services enabled for the monitors (ceph-mon@a.service on pve01; ceph-mon@b.service on pve02; ceph-mon@c.service on pve03). Any other ideas as to where else the extra monitors could be sourced from? I'm only seeing them on the...
  20. [SOLVED] Proxmox VE 6.0: New Ceph OSD, but GPT = No

    I'm starting to decode what I think may be the answer to my question. According to the ceph-disk docs, ceph-disk, which was used for OSD creation, is deprecated in favor of ceph-volume, which Proxmox uses to create LVMs out of raw devices. There are definite gaps in my knowledge here, but after...
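
The point of this snippet is that ceph-disk partitioned disks with GPT, while its replacement ceph-volume consumes the raw device as an LVM physical volume, which is why a new OSD shows no GPT partition table. A hedged sketch of inspecting the resulting layout; device names in the comments are placeholders:

```shell
# Read-only inspection of the LVM-backed OSDs that ceph-volume created.
if command -v ceph-volume >/dev/null 2>&1; then
    ceph-volume lvm list || true   # shows the logical volume backing each OSD
    have_cv=yes
else
    echo "ceph-volume not installed on this host"
    have_cv=no
fi
# Creation itself (DESTRUCTIVE; /dev/sdX is a placeholder) would be:
#   ceph-volume lvm create --data /dev/sdX
# On Proxmox, pveceph wraps the same machinery:
#   pveceph osd create /dev/sdX
```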
