Is there a setting somewhere that tells the 'Disks' section to show the drive tree collapsed? I rarely need to see every disk expanded and it would be nice to start with the tree collapsed.
Whenever I open the 'Disks' section, I see this...
But I would like to see it like this without...
I have a 3 node Ceph cluster running Proxmox 7.2. Each node has 4 x HDD OSDs, and the 4 OSDs share an Intel Enterprise SSD for the Ceph OSD database (DB/WAL) on each node. I am going to be adding a 5th OSD HDD to each node, along with an additional Intel Enterprise SSD on each node for use with...
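If it helps anyone with a similar layout, creating the new OSD with its DB/WAL on the second SSD should look roughly like this (a sketch; /dev/sde and /dev/nvme1n1 are placeholder device names for the new HDD and SSD):

# create the 5th OSD, pointing its RocksDB/WAL at the new enterprise SSD
pveceph osd create /dev/sde --db_dev /dev/nvme1n1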
Unfortunately, netconsole appears to suffer the same issue as the USB serial console. It's built as a module in the kernel (CONFIG_NETCONSOLE=m), so you would not see anything until after the kernel modules are loaded, missing GRUB and any issues early in the boot process.
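You can confirm how the running kernel was built without recompiling anything (a quick check, not Proxmox-specific):

# inspect the shipped kernel config for the netconsole option
grep CONFIG_NETCONSOLE /boot/config-$(uname -r)
# prints: CONFIG_NETCONSOLE=m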
I just figured this issue out. For some reason, on the node where the firewall rules were set but not working, pve-firewall was enabled but not running. I don't know how it got into this enabled-but-not-running state, but once I started it on the CLI, all is well and the rules are working...
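For anyone hitting the same thing, this is the check and the fix that worked for me (standard systemd commands):

systemctl status pve-firewall    # showed enabled but inactive (dead) on the broken node
systemctl start pve-firewall     # start it now
pve-firewall status              # should now report: Status: enabled/running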
I'd like to humbly ask that CONFIG_USB_SERIAL_CONSOLE=y be added to the kernel config so that those who have hardware that lacks a real serial port can attach a USB serial device and see startup messages. Debian denied this request a few years back as "too unusual" of a use case, but I feel that...
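For reference, if the option were enabled, the USB serial console would be selected like any other console via the kernel command line (ttyUSB0 assumes a single adapter; the GRUB variable is the usual Debian one):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet console=tty0 console=ttyUSB0,115200"
# then regenerate the bootloader config
update-grub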
pve01: here (This is the node that doesn't block the 70, 80 subnet)
pve02: here
pve03: here
mon01: here
mon02: here
mon03: here
To my uneducated eye, it seems that iptables-save is empty on pve01, but it is not on the other nodes.
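A quick way to see the difference per node (a sketch; pve-firewall compile prints the ruleset it would load without applying it):

iptables-save | wc -l    # ~0 lines on pve01, many on the working nodes
pve-firewall compile     # shows what pve-firewall thinks the rules should be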
I have an odd issue with my cluster-wide firewall that I can't seem to figure out. I want to limit Proxmox GUI access to one network (192.168.10.0/24) on all nodes. I've been able to accomplish this on 5 of my 6 nodes, but don't understand why I can't make it work on the final node.
I have 6...
EDIT: I see there is a bug report on this. Patiently waiting for it to be resolved. :p
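For context, the restriction itself is the standard approach in /etc/pve/firewall/cluster.fw; the built-in management IPSet gates access to the GUI (port 8006) and SSH on every node (a sketch of what I have):

[OPTIONS]
enable: 1

[IPSET management]
192.168.10.0/24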
I have a fresh install of 2.1 on real hardware. After install, I updated packages, added an old PBS VM as a remote, and performed a sync job to keep its data. I am actively running a backup job now and I'm...
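For context, the remote and sync job were created along these lines (the names, host, and auth-id are placeholders; see proxmox-backup-manager(1)):

proxmox-backup-manager remote create old-pbs --host 192.168.10.50 --auth-id sync@pbs --password 'SECRET'
proxmox-backup-manager sync-job create pull-old --remote old-pbs --remote-store datastore1 --store datastore1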
I have had an issue with this as well for some time. It seems harmless, but quite annoying.
[ 102.228065] No source specified
[ 102.237489] fuse: Bad value for 'source'
This happens until all Ceph volumes come online for me, usually for about 150-180 seconds after power-up/reboot. Here are my...
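If you want to see the same pattern in your own logs, the messages and their timing are easy to pull from the current boot (generic journalctl usage):

journalctl -b --no-pager | grep -iE "fuse|No source specified"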
@r.jochum Which part? Joined all 6 boxes into one cluster or separated the monitors from the 3 computers that will host VMs, containers and Ceph storage?
I joined all 6 together because I can then administer from a single interface. I chose to keep everything on Proxmox to ease administration. It...
@Alwin Thanks for that. What I ended up doing was installing Proxmox 6.2 on the 3 new computers, joined them to my Proxmox cluster and set up Ceph monitors on each of the 3 new computers. I then destroyed the monitor instances on my other cluster computers and allowed the new computers to take...
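For anyone following the same path, the per-node sequence was roughly this (hostnames and mon IDs are examples):

pvecm add pve01           # run on each new node, pointing at an existing cluster member
pveceph install           # install the Ceph packages on the new node
pveceph mon create        # create a monitor on the new node
# then, on the old nodes, remove the old monitors one at a time:
pveceph mon destroy <monid>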
Ok, so I'm a little confused about how to proceed. I have installed, updated and configured Proxmox 6.2 on 3 new computers. Currently, each is its own standalone node. If I want these 3 new nodes to only be Ceph Monitors for my cluster (cluster "A"), how do I proceed? Do these 3 new nodes need to...
Thank you @Alwin. I was thinking it would be "overkill" because I would not be using any other functionality of Proxmox other than Ceph monitoring. For the ease of setup I'll give it a shot though. :) Thank you.
Hello all,
I'm running a 3 node Proxmox 6.2 cluster at home with Ceph Nautilus (14.2.10). Currently, each Proxmox node also acts as a Ceph Monitor node as well. I would like to separate the Ceph monitors from my 3 Proxmox nodes, running the 3 Ceph monitors on 3 dedicated computers in my...
I just double checked all 3 nodes and I only have the correct systemd services enabled for the monitors (ceph-mon@a.service on pve01; ceph-mon@b.service on pve02; ceph-mon@c.service on pve03). Any other ideas as to where else the extra monitors could be sourced from? I'm only seeing them on the...
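For the record, these are the checks I ran (standard Ceph/systemd commands):

ceph mon dump                             # the monitors the cluster itself knows about
systemctl list-units 'ceph-mon@*' --all   # per node: which mon units exist and their state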
I'm starting to decode what I think may be the answer to my question. According to the ceph-disk docs, ceph-disk, which was used for OSD creation, is deprecated in favor of ceph-volume, which Proxmox uses to create LVMs out of raw devices. There are definite gaps in my knowledge here, but after...
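What helped me connect the dots was comparing the two views of the same disk (standard commands, nothing Proxmox-specific):

ceph-volume lvm list    # the LVs ceph-volume created and which OSD each one backs
lsblk                   # the raw device now carries an LVM PV/LV instead of the partitions ceph-disk used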