Recent content by tsimblist

  1. kernel 6.5.11-8-pve issue with task:pvescheduler

    I suspected that might be the case. I did a little checking and there does not seem to be any QoS option for VLANs on my UniFi switches. So, it was a good learning exercise, but I haven't actually fixed anything yet. I am now contemplating an 8 port switch for a truly private network. Thanks...
  2. kernel 6.5.11-8-pve issue with task:pvescheduler

    I created a new vlan so I could implement a separate cluster network. I used my nested PVE test cluster (virtual machines) to learn how to change the cluster network configuration. Once I had that figured out, I made the configuration changes to my bare metal cluster and verified it was...
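
    A minimal sketch of the kind of change involved, assuming a hypothetical VLAN 55 on bridge vmbr0 and made-up addresses:

        # /etc/network/interfaces on each node: a VLAN interface dedicated to cluster traffic
        auto vmbr0.55
        iface vmbr0.55 inet static
            address 10.10.55.11/24

        # then edit the cluster config on one node: bump config_version and point
        # each node's ring0_addr at its new 10.10.55.x address
        nano /etc/pve/corosync.conf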
  3. kernel 6.5.11-8-pve issue with task:pvescheduler

    There is a separate vlan for Ceph traffic. But corosync does share the default network with everything else. This is a homelab with low traffic volume and it has not been a problem until now. Nodes 1, 2 & 3 (see below) were all running 6.5.11-8-pve at that point. Node 3 was the first to be...
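
    For context, a quick way to confirm which networks corosync and Ceph are bound to (paths are the PVE defaults):

        # address and link status corosync is using on this node
        corosync-cfgtool -s

        # Ceph public/cluster networks
        grep -E 'public_network|cluster_network' /etc/pve/ceph.conf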
  4. kernel 6.5.11-8-pve issue with task:pvescheduler

    No, I use Proxmox Backup Server. However, the PBS was offline when this issue presented itself. I have attached an excerpt from the syslog from about two minutes before the issue was reported on the serial console until I forced a reboot with a machine reset. I had issued a reboot command to...
  5. kernel 6.5.11-8-pve issue with task:pvescheduler

    Package versions:

        proxmox-ve: 8.1.0 (running kernel: 6.5.11-7-pve)
        pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
        proxmox-kernel-helper: 8.1.0
        pve-kernel-6.2: 8.0.5
        proxmox-kernel-6.5: 6.5.11-8
        proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
        proxmox-kernel-6.5.11-7-pve-signed...
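
    For reference, a listing in this form can be reproduced on a node with:

        # print the versions of all Proxmox-related packages
        pveversion -v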
  6. kernel 6.5.11-8-pve issue with task:pvescheduler

    I applied updates to some of the nodes in my Proxmox cluster this morning. This included the new kernel 6.5.11-8-pve. I rebooted the third server and it came back up with some issues. It didn't seem to have reported in with the Proxmox cluster. There was a little red x next to the Server in...
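
    A rough checklist of commands for that kind of symptom (a sketch, not specific to this report):

        # cluster membership and quorum as seen from the affected node
        pvecm status

        # state of the services the web UI and scheduler depend on
        systemctl status pve-cluster corosync pvescheduler pvestatd

        # recent log entries for the failing task, current boot only
        journalctl -b -u pvescheduler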
  7. Latest Update 5.13.19-4-pve broke my QEMU PCIe Sharing. Works with 5.13.19-3

    Same issue for me. It throws an error to the serial console. See below starting at 38.667995:
  8. osd move issue

    I tried this process yesterday with my homelab and was getting the same error about "failed to read label for /dev/ceph-.../osd-block-..." I did a little poking around and discovered that it was a permissions issue with device mapper for LVM. root@epyc3251:~# ls -al...
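
    A sketch of the kind of check and stopgap fix implied here (device names are placeholders; normally the Ceph udev rules set this ownership automatically):

        # the OSD's LVM volume is a device-mapper node; check who owns it
        ls -al /dev/mapper/
        ls -al /dev/dm-*

        # if it shows root:disk instead of ceph:ceph, ceph-volume cannot read the
        # label; hand the placeholder device back to the ceph user
        chown ceph:ceph /dev/dm-1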
  9. Ceph with multipath

    I did go ahead and connect the second SAS cable from the HBA to the redundant expander in the shelf. And then crawled up the learning curve to get multipath working. At that time, I had three 1 TB spinners using a shared 500 GB SSD for their DB/WAL device plus three 500 GB spinners using a...
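
    Roughly how an OSD with a shared DB/WAL device is created on PVE (device names are placeholders, and whether pveceph accepts a multipath device directly is a separate question):

        # spinner as the data device, with its RocksDB carved out of the shared SSD
        pveceph osd create /dev/sdb --db_dev /dev/sdg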
  10. Ceph with multipath

    This is almost exactly what I want to do. I have a simple SAS shelf with one HBA. Currently I have one SAS cable from the HBA to one expander in the shelf. I want to connect a second SAS cable from the HBA to the redundant expander in the shelf. This suggests improved bandwidth to the SATA...
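
    A starting-point sketch for the multipath side, assuming stock Debian tooling rather than anything Proxmox-specific:

        # install the multipath tools and inspect the resulting topology
        apt install multipath-tools
        multipath -ll

        # each disk should then show up once under /dev/mapper/ with both paths listed
        ls -al /dev/mapper/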
