Search results

  1. [SOLVED] VMs freeze with 100% CPU

    Hi, I disabled ballooning over a week ago and the VMs are still running without any problems, so it looks like ballooning was the culprit!
  2. [SOLVED] VMs freeze with 100% CPU

We're trying to find any similarities between VMs that have this issue and wonder if you have ballooning enabled on the VMs? We also wonder which machine versions are configured; we run pc-i440fx-5.1 and pc-i440fx-6.x. We also see this issue on memory-hungry VMs... Kernel version we run is...
  3. Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

Regarding support for the Intel SSD DC P4608, we had to compile the kernel (6.2.9) ourselves with the following quirk patch to make the kernel discover both NVMe devices. #intel-P4608-quirk.patch --- a/drivers/nvme/host/pci.c 2023-04-11 14:05:32.125909796 +0000 +++ b/drivers/nvme/host/pci.c...
  4. [SOLVED] VMs freeze with 100% CPU

We also experience this, about once a week for several of our Windows VMs. Any pointers on how to troubleshoot this would be most welcome. I discovered today that after doing a reset the CPU went back to normal levels, but the VM still does not respond to anything in the console or on the network. Hard to...
  5. Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

Yes, I believe the problem lies in here: https://git.kernel.org/pub/scm/linux/kernel/git/srini/nvmem.git/tree/drivers/nvme/host/pci.c#n3406 And it looks like there has been work to mitigate the issue, but it doesn't look like these changes will be merged upstream any time soon since there is...
  6. Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

Hi, we plan to upgrade all of our Ceph nodes to kernel 5.19 but we've hit a roadblock. After booting into kernel-5.19.7-2-pve only one out of two NVMe controllers on our Intel DC P4608 (SSDPECKE064T701) is available. This is a single PCIe card that has dual controllers. During boot there is an...
  7. Long heartbeat ping times on back interface seen

Hi! We just upgraded our Ceph nodes to PVE 7.2 with kernel 5.15.39-4 and Ceph 16.2.9 and experience this exact issue with OSD_SLOW_PING_TIME_FRONT/BACK. The previous version, PVE 7.1 with kernel 5.13.19-6 and Ceph 16.2.7, ran very stable for months. Hardware is Supermicro X11/X12 with Mellanox...
  8. Samba cephfs gateway fails erratically when reading/writing xattr

Hi, CephFS is mounted via the kernel on the hypervisor with the Proxmox GUI, and the 'mount' command returns the following for the mounted cephfs: 10.40.28.151,10.40.28.151:6789,10.40.28.152,10.40.28.152:6789,10.40.28.153,10.40.28.153:6789,10.40.28.157,10.40.28.158:/ on /mnt/pve/cephfs type ceph...
  9. Samba cephfs gateway fails erratically when reading/writing xattr

Hi, we run Samba in privileged containers with CTDB utilizing CephFS storage. Samba is version 4.12.8-SerNet-Ubuntu-8.focal running on Ubuntu 20.04 in Proxmox LXCs, and Ceph is version 14.2.11, also running on Proxmox 6.2. The CephFS volumes are bind-mounted into the container and shared with...
  10. [SOLVED] RX discards on bond with mtu 9000

    Thanks for pointing out the most obvious thing this could be -- and guess what, it was a flaky NIC! After replacing it with a spare NIC there were no more errors or discards :)
  11. [SOLVED] RX discards on bond with mtu 9000

    Hi, we experience some weird RX discards on a few of our Proxmox nodes after we recently switched from single to bonded interfaces for vm-bridges, and we can't seem to figure out why. Since we utilize CEPH, we also need to have access to the CEPH cluster on the same bond, both for VMs/CTs and...
  12. [SOLVED] Interface vlans not created for containers and VMs after uninstalling ifupdown2

Aha, that explains a lot! We do utilize Mellanox ConnectX-3! When I removed 'bridge-vlan-aware yes' everything works as expected and I can also set the MTU to 9000. Thanks for revealing that the X3 cards only support 128 vlans!
  13. [SOLVED] Interface vlans not created for containers and VMs after uninstalling ifupdown2

I ran ifup -a -d and then I see an error which I find a bit strange: Exception: cmd '/sbin/bridge vlan add vid 125-4094 dev bond0' failed: returned 255 (RTNETLINK answers: No space left on device). I'll attach the full debug output.
  14. [SOLVED] Interface vlans not created for containers and VMs after uninstalling ifupdown2

Hi, below is the output of pveversion. I have tried to install ifupdown2 again to no avail. When I spin up a container, the interfaces fwbr163i0, fwln163i0, fwpr163p0 and veth163i0 are created, but the vlan on bond0.1000 is not created... root@hk-proxnode-17:~# pveversion -v proxmox-ve: 6.1-2 (running...
  15. [SOLVED] Interface vlans not created for containers and VMs after uninstalling ifupdown2

Dear Proxmoxers! A strange problem happened to one of our cluster nodes tonight while we were trying to increase the MTU on the bond+vmbr interfaces so we can use 9000 on containers. The need for jumbo frames comes from running Ceph gateway containers with Samba as a frontend for video production...
  16. Tracking Center not in sync

Um ok -- but I actually did set up an additional node yesterday -- and it works like a charm! Now we have all logs collected on one host that we use for tracking! I don't see why this is not done by PMG by default?
  17. Tracking Center not in sync

OK, I see. Would it be possible to set up an additional node for Proxmox that receives syslogs via rsyslog from all nodes? Will the resulting syslog be searchable by pmg-log-tracker?
  18. Tracking Center not in sync

A couple of weeks ago I set up a PMG cluster with 3 nodes. It works fine now, after some initial problems syncing the database to the third node. I solved that by logging in to PostgreSQL with psql and deleting entries from the cstatic table. (The log complained about duplicate entries.)...
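Results 3, 5 and 6 concern a quirk entry added to nvme_id_table in drivers/nvme/host/pci.c so that both controllers of the Intel DC P4608 are discovered. The snippets do not show the full patch, so the fragment below only sketches the general shape of such an entry: the PCI device ID is a placeholder, and NVME_QUIRK_IGNORE_DEV_SUBNQN is given purely as an example of a flag used when dual controllers report conflicting identification.

    /* Sketch of a quirk entry for nvme_id_table in drivers/nvme/host/pci.c.
     * 0xffff is a placeholder, not the real P4608 device ID (check lspci -nn).
     * The quirk flag is only an example; the thread's actual patch may differ. */
    { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xffff),
        .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },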
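Results 12 to 15 trace the 'No space left on device' error back to the ConnectX-3 limit of 128 hardware VLAN filters: with 'bridge-vlan-aware yes', ifupdown2 tries to add nearly the whole VLAN range to the bond port and overflows that table. A rough /etc/network/interfaces sketch without the vlan-aware flag, with placeholder interface names, could look like the following; per-VM VLAN tags are then handled through automatically created sub-interfaces such as bond0.1000 instead of the bridge VLAN filter.

    auto bond0
    iface bond0 inet manual
        bond-slaves enp65s0f0 enp65s0f1
        bond-mode 802.3ad
        mtu 9000

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        mtu 9000
        # 'bridge-vlan-aware yes' and 'bridge-vids 2-4094' deliberately omitted,
        # since the ConnectX-3 VLAN filter table holds only 128 entries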
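For the central-logging idea in results 16 and 17, a minimal rsyslog sketch could look like the one below. The IP address, port and file path are placeholders, and it is an assumption that pmg-log-tracker can be fed from the aggregated file; the thread only confirms that collecting the logs on one host worked for the poster.

    # On every PMG node: forward mail-facility logs to the central host (placeholder IP)
    mail.*    @@192.0.2.10:514

    # On the central host: accept TCP syslog and write everything to one file
    module(load="imtcp")
    input(type="imtcp" port="514")
    *.*    /var/log/pmg-central.log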
