We're trying to find similarities between the VMs that have this issue, and wonder if you have ballooning enabled on your VMs? We also wonder which machine versions are configured; we run pc-i440fx-5.1 and pc-i440fx-6.x. We also see this issue on memory-hungry VMs... The kernel version we run is...
Regarding support for the Intel SSD DC P4608: we had to compile the kernel (6.2.9) ourselves with the following quirk patch to make the kernel discover both NVMe devices.
#intel-P4608-quirk.patch
--- a/drivers/nvme/host/pci.c 2023-04-11 14:05:32.125909796 +0000
+++ b/drivers/nvme/host/pci.c...
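In case the attached patch does not come through, the general shape of such a quirk is a single extra entry in nvme_id_table in drivers/nvme/host/pci.c. The sketch below is only an illustration, not our exact patch: the PCI device ID is a placeholder and NVME_QUIRK_IGNORE_DEV_SUBNQN is an assumption about which flag a dual-controller card like this needs.

static const struct pci_device_id nvme_id_table[] = {
        /* ... existing entries ... */
        { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xffff),      /* placeholder ID, not the real P4608 ID */
                .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },         /* assumed quirk flag */
        /* ... */
        { 0, }
};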
We also experience this, about once a week, for several of our Windows VMs. Any pointers on how to troubleshoot this would be most welcome. I discovered today that after doing a reset the CPU usage returned to normal levels, but the VM still did not respond to anything on the console or the network. Hard to...
Yes, I believe the problem lies here:
https://git.kernel.org/pub/scm/linux/kernel/git/srini/nvmem.git/tree/drivers/nvme/host/pci.c#n3406
It looks like there has been work to mitigate the issue, but these changes don't seem close to being merged upstream, since there is...
Hi
We plan to upgrade all of our Ceph nodes to kernel-5.19, but we've hit a roadblock. After booting into kernel-5.19.7-2-pve, only one of the two NVMe controllers on our Intel DC P4608 (SSDPECKE064T701) is available. This is a single PCIe card with dual controllers. During boot there is an...
Hi!
We just upgraded our Ceph nodes to PVE 7.2 with kernel 5.15.39-4 and Ceph 16.2.9 and are experiencing this exact issue with OSD_SLOW_PING_TIME_FRONT/BACK.
The previous version, PVE 7.1 with kernel 5.13.19-6 and Ceph 16.2.7, ran very stably for months.
Hardware is Supermicro X11/X12 with Mellanox...
Hi,
CephFS is mounted via the kernel client on the hypervisor through the Proxmox GUI, and the 'mount' command returns the following for the mounted CephFS:
10.40.28.151,10.40.28.151:6789,10.40.28.152,10.40.28.152:6789,10.40.28.153,10.40.28.153:6789,10.40.28.157,10.40.28.158:/ on /mnt/pve/cephfs type ceph...
Hi
We run Samba in privileged containers with CTDB, using CephFS storage. Samba is version 4.12.8-SerNet-Ubuntu-8.focal running on Ubuntu 20.04 in Proxmox LXCs, and Ceph is version 14.2.11, also running on Proxmox 6.2. The CephFS volumes are bind-mounted into the containers and shared with...
Thanks for pointing out the most obvious thing this could be -- and guess what, it was a flaky NIC! After replacing it with a spare NIC there were no more errors or discards :)
Hi, we are seeing some weird RX discards on a few of our Proxmox nodes after recently switching from single to bonded interfaces for the VM bridges, and we can't figure out why. Since we use Ceph, we also need access to the Ceph cluster on the same bond, both for VMs/CTs and...
Aha, that explains a lot! We do use Mellanox ConnectX-3 cards! When I removed 'bridge-vlan-aware yes', everything worked as expected and I could also set the MTU to 9000. Thanks for revealing that the ConnectX-3 cards only support 128 VLANs!
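For anyone hitting the same limit, this is roughly what our working bond + bridge config looks like now. Take it as a sketch: interface names are examples, not a copy of our actual /etc/network/interfaces.

auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        mtu 9000

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        mtu 9000
        # 'bridge-vlan-aware yes' removed: the ConnectX-3 VLAN filter only holds 128 entries,
        # so programming the full VID range fails with "No space left on device"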
I ran 'ifup -a -d' and then saw an error which I find a bit strange:
Exception: cmd '/sbin/bridge vlan add vid 125-4094 dev bond0' failed: returned 255 (RTNETLINK answers: No space left on device)
I'll attach the full debug output.
Hi, below is the version output.
I have tried reinstalling ifupdown2, to no avail. When I spin up a container, the interfaces fwbr163i0, fwln163i0, fwpr163p0, and veth163i0 are created, but the VLAN interface bond0.1000 is not created...
root@hk-proxnode-17:~# pveversion -v
proxmox-ve: 6.1-2 (running...
Dear Proxmoxers!
A strange problem happened to one of our cluster nodes tonight while we were trying to increase the MTU on the bond and vmbr interfaces so we can use 9000 in the containers. The need for jumbo frames comes from running Ceph gateway containers with Samba as a frontend for video production...
Um, OK -- but I actually did set up an additional node yesterday, and it works like a charm! Now we have all logs collected on one host that we use for tracking. I don't see why PMG doesn't do this by default?
OK, I see. Would it be possible to set up an additional Proxmox node that receives syslogs via rsyslog from all nodes? Would the resulting syslog be searchable by pmg-log-tracker?
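If it is possible, I imagine the rsyslog side would be roughly the following (a sketch only; host name and port are examples). On the collector node, enable a TCP listener, and on every other node, forward everything to it; the received messages should then end up in the collector's /var/log/syslog through the normal rules.

# on the collector node, e.g. /etc/rsyslog.d/collector.conf
module(load="imtcp")
input(type="imtcp" port="514")

# on every other node, e.g. /etc/rsyslog.d/forward.conf
*.* @@logcollector.example.com:514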
A couple of weeks ago I set up a PMG cluster with 3 nodes. It works fine now, after some initial problems syncing the database to the third node. I solved that by logging in to PostgreSQL with psql and deleting entries from the cstatic table. (The log complained about duplicate entries.)...