Kernel 5.11

We just uploaded a 5.11 kernel to our repositories. The 5.4 kernel remains the default on the Proxmox VE 6.x series; 5.11 is an option.

How to install:
  • apt update
  • apt install pve-kernel-5.11
  • reboot
Future updates to the 5.11 kernel will now get installed automatically.
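
You can verify that the new kernel is running after the reboot with:
Code:
uname -r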

Feedback is welcome!
Hi,

can anyone confirm a working Zen 3 setup with this kernel or the new ISO?
Our company is about to invest in a Zen 2 (Rome) or, preferably, Zen 3 (Milan) setup.

It could take a month or two before the hardware arrives, but I don't think PVE 7.0 will be ready by then.

Regards,
Martijn
 
We tried Lenovo R350 servers (Intel E5-2630, aka Sandy Bridge EP) and couldn't get Ceph to act as a client. OSDs came up and regained health, but CephFS wouldn't mount and RBD images were inaccessible.

PVE 6.4 with all updates and kernel 5.11.17-1-pve:

Messages in /var/log/syslog:
Code:
Jun  4 17:17:16 kvm6a corosync[2588]:   [KNET  ] pmtud: PMTUD link change for host: 3 link: 0 from 469 to 9109
Jun  4 17:17:16 kvm6a corosync[2588]:   [KNET  ] pmtud: PMTUD link change for host: 2 link: 0 from 469 to 9109
Jun  4 17:17:16 kvm6a corosync[2588]:   [KNET  ] pmtud: Global data MTU changed to: 9109
Jun  4 17:17:17 kvm6a kernel: [   30.872255] libceph: mon0 (1)10.254.1.2:6789 session established
Jun  4 17:17:17 kvm6a kernel: [   30.872474] libceph: mon0 (1)10.254.1.2:6789 socket closed (con state OPEN)
Jun  4 17:17:17 kvm6a kernel: [   30.872503] libceph: mon0 (1)10.254.1.2:6789 session lost, hunting for new mon
Jun  4 17:17:17 kvm6a kernel: [   30.877920] libceph: mon1 (1)10.254.1.3:6789 session established
Jun  4 17:17:17 kvm6a kernel: [   30.926185] libceph: no match of type 1 in addrvec
Jun  4 17:17:17 kvm6a kernel: [   30.926235] libceph: corrupt full osdmap (-2) epoch 81545 off 85622 (0000000011735734 of 00000000de295149-00000000706e0723)
Jun  4 17:17:17 kvm6a kernel: [   30.926310] osdmap: 00000000: 08 07 05 ef 2e 00 09 01 3a 3d 0a 00 2a 55 4d b9  ........:=..*UM.
Jun  4 17:17:17 kvm6a kernel: [   30.926314] osdmap: 00000010: 5d 56 4d 6a a1 e2 e4 f9 8e f1 05 2f 89 3e 01 00  ]VMj......./.>..
Jun  4 17:17:17 kvm6a kernel: [   30.926317] osdmap: 00000020: 5a 3e 93 5c d9 86 0a 22 68 43 ba 60 5e 6e e9 07  Z>.\..."hC.`^n..
Jun  4 17:17:17 kvm6a kernel: [   30.926319] osdmap: 00000030: 06 00 00 00 01 00 00 00 00 00 00 00 1d 05 47 01  ..............G.


This is a production host that we booted with kernel 5.11 during a maintenance window, in preparation for PVE moving to kernel 5.11 in the next quarter.

We restarted it once with no change in behaviour before booting 5.4.114-1-pve again, after which everything worked perfectly.
 
couldn't get Ceph to act as a client. OSDs came up and regained health, but CephFS wouldn't mount and RBD images were inaccessible.

Which Ceph version do your PVE hosts try to talk to? Ceph Nautilus, Octopus (and FWIW Pacific) work here with the 5.11-based kernel.
 
Ceph Octopus. Our office cluster uses older hardware and is functioning perfectly with kernel 5.11 and the same OvS, KRBD and Ceph configuration.

From the node that can't operate as a client when booting 5.11:
Code:
[admin@kvm6a ~]# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.114-1-pve)
pve-manager: 6.4-6 (running version: 6.4-6/be2fa32c)
pve-kernel-5.4: 6.4-2
pve-kernel-helper: 6.4-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
ceph: 15.2.11-pve1
ceph-fuse: 15.2.11-pve1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-1
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-2
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.1.6-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-5
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-3
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 
We just uploaded a 5.11 kernel to our repositories. The 5.4 kernel remains the default on the Proxmox VE 6.x series; 5.11 is an option.

How to install:
  • apt update
  • apt install pve-kernel-5.11
  • reboot
Future updates to the 5.11 kernel will now get installed automatically.

Feedback is welcome!
Hi Martin,

Kernel 5.11 isn't running on my Intel NUC 11. After it tries to boot, I don't get any signal on the screen I plugged in to see what the NUC is doing.

I had to do some troubleshooting to get Proxmox VE 6.4 with the 5.4 kernel installed on the NUC 11.
 
I don't think anyone has said they cannot use kernel 5.11 on the NUC 11. The issue is purely the Proxmox graphical installer with the NUC 11 graphics card/driver. As long as you're using the latest Proxmox ISO, all you need to do during the install is:
  1. Wait until the installer shows an error about Chrony
  2. Press Alt + F3 to switch to the TTY3 console
  3. Generate a new Xorg config and swap the video driver it uses by running
    1. Xorg -configure
    2. mv /root/xorg.conf.new /etc/X11/xorg.conf
    3. nano /etc/X11/xorg.conf
  4. Find the "modesetting" driver entry and change it to "fbdev".
  5. Save the file, exit the editor.
  6. Type "exit" to log out from TTY3.
  7. Move to TTY1 (Alt + F1) and type "startx".
The installer will then finish (sometimes with another error). You then just need to boot into Proxmox and install 5.11 by running:
  1. apt update
  2. apt install pve-kernel-5.11
  3. reboot
Using the latest Proxmox ISO, I've not had to do anything special with network adapters etc. or mess around with the repository list.
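
For reference, after step 4 the Device section in /etc/X11/xorg.conf should look roughly like this (a sketch; identifiers and BusID will differ per system):
Code:
Section "Device"
    Identifier "Card0"
    Driver     "fbdev"      # was "modesetting"
    BusID      "PCI:0:2:0"  # example value; keep whatever Xorg -configure generated
EndSection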
 
Has 5.11 been tuned for performance?

Current Stable 5.4:

Code:
root@prx5 ~ # pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.124-1-pve)
pve-manager: 6.4-9 (running version: 6.4-9/5f5c0e3f)
pve-kernel-5.4: 6.4-4
pve-kernel-helper: 6.4-4
pve-kernel-5.4.124-1-pve: 5.4.124-1

root@prx5 ~ # dd if=/dev/zero of=/data/file15.out bs=1M count=100k
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 33.4635 s, 3.2 GB/s

Testing Kernel 5.11:

Code:
root@prx5 ~ # pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.11.21-1-pve)
pve-manager: 6.4-9 (running version: 6.4-9/5f5c0e3f)
pve-kernel-5.11: 7.0-2~bpo10
pve-kernel-5.4: 6.4-4
pve-kernel-helper: 6.4-4
pve-kernel-5.11.21-1-pve: 5.11.21-1~bpo10

root@prx5 ~ # dd if=/dev/zero of=/data/file25.out bs=1M count=100k
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 39.4389 s, 2.7 GB/s

Those were consistent results, and all other conditions on the server were equal: ~15% less performance with 5.11. Quite a surprising difference, given that 5.11 is hailed for "increased performance" on AMD Zen CPUs. YMMV
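
NB: dd from /dev/zero without direct I/O is heavily influenced by the page cache (and by compression, if the filesystem does any), so a more controlled sequential-write comparison could use fio with direct I/O. This is my suggestion, not part of the benchmark above:
Code:
fio --name=seqwrite --filename=/data/fio.test --rw=write \
    --bs=1M --size=10G --direct=1 --ioengine=libaio --group_reporting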
 
It appears my issue with Ceph and kernel 5.11 relates to max_osd being greater than the number of OSDs.

That is most probably why this can't be reproduced in a lab environment.

I presume the following patch won't be backported to 5.11; any chance of cherry-picking it?


NB: Having max_osd larger than the number of OSDs does not always corrupt the osdmap. We have one cluster humming along perfectly, but couldn't use 5.11 in two other clusters we tried it on.


To check:
Code:
[admin@kvm1 ~]# ceph osd dump | grep max
max_osd 24
[admin@kvm1 ~]# ceph -s | grep osds
osd: 8 osds: 6 up (since 11h), 6 in (since 12h)

In this case the system is susceptible, as 24 != 8.
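
For scripting the same susceptibility check across hosts, something like this should work (a sketch; adjust for your environment):
Code:
max=$(ceph osd dump | awk '/^max_osd/ {print $2}')
num=$(ceph osd ls | wc -l)
if [ "$max" -ne "$num" ]; then
  echo "potentially susceptible: max_osd=$max, osd count=$num"
fi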

NB (edit): So this patch is already included as of 5.11.20, confirmed by finding the commit signature in:
https://mirrors.edge.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.11.20


Patch:
https://git.kernel.org/pub/scm/linu.../?id=3f1c6f2122fc780560f09735b6d1dbf39b44eb0f


Other discussions relating to this:
https://www.mail-archive.com/ceph-users@ceph.io/msg10920.html
https://stackoverflow.com/questions/67245336/cephfs-seems-to-have-a-problem-with-linux-5-11-kernels
 
Tried the latest pve-kernel-5.11.21 and it's working thus far. CephFS did however spew out tons of errors, and Ceph reported the MDS as unavailable. You may simply need to wait, but it appeared to start working once we cleared blacklisted client/port combinations.

Cluster health:
Code:
[admin@kvm1 ~]# ceph -s
  cluster:
    id:     c2f2d56b-77d3-4c42-bbba-748d86cc2e2a
    health: HEALTH_WARN
            1 filesystem is degraded

  services:
    mon: 2 daemons, quorum kvm1,kvm2 (age 79s)
    mgr: kvm1(active, since 67s), standbys: kvm2
    mds: cephfs:1/1 {0=kvm1=up:reconnect} 1 up:standby
    osd: 8 osds: 6 up (since 27s), 6 in (since 14h)

  task status:

  data:
    pools:   4 pools, 161 pgs
    objects: 378.54k objects, 1.3 TiB
    usage:   2.4 TiB used, 2.5 TiB / 4.9 TiB avail
    pgs:     161 active+clean



mnt-pve-cephfs service detail:
Code:
[admin@kvm1 ~]# systemctl status mnt-pve-cephfs.mount
● mnt-pve-cephfs.mount - /mnt/pve/cephfs
   Loaded: loaded (/run/systemd/system/mnt-pve-cephfs.mount; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sat 2021-07-03 11:02:59 SAST; 8s ago
    Where: /mnt/pve/cephfs
     What: 10.0.0.1,10.0.0.2:/

Jul 03 11:02:59 kvm1 systemd[1]: Mounting /mnt/pve/cephfs...
Jul 03 11:02:59 kvm1 mount[3627]: mount error: no mds server is up or the cluster is laggy
Jul 03 11:02:59 kvm1 systemd[1]: mnt-pve-cephfs.mount: Mount process exited, code=exited, status=32/n/a
Jul 03 11:02:59 kvm1 systemd[1]: mnt-pve-cephfs.mount: Failed with result 'exit-code'.
Jul 03 11:02:59 kvm1 systemd[1]: Failed to mount /mnt/pve/cephfs.



Code:
Jul  3 11:02:49 kvm1 systemd[1]: getty@tty1.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Jul  3 11:02:49 kvm1 systemd[1]: Mounting /mnt/pve/cephfs...
Jul  3 11:02:49 kvm1 kernel: [  122.285192] libceph: mon1 (1)10.0.0.2:6789 session established
Jul  3 11:02:49 kvm1 mount[3470]: mount error: no mds server is up or the cluster is laggy
Jul  3 11:02:49 kvm1 systemd[1]: mnt-pve-cephfs.mount: Mount process exited, code=exited, status=32/n/a
Jul  3 11:02:49 kvm1 systemd[1]: mnt-pve-cephfs.mount: Failed with result 'exit-code'.
Jul  3 11:02:49 kvm1 systemd[1]: Failed to mount /mnt/pve/cephfs.
Jul  3 11:02:49 kvm1 kernel: [  122.286555] libceph: client82474213 fsid c2f2d56b-77d3-4c42-bbba-748d86cc2e2a
Jul  3 11:02:49 kvm1 kernel: [  122.286611] ceph: No mds server is up or the cluster is laggy
Jul  3 11:02:49 kvm1 pvestatd[2195]: mount error: See "systemctl status mnt-pve-cephfs.mount" and "journalctl -xe" for details.
Jul  3 11:02:58 kvm1 systemd[1]: Reloading.
Jul  3 11:02:59 kvm1 systemd[1]: Mounting /mnt/pve/cephfs...
Jul  3 11:02:59 kvm1 mount[3627]: mount error: no mds server is up or the cluster is laggy
Jul  3 11:02:59 kvm1 systemd[1]: mnt-pve-cephfs.mount: Mount process exited, code=exited, status=32/n/a
Jul  3 11:02:59 kvm1 systemd[1]: mnt-pve-cephfs.mount: Failed with result 'exit-code'.
Jul  3 11:02:59 kvm1 systemd[1]: Failed to mount /mnt/pve/cephfs.
Jul  3 11:02:59 kvm1 kernel: [  132.210068] libceph: mon0 (1)10.0.0.1:6789 session established
Jul  3 11:02:59 kvm1 kernel: [  132.210475] libceph: client82474234 fsid c2f2d56b-77d3-4c42-bbba-748d86cc2e2a
Jul  3 11:02:59 kvm1 kernel: [  132.210554] ceph: No mds server is up or the cluster is laggy
Jul  3 11:02:59 kvm1 pvestatd[2195]: mount error: See "systemctl status mnt-pve-cephfs.mount" and "journalctl -xe" for details.
Jul  3 11:03:00 kvm1 systemd[1]: Starting Proxmox VE replication runner...
Jul  3 11:03:01 kvm1 systemd[1]: pvesr.service: Succeeded.
Jul  3 11:03:01 kvm1 systemd[1]: Started Proxmox VE replication runner.
Jul  3 11:03:05 kvm1 systemd[1]: Started Session 4 of user admin.
Jul  3 11:03:08 kvm1 systemd[1]: Reloading.
Jul  3 11:03:09 kvm1 systemd[1]: Mounting /mnt/pve/cephfs...
Jul  3 11:03:09 kvm1 mount[3785]: mount error: no mds server is up or the cluster is laggy
Jul  3 11:03:09 kvm1 kernel: [  142.320536] libceph: mon1 (1)10.0.0.2:6789 session established
Jul  3 11:03:09 kvm1 kernel: [  142.321296] libceph: client82474243 fsid c2f2d56b-77d3-4c42-bbba-748d86cc2e2a
Jul  3 11:03:09 kvm1 kernel: [  142.321347] ceph: No mds server is up or the cluster is laggy
Jul  3 11:03:09 kvm1 systemd[1]: mnt-pve-cephfs.mount: Mount process exited, code=exited, status=32/n/a
Jul  3 11:03:09 kvm1 systemd[1]: mnt-pve-cephfs.mount: Failed with result 'exit-code'.
Jul  3 11:03:09 kvm1 systemd[1]: Failed to mount /mnt/pve/cephfs.
Jul  3 11:03:09 kvm1 pvestatd[2195]: mount error: See "systemctl status mnt-pve-cephfs.mount" and "journalctl -xe" for details.
Jul  3 11:03:18 kvm1 systemd[1]: Reloading.
Jul  3 11:03:19 kvm1 systemd[1]: Mounting /mnt/pve/cephfs...
Jul  3 11:03:19 kvm1 kernel: [  152.224645] libceph: mon0 (1)10.0.0.1:6789 session established
Jul  3 11:03:19 kvm1 kernel: [  152.224966] libceph: client82474260 fsid c2f2d56b-77d3-4c42-bbba-748d86cc2e2a
Jul  3 11:03:19 kvm1 kernel: [  152.225030] ceph: No mds server is up or the cluster is laggy
Jul  3 11:03:19 kvm1 mount[3955]: mount error: no mds server is up or the cluster is laggy
Jul  3 11:03:19 kvm1 systemd[1]: mnt-pve-cephfs.mount: Mount process exited, code=exited, status=32/n/a
Jul  3 11:03:19 kvm1 systemd[1]: mnt-pve-cephfs.mount: Failed with result 'exit-code'.
Jul  3 11:03:19 kvm1 systemd[1]: Failed to mount /mnt/pve/cephfs.
Jul  3 11:03:19 kvm1 pvestatd[2195]: mount error: See "systemctl status mnt-pve-cephfs.mount" and "journalctl -xe" for details.
Jul  3 11:03:29 kvm1 systemd[1]: Reloading.
Jul  3 11:03:29 kvm1 systemd[1]: Mounting /mnt/pve/cephfs...
Jul  3 11:03:29 kvm1 kernel: [  162.288591] libceph: mon1 (1)10.0.0.2:6789 session established
Jul  3 11:03:29 kvm1 mount[4172]: mount error: no mds server is up or the cluster is laggy
Jul  3 11:03:29 kvm1 systemd[1]: mnt-pve-cephfs.mount: Mount process exited, code=exited, status=32/n/a
Jul  3 11:03:29 kvm1 systemd[1]: mnt-pve-cephfs.mount: Failed with result 'exit-code'.
Jul  3 11:03:29 kvm1 systemd[1]: Failed to mount /mnt/pve/cephfs.
Jul  3 11:03:29 kvm1 kernel: [  162.289385] libceph: client82474275 fsid c2f2d56b-77d3-4c42-bbba-748d86cc2e2a
Jul  3 11:03:29 kvm1 kernel: [  162.289433] ceph: No mds server is up or the cluster is laggy
Jul  3 11:03:29 kvm1 pvestatd[2195]: mount error: See "systemctl status mnt-pve-cephfs.mount" and "journalctl -xe" for details.



Blacklist entries:
Code:
[admin@kvm1 ~]# ceph osd dump | grep blacklist

blacklist 10.0.0.1:6826/2035 expires 2021-07-04T11:02:03.842806+0200
blacklist 10.0.0.1:0/2262594684 expires 2021-07-04T11:02:03.842806+0200
<snip>
blacklist 10.0.0.2:6800/1852913068 expires 2021-07-03T21:09:26.991872+0200
blacklist 10.0.0.2:6801/1852913068 expires 2021-07-03T21:09:26.991872+0200


Clear blacklist entries:
Code:
for i in $(ceph osd dump | grep blacklist | awk '{print $2}'); do
  ceph osd blacklist rm "$i"
done
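
Newer Ceph releases also have a one-shot variant (verify it exists on your Ceph version before relying on it):
Code:
ceph osd blacklist clear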


Then it works:
Code:
Jul  3 11:03:48 kvm1 systemd[1]: Reloading.
Jul  3 11:03:48 kvm1 systemd[1]: Mounting /mnt/pve/cephfs...
Jul  3 11:03:48 kvm1 kernel: [  181.767378] libceph: mon0 (1)10.0.0.1:6789 session established
Jul  3 11:03:48 kvm1 kernel: [  181.767679] libceph: client82474296 fsid c2f2d56b-77d3-4c42-bbba-748d86cc2e2a
Jul  3 11:03:48 kvm1 systemd[1]: Mounted /mnt/pve/cephfs.
Jul  3 11:04:00 kvm1 systemd[1]: Starting Proxmox VE replication runner...
Jul  3 11:04:00 kvm1 systemd[1]: pvesr.service: Succeeded.
Jul  3 11:04:00 kvm1 systemd[1]: Started Proxmox VE replication runner.
Jul  3 11:04:08 kvm1 sh[937]: Running command: /usr/sbin/ceph-volume simple trigger 1-048e8b8e-beaa-4752-8483-654727469c42
<snip 29 identical lines>
Jul  3 11:04:08 kvm1 sh[908]: Running command: /usr/sbin/ceph-volume simple trigger 12-4450f955-977e-4bd3-98d6-132568e3b31d
<snip 20 identical lines>
Jul  3 11:04:08 kvm1 sh[918]: Running command: /usr/sbin/ceph-volume simple trigger 0-e7bf2aa0-904e-4353-90a2-fce0ca595d4e
<snip 21 identical lines>
Jul  3 11:04:08 kvm1 sh[908]: Running command: /usr/sbin/ceph-volume simple trigger 12-4450f955-977e-4bd3-98d6-132568e3b31d
<snip 8 identical lines>
Jul  3 11:04:08 kvm1 sh[918]: Running command: /usr/sbin/ceph-volume simple trigger 0-e7bf2aa0-904e-4353-90a2-fce0ca595d4e
<snip 7 identical lines>
Jul  3 11:04:08 kvm1 sh[919]: Running command: /usr/sbin/ceph-volume simple trigger 2-65cfab53-514e-48d0-98d2-aefc61b9f412
<snip 29 identical lines>
Jul  3 11:04:08 kvm1 systemd[1]: ceph-volume@simple-1-048e8b8e-beaa-4752-8483-654727469c42.service: Succeeded.
Jul  3 11:04:08 kvm1 systemd[1]: Started Ceph Volume activation: simple-1-048e8b8e-beaa-4752-8483-654727469c42.
Jul  3 11:04:08 kvm1 systemd[1]: ceph-volume@simple-12-4450f955-977e-4bd3-98d6-132568e3b31d.service: Succeeded.
Jul  3 11:04:08 kvm1 systemd[1]: Started Ceph Volume activation: simple-12-4450f955-977e-4bd3-98d6-132568e3b31d.
Jul  3 11:04:08 kvm1 systemd[1]: ceph-volume@simple-0-e7bf2aa0-904e-4353-90a2-fce0ca595d4e.service: Succeeded.
Jul  3 11:04:08 kvm1 systemd[1]: Started Ceph Volume activation: simple-0-e7bf2aa0-904e-4353-90a2-fce0ca595d4e.
Jul  3 11:04:08 kvm1 systemd[1]: ceph-volume@simple-2-65cfab53-514e-48d0-98d2-aefc61b9f412.service: Succeeded.
Jul  3 11:04:08 kvm1 systemd[1]: Started Ceph Volume activation: simple-2-65cfab53-514e-48d0-98d2-aefc61b9f412.
Jul  3 11:04:08 kvm1 systemd[1]: Reached target Multi-User System.
Jul  3 11:04:08 kvm1 systemd[1]: Reached target Graphical Interface.
Jul  3 11:04:08 kvm1 systemd[1]: Starting Update UTMP about System Runlevel Changes...
Jul  3 11:04:08 kvm1 systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
Jul  3 11:04:08 kvm1 systemd[1]: Started Update UTMP about System Runlevel Changes.
Jul  3 11:04:08 kvm1 systemd[1]: Startup finished in 18.984s (kernel) + 3min 2.767s (userspace) = 3min 21.751s.
Jul  3 11:04:44 kvm1 systemd[1]: Started Session 6 of user admin.
Jul  3 11:04:55 kvm1 systemd[1]: session-6.scope: Succeeded.



The system however constantly resets. I'm out of space in this post, so I'll put the logs in the follow-on one...
Code:
[admin@kvm1 ~]# ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    4.9 TiB  2.5 TiB  2.3 TiB   2.4 TiB      48.21
TOTAL  4.9 TiB  2.5 TiB  2.3 TiB   2.4 TiB      48.21

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
rbd_hdd                 1  128  1.1 TiB  350.03k  2.1 TiB  43.99    1.4 TiB
cephfs_data            12   16  111 GiB   28.43k  222 GiB   7.40    1.4 TiB
cephfs_metadata        13   16  180 MiB       67  360 MiB   0.01    1.4 TiB
device_health_metrics  17    1  4.6 MiB       11  9.2 MiB      0    1.4 TiB

[admin@kvm1 ~]# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META      AVAIL    %USE   VAR   PGS  STATUS
10    hdd  0.78819   1.00000  807 GiB  431 GiB  430 GiB  1.4 MiB  1023 MiB  376 GiB  53.39  1.11   56      up
11    hdd  0.78819   1.00000  807 GiB  376 GiB  375 GiB  5.2 MiB  1019 MiB  432 GiB  46.53  0.97   52      up
12    hdd  0.81839         0      0 B      0 B      0 B      0 B       0 B      0 B      0     0    0    down
13    hdd  0.81839   1.00000  838 GiB  398 GiB  397 GiB  765 KiB  1023 MiB  440 GiB  47.51  0.99   53      up
20    hdd  0.78819   1.00000  807 GiB  379 GiB  378 GiB  2.1 MiB  1022 MiB  428 GiB  46.97  0.97   48      up
21    hdd  0.78819   1.00000  807 GiB  358 GiB  357 GiB  792 KiB  1023 MiB  449 GiB  44.40  0.92   48      up
22    hdd  0.81839         0      0 B      0 B      0 B      0 B       0 B      0 B      0     0    0    down
23    hdd  0.90939   1.00000  931 GiB  467 GiB  466 GiB  4.5 MiB  1019 MiB  464 GiB  50.20  1.04   65      up
                       TOTAL  4.9 TiB  2.4 TiB  2.3 TiB   15 MiB   6.0 GiB  2.5 TiB  48.21
MIN/MAX VAR: 0.92/1.11  STDDEV: 2.89



PVE package versions:
Code:
[admin@kvm1a ~]# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.11.21-1-pve)
pve-manager: 6.4-9 (running version: 6.4-9/5f5c0e3f)
pve-kernel-5.11: 7.0-2~bpo10
pve-kernel-5.4: 6.4-3
pve-kernel-helper: 6.4-3
pve-kernel-5.11.21-1-pve: 5.11.21-1~bpo10
pve-kernel-5.4.119-1-pve: 5.4.119-1
ceph: 15.2.13-pve1~bpo10
ceph-fuse: 15.2.13-pve1~bpo10
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.1.10-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-6
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 
The system stays up for about 20 minutes and then appears to lose network connectivity:
Code:
Jul  3 11:23:00 kvm2 systemd[1]: Starting Proxmox VE replication runner...
Jul  3 11:23:01 kvm2 systemd[1]: pvesr.service: Succeeded.
Jul  3 11:23:01 kvm2 systemd[1]: Started Proxmox VE replication runner.
Jul  3 11:23:22 kvm2 pmxcfs[1989]: [status] notice: received log
Jul  3 11:23:22 kvm2 pmxcfs[1989]: [status] notice: received log
Jul  3 11:23:26 kvm2 corosync[2062]:   [TOTEM ] Token has not been received in 750 ms
Jul  3 11:23:26 kvm2 corosync[2062]:   [TOTEM ] A processor failed, forming new configuration: token timed out (1000ms), waiting 1200ms for consensus.
Jul  3 11:23:27 kvm2 corosync[2062]:   [QUORUM] Sync members[1]: 2
Jul  3 11:23:27 kvm2 corosync[2062]:   [QUORUM] Sync left[1]: 1
Jul  3 11:23:27 kvm2 corosync[2062]:   [TOTEM ] A new membership (2.756) was formed. Members left: 1
Jul  3 11:23:27 kvm2 corosync[2062]:   [TOTEM ] Failed to receive the leave message. failed: 1
Jul  3 11:23:27 kvm2 pmxcfs[1989]: [dcdb] notice: members: 2/1989
Jul  3 11:23:27 kvm2 corosync[2062]:   [QUORUM] This node is within the non-primary component and will NOT provide any services.
Jul  3 11:23:27 kvm2 corosync[2062]:   [QUORUM] Members[1]: 2
Jul  3 11:23:27 kvm2 corosync[2062]:   [MAIN  ] Completed service synchronization, ready to provide service.
Jul  3 11:23:27 kvm2 pmxcfs[1989]: [status] notice: members: 2/1989
Jul  3 11:23:27 kvm2 pmxcfs[1989]: [status] notice: node lost quorum



I managed to catch an instance when this happened; it was happily pinging along until the IPMI watchdog timer expired:
Code:
[root@kvm2 ~]# ping 10.0.0.2 -i 0.1 -s 9212
PING 10.0.0.2 (10.0.0.2) 9212(9240) bytes of data.
9220 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms
9220 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.042 ms
9220 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.039 ms
9220 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.038 ms
9220 bytes from 10.0.0.2: icmp_seq=5 ttl=64 time=0.039 ms
9220 bytes from 10.0.0.2: icmp_seq=6 ttl=64 time=0.038 ms
<snip>
9220 bytes from 10.0.0.2: icmp_seq=5621 ttl=64 time=0.039 ms
9220 bytes from 10.0.0.2: icmp_seq=5622 ttl=64 time=0.038 ms
9220 bytes from 10.0.0.2: icmp_seq=5623 ttl=64 time=0.037 ms
9220 bytes from 10.0.0.2: icmp_seq=5624 ttl=64 time=0.038 ms
9220 bytes from 10.0.0.2: icmp_seq=5625 ttl=64 time=0.039 ms
9220 bytes from 10.0.0.2: icmp_seq=5626 ttl=64 time=0.039 ms
<other host reset>


There is zero load; all VMs on this sandbox are shut down. The host can sometimes stay up for an hour, but generally not longer than 20 minutes. I presume the culprit is that the system is sometimes unable to touch the watchdog countdown timer:
Code:
Jul  3 12:04:14 kvm1 kernel: [  592.760000] IPMI Watchdog: response: Error ff on cmd 22
Jul  3 12:04:14 kvm1 watchdog-mux[937]: watchdog update failed: Invalid argument
Jul  3 12:04:36 kvm1 kernel: [  614.112682] IPMI Watchdog: response: The IPMI controller appears to have been reset, will attempt to reinitialize the watchdog timer


I started running ipmitool to see what the timer value was, but at that exact moment the problem occurred on the second node.

First node:
Code:
while [ 1 -eq 1 ]; do ipmitool mc watchdog get; sleep 3; done
Watchdog Timer Use:     SMS/OS (0x44)
Watchdog Timer Is:      Started/Running
Watchdog Timer Actions: Power Cycle (0x03)
Pre-timeout interval:   0 seconds
Timer Expiration Flags: 0x00
Initial Countdown:      10 sec
Present Countdown:      10 sec

Watchdog Timer Use:     SMS/OS (0x44)
Watchdog Timer Is:      Started/Running
Watchdog Timer Actions: Power Cycle (0x03)
Pre-timeout interval:   0 seconds
Timer Expiration Flags: 0x00
Initial Countdown:      10 sec
Present Countdown:      10 sec

Watchdog Timer Use:     SMS/OS (0x44)
Watchdog Timer Is:      Started/Running
Watchdog Timer Actions: Power Cycle (0x03)
Pre-timeout interval:   0 seconds
Timer Expiration Flags: 0x00
Initial Countdown:      10 sec
Present Countdown:      9 sec

Watchdog Timer Use:     SMS/OS (0x44)
Watchdog Timer Is:      Started/Running
Watchdog Timer Actions: Power Cycle (0x03)
Pre-timeout interval:   0 seconds
Timer Expiration Flags: 0x00
Initial Countdown:      10 sec
Present Countdown:      9 sec

Watchdog Timer Use:     SMS/OS (0x44)
Watchdog Timer Is:      Started/Running
Watchdog Timer Actions: Power Cycle (0x03)
Pre-timeout interval:   0 seconds
Timer Expiration Flags: 0x00
Initial Countdown:      10 sec
Present Countdown:      9 sec

Watchdog Timer Use:     SMS/OS (0x44)
Watchdog Timer Is:      Started/Running
Watchdog Timer Actions: Power Cycle (0x03)
Pre-timeout interval:   0 seconds
Timer Expiration Flags: 0x00
Initial Countdown:      10 sec
Present Countdown:      9 sec

Watchdog Timer Use:     SMS/OS (0x44)
Watchdog Timer Is:      Started/Running
Watchdog Timer Actions: Power Cycle (0x03)
Pre-timeout interval:   0 seconds
Timer Expiration Flags: 0x00
Initial Countdown:      10 sec
Present Countdown:      9 sec



On the second node, however, the command timed out and the system reset shortly thereafter:
Code:
[admin@kvm2 ~]# while [ 1 -eq 1 ]; do ipmitool mc watchdog get;
> sleep 3; done
Get Watchdog Timer command failed: Unspecified error



Herewith /var/log/syslog:
Code:
Jul  3 12:06:00 kvm2 systemd[1]: Starting Proxmox VE replication runner...
Jul  3 12:06:00 kvm2 systemd[1]: pvesr.service: Succeeded.
Jul  3 12:06:00 kvm2 systemd[1]: Started Proxmox VE replication runner.
Jul  3 12:06:01 kvm2 CRON[5609]: (root) CMD (run-parts /etc/cron.2minutes)
Jul  3 12:07:00 kvm2 systemd[1]: Starting Proxmox VE replication runner...
Jul  3 12:07:00 kvm2 systemd[1]: pvesr.service: Succeeded.
Jul  3 12:07:00 kvm2 systemd[1]: Started Proxmox VE replication runner.
Jul  3 12:07:25 kvm2 systemd[1]: Started Session 9 of user root.
Jul  3 12:08:00 kvm2 systemd[1]: Starting Proxmox VE replication runner...
Jul  3 12:08:00 kvm2 systemd[1]: pvesr.service: Succeeded.
Jul  3 12:08:00 kvm2 systemd[1]: Started Proxmox VE replication runner.
Jul  3 12:08:09 kvm2 watchdog-mux[858]: watchdog update failed: Invalid argument
Jul  3 12:08:09 kvm2 kernel: [  490.557012] ipmi_si IPI0001:00: IPMI message handler: BMC returned incorrect response, expected netfn 7 cmd 22, got netfn 7 cmd 1
Jul  3 12:08:09 kvm2 kernel: [  490.557023] IPMI Watchdog: response: Error ff on cmd 22
Jul  3 12:08:09 kvm2 kernel: [  490.580243] ipmi_si IPI0001:00: IPMI message handler: BMC returned incorrect response, expected netfn 2d cmd 0, got netfn 7 cmd 22
Jul  3 12:08:09 kvm2 kernel: [  490.609763] ipmi_si IPI0001:00: IPMI message handler: BMC returned incorrect response, expected netfn 7 cmd 25, got netfn 2d cmd 0
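
As a stopgap while the BMC misbehaves, it may be safer to point HA fencing back at the kernel softdog instead of the IPMI watchdog; assuming the IPMI watchdog was enabled via /etc/default/pve-ha-manager, the change would be (a sketch, untested here):
Code:
# /etc/default/pve-ha-manager  (sketch)
WATCHDOG_MODULE=softdog
# reboot afterwards so watchdog-mux picks up the module change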



I've also noticed that ipmi-sel doesn't work on hosts where I'm sure it did on 5.4. This host doesn't exhibit the above behaviour and is in a stable 5.11 cluster:
Code:
[admin@kvm1g ~]# ipmi-sel
Caching SDR repository information: /root/.freeipmi/sdr-cache/sdr-cache-kvm1g.localhost
Caching SDR record 110 of 110 (current record ID 110)
ipmi_sel_parse: internal IPMI error
[admin@kvm1g ~]# ipmitool mc watchdog get
Watchdog Timer Use:     SMS/OS (0x44)
Watchdog Timer Is:      Started/Running
Watchdog Timer Actions: Power Cycle (0x03)
Pre-timeout interval:   0 seconds
Timer Expiration Flags: 0x00
Initial Countdown:      10 sec
Present Countdown:      9 sec



On a related note, any idea why I can't simply change the default GRUB_DEFAULT=0 in /etc/default/grub to 1?
The following is a pain to do on a mobile device... ;)
Code:
[admin@kvm2 ~]# grep menu /boot/grub/grub.cfg
if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
  menuentry_id_option=""
export menuentry_id_option
    set timeout_style=menu
set menu_color_normal=cyan/blue
set menu_color_highlight=white/blue
menuentry 'Proxmox Virtual Environment GNU/Linux' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-b35f2be9-d2ee-4685-ab0f-0272d9b86ca5' {
submenu 'Advanced options for Proxmox Virtual Environment GNU/Linux' $menuentry_id_option 'gnulinux-advanced-b35f2be9-d2ee-4685-ab0f-0272d9b86ca5' {
        menuentry 'Proxmox Virtual Environment GNU/Linux, with Linux 5.11.21-1-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.11.21-1-pve-advanced-b35f2be9-d2ee-4685-ab0f-0272d9b86ca5' {
        menuentry 'Proxmox Virtual Environment GNU/Linux, with Linux 5.4.124-1-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.4.124-1-pve-advanced-b35f2be9-d2ee-4685-ab0f-0272d9b86ca5' {
        menuentry 'Proxmox Virtual Environment GNU/Linux, with Linux 5.4.119-1-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.4.119-1-pve-advanced-b35f2be9-d2ee-4685-ab0f-0272d9b86ca5' {
menuentry "Memory test (memtest86+)" {
menuentry "Memory test (memtest86+, serial console 115200)" {
menuentry "Memory test (memtest86+, experimental multiboot)" {
menuentry "Memory test (memtest86+, serial console 115200, experimental multiboot)" {
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change

[admin@kvm2 ~]# vi /etc/default/grub; update-grub
  GRUB_DEFAULT="gnulinux-simple-b35f2be9-d2ee-4685-ab0f-0272d9b86ca5>gnulinux-5.4.124-1-pve-advanced-b35f2be9-d2ee-4685-ab0f-0272d9b86ca5"
                <submenu identifier> > <menu identifier>
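
(I suspect GRUB_DEFAULT=1 doesn't pick the 5.4 kernel because entry 1 of the top-level menu is the "Advanced options" submenu itself, not a kernel.) An alternative to hand-typing the long identifiers, assuming a standard GRUB setup, is the saved-default mechanism:
Code:
# /etc/default/grub  (sketch)
GRUB_DEFAULT=saved

# regenerate grub.cfg, then select the entry once via submenu>entry identifiers:
update-grub
grub-set-default 'gnulinux-advanced-b35f2be9-d2ee-4685-ab0f-0272d9b86ca5>gnulinux-5.4.124-1-pve-advanced-b35f2be9-d2ee-4685-ab0f-0272d9b86ca5'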
 
Yesterday evening (around 19:00) I installed 5.11. No issues so far. The only thing that is noticeable is the increase in CPU usage (day average).


[Attached screenshots: Host, VM1, VM2, VM3 and Container CPU usage graphs]
 
Hi, I've been using this to test AVIC.

Could you please compile a linux-perf 5.11 tool to put in the repo as well to match the kernel? Thanks.
 
Could you please compile a linux-perf 5.11 tool to put in the repo as well to match the kernel? Thanks.
That's already done; you can install it with: apt install linux-tools-5.11
 
Hi
Please, will it be possible to remove the "workaround for Debian bug #807000" in a future release?
It seems this is no longer necessary, as the bug was fixed in initramfs-tools v1.20 and later, and the current version in PVE 7 is v1.40.
Having nvme-core as a built-in module prevents loading it dynamically, which is needed by the OFED drivers for Mellanox cards.
We changed "-e CONFIG_BLK_DEV_NVME" to "-m" in debian/rules, compiled, reinstalled the kernel and ran update-initramfs.
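In diff form, the change looks roughly like this (a sketch; the exact surrounding lines in debian/rules may differ between kernel releases):
Code:
-    -e CONFIG_BLK_DEV_NVME \
+    -m CONFIG_BLK_DEV_NVME \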
Our boot disks are NVMe disks, configured as ZFS RAID in EFI mode. No problem on reboot.
Thank you
 
Having nvme-core as built-in module prevents to load it dynamically, which is needed by OFED drivers for Mellanox cards.
Seems like a bug in the Mellanox drivers, though; they should be able to handle built-in modules.

But anyhow, you're right: the workaround is obsolete and I see no reason against reverting it - will do for a future kernel build.
 
Hello

I have an HP ML110 G6 server and I have installed Proxmox Backup Server v1.0 with all the latest updates.

I updated the server to v2.0 and now I do not have a LAN connection.

After some testing I reached the point where, if Proxmox is started with the 5.4.128-1-pve kernel, it works alright, but if it starts with pve-kernel-5.11.22-4-pve it does not have any LAN.

Is there any fix for this?

Best regards
 
This old thread was originally about the 5.11 kernel under Proxmox VE 6.x; you're using a different product based on a different major release, so opening a new thread would be better for such cases.

Anyhow, it would be good to have some more information about the HW and the failure state of the network.

In the simplest case, only the network interface got renamed, and you'd just need to update the network configuration to match. We observed this for some Mellanox cards: they gained the "PCI functions" feature, which resulted in the kernel using a new naming scheme (this is noted in the "Known Issues" part of the major changelog).

It could also indeed be a regression with the new kernel + NIC driver and the model in use by your system.
Btw., this is a pretty old system, released ~12 years ago...

So, we'd need to have at least the following information:

  • network config (censor any public IPs): cat /etc/network/interfaces
  • network status: ip addr
  • info about the HW/Driver in use: lspci -knn
    • from both: a boot with 5.4 and one with 5.11
Additionally, it would be good if you could check the journal or dmesg for any NIC/PCI/HW related error message after booting the 5.11 kernel.
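
If it does turn out to be a rename, one common workaround (a general sketch, not a confirmed fix for this NIC) is pinning the old name with a systemd link file that matches the card's MAC address:
Code:
# /etc/systemd/network/10-ens1.link  (sketch; substitute the real MAC)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=ens1
# afterwards: update-initramfs -u && reboot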
 
Thank you for your quick reply.

Below are the details you requested. Also, we did not detect any errors in the log files after booting with the 5.11 kernel.

cat /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

auto ens1
iface ens1 inet static
address 192.168.0.12/24
gateway 192.168.0.21

ip addr:
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 78:e7:d1:f4:cc:c8 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.12/24 scope global ens1
valid_lft forever preferred_lft forever
inet6 2a02:587:b05:38ce:7ae7:d1ff:fef4:ccc8/64 scope global dynamic mngtmpaddr
valid_lft 604779sec preferred_lft 86379sec
inet6 fe80::7ae7:d1ff:fef4:ccc8/64 scope link
valid_lft forever preferred_lft forever

lspci -knn - under the 5.4 kernel:
Code:
00:00.0 Host bridge [0600]: Intel Corporation Core Processor DMI [8086:d130] (rev 11)
Subsystem: Hewlett-Packard Company Core Processor DMI [103c:3318]
00:03.0 PCI bridge [0604]: Intel Corporation Core Processor PCI Express Root Port 1 [8086:d138] (rev 11)
Kernel driver in use: pcieport
00:08.0 System peripheral [0880]: Intel Corporation Core Processor System Management Registers [8086:d155] (rev 11)
00:08.1 System peripheral [0880]: Intel Corporation Core Processor Semaphore and Scratchpad Registers [8086:d156] (rev 11)
00:08.2 System peripheral [0880]: Intel Corporation Core Processor System Control and Status Registers [8086:d157] (rev 11)
00:08.3 System peripheral [0880]: Intel Corporation Core Processor Miscellaneous Registers [8086:d158] (rev 11)
00:10.0 System peripheral [0880]: Intel Corporation Core Processor QPI Link [8086:d150] (rev 11)
00:10.1 System peripheral [0880]: Intel Corporation Core Processor QPI Routing and Protocol Registers [8086:d151] (rev 11)
00:1a.0 USB controller [0c03]: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller [8086:3b3c] (rev 05)
Subsystem: Hewlett-Packard Company 5 Series/3400 Series Chipset USB2 Enhanced Host Controller [103c:3118]
Kernel driver in use: ehci-pci
Kernel modules: ehci_pci
00:1c.0 PCI bridge [0604]: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 [8086:3b42] (rev 05)
Kernel driver in use: pcieport
00:1c.1 PCI bridge [0604]: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 [8086:3b44] (rev 05)
Kernel driver in use: pcieport
00:1c.2 PCI bridge [0604]: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 3 [8086:3b46] (rev 05)
Kernel driver in use: pcieport
00:1c.3 PCI bridge [0604]: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 4 [8086:3b48] (rev 05)
Kernel driver in use: pcieport
00:1c.4 PCI bridge [0604]: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5 [8086:3b4a] (rev 05)
Kernel driver in use: pcieport
00:1d.0 USB controller [0c03]: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller [8086:3b34] (rev 05)
Subsystem: Hewlett-Packard Company 5 Series/3400 Series Chipset USB2 Enhanced Host Controller [103c:3118]
Kernel driver in use: ehci-pci
Kernel modules: ehci_pci
00:1e.0 PCI bridge [0604]: Intel Corporation 82801 PCI Bridge [8086:244e] (rev a5)
00:1f.0 ISA bridge [0601]: Intel Corporation 3420 Chipset LPC Interface Controller [8086:3b14] (rev 05)
Subsystem: Hewlett-Packard Company 3420 Chipset LPC Interface Controller [103c:3118]
Kernel driver in use: lpc_ich
Kernel modules: lpc_ich
00:1f.2 SATA controller [0106]: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller [8086:3b22] (rev 05)
Subsystem: Hewlett-Packard Company 5 Series/3400 Series Chipset 6 port SATA AHCI Controller [103c:3118]
Kernel driver in use: ahci
Kernel modules: ahci
00:1f.3 SMBus [0c05]: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller [8086:3b30] (rev 05)
Subsystem: Hewlett-Packard Company 5 Series/3400 Series Chipset SMBus Controller [103c:3318]
Kernel driver in use: i801_smbus
Kernel modules: i2c_i801
1c:00.0 VGA compatible controller [0300]: Matrox Electronics Systems Ltd. MGA G200e [Pilot] ServerEngines (SEP1) [102b:0522] (rev 02)
Subsystem: Hewlett-Packard Company ProLiant DL140 G3 [103c:31fa]
Kernel driver in use: mgag200
Kernel modules: mgag200
1e:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5723 Gigabit Ethernet PCIe [14e4:165b] (rev 10)
Subsystem: Hewlett-Packard Company NC107i Integrated PCI Express Gigabit Server Adapter [103c:705d]
Kernel driver in use: tg3
Kernel modules: tg3

lspci -knn - under the 5.11 kernel:
Code:
00:00.0 Host bridge [0600]: Intel Corporation Core Processor DMI [8086:d130] (rev 11)
Subsystem: Hewlett-Packard Company Core Processor DMI [103c:3318]
00:03.0 PCI bridge [0604]: Intel Corporation Core Processor PCI Express Root Port 1 [8086:d138] (rev 11)
Kernel driver in use: pcieport
00:08.0 System peripheral [0880]: Intel Corporation Core Processor System Management Registers [8086:d155] (rev 11)
00:08.1 System peripheral [0880]: Intel Corporation Core Processor Semaphore and Scratchpad Registers [8086:d156] (rev 11)
00:08.2 System peripheral [0880]: Intel Corporation Core Processor System Control and Status Registers [8086:d157] (rev 11)
00:08.3 System peripheral [0880]: Intel Corporation Core Processor Miscellaneous Registers [8086:d158] (rev 11)
00:10.0 System peripheral [0880]: Intel Corporation Core Processor QPI Link [8086:d150] (rev 11)
00:10.1 System peripheral [0880]: Intel Corporation Core Processor QPI Routing and Protocol Registers [8086:d151] (rev 11)
00:1a.0 USB controller [0c03]: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller [8086:3b3c] (rev 05)
Subsystem: Hewlett-Packard Company 5 Series/3400 Series Chipset USB2 Enhanced Host Controller [103c:3118]
Kernel driver in use: ehci-pci
Kernel modules: ehci_pci
00:1c.0 PCI bridge [0604]: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 [8086:3b42] (rev 05)
Kernel driver in use: pcieport
00:1c.1 PCI bridge [0604]: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 [8086:3b44] (rev 05)
Kernel driver in use: pcieport
00:1c.2 PCI bridge [0604]: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 3 [8086:3b46] (rev 05)
Kernel driver in use: pcieport
00:1c.3 PCI bridge [0604]: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 4 [8086:3b48] (rev 05)
Kernel driver in use: pcieport
00:1c.4 PCI bridge [0604]: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5 [8086:3b4a] (rev 05)
Kernel driver in use: pcieport
00:1d.0 USB controller [0c03]: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller [8086:3b34] (rev 05)
Subsystem: Hewlett-Packard Company 5 Series/3400 Series Chipset USB2 Enhanced Host Controller [103c:3118]
Kernel driver in use: ehci-pci
Kernel modules: ehci_pci
00:1e.0 PCI bridge [0604]: Intel Corporation 82801 PCI Bridge [8086:244e] (rev a5)
00:1f.0 ISA bridge [0601]: Intel Corporation 3420 Chipset LPC Interface Controller [8086:3b14] (rev 05)
Subsystem: Hewlett-Packard Company 3420 Chipset LPC Interface Controller [103c:3118]
Kernel driver in use: lpc_ich
Kernel modules: lpc_ich
00:1f.2 SATA controller [0106]: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller [8086:3b22] (rev 05)
Subsystem: Hewlett-Packard Company 5 Series/3400 Series Chipset 6 port SATA AHCI Controller [103c:3118]
Kernel driver in use: ahci
Kernel modules: ahci
00:1f.3 SMBus [0c05]: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller [8086:3b30] (rev 05)
Subsystem: Hewlett-Packard Company 5 Series/3400 Series Chipset SMBus Controller [103c:3318]
Kernel driver in use: i801_smbus
Kernel modules: i2c_i801
1c:00.0 VGA compatible controller [0300]: Matrox Electronics Systems Ltd. MGA G200e [Pilot] ServerEngines (SEP1) [102b:0522] (rev 02)
Subsystem: Hewlett-Packard Company ProLiant DL140 G3 [103c:31fa]
Kernel driver in use: mgag200
Kernel modules: mgag200
1e:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5723 Gigabit Ethernet PCIe [14e4:165b] (rev 10)
Subsystem: Hewlett-Packard Company NC107i Integrated PCI Express Gigabit Server Adapter [103c:705d]
Kernel driver in use: tg3
Kernel modules: tg3
 
