So, I'm following the instructions found here: https://www.reddit.com/r/Proxmox/comments/cm81tc/tutorial_enabling_sriov_for_intel_nic_x550t2_on/
I've updated my GRUB config and /etc/modules file, but when I try to verify that the changes took effect, dmesg doesn't return anything.
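For context, here's roughly what I understood the setup and verification to look like from the tutorial (paraphrasing from that guide, so the exact parameters may differ):

Bash:
# /etc/default/grub gets kernel parameters along these lines:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# Then run update-grub, reboot, and IOMMU messages should show up in dmesg:
dmesg | grep -i -e DMAR -e IOMMU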
When I run update-grub, I get the following:
Bash:
update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.2.16-9-pve
Found initrd image: /boot/initrd.img-6.2.16-9-pve
Found linux image: /boot/vmlinuz-6.2.16-5-pve
Found initrd image: /boot/initrd.img-6.2.16-5-pve
Found linux image: /boot/vmlinuz-6.2.16-4-pve
Found initrd image: /boot/initrd.img-6.2.16-4-pve
Found linux image: /boot/vmlinuz-5.15.108-1-pve
Found initrd image: /boot/initrd.img-5.15.108-1-pve
Found linux image: /boot/vmlinuz-5.15.74-1-pve
Found initrd image: /boot/initrd.img-5.15.74-1-pve
Found memtest86+ 64bit EFI image: /boot/memtest86+x64.efi
Adding boot menu entry for UEFI Firmware Settings ...
done
When I run update-initramfs, I get:
Bash:
update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-6.2.16-9-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-6.2.16-5-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-6.2.16-4-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.15.108-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.15.74-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
Since there appear to be multiple kernels installed, I ran the Kernel Cleanup script found at https://tteck.github.io/Proxmox/. It initially listed the old kernels, and I had it remove them; running it again now shows that only 6.2.16-9 is installed. Yet when I re-run the commands above, they still find all the old kernels.
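In case it's relevant, I believe the installed kernel packages and the images update-grub scans can be checked directly like this (not sure this matches exactly what the cleanup script looks at):

Bash:
# List installed Proxmox kernel packages:
dpkg -l 'pve-kernel-*' | grep ^ii
# update-grub picks up whatever kernel images are left in /boot:
ls /boot/vmlinuz-*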
Digging deeper, I ran proxmox-boot-tool:
Bash:
proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.
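My understanding (which may be wrong) is that a missing /etc/kernel/proxmox-boot-uuids just means proxmox-boot-tool was never initialized and GRUB is managing the boot directly. I believe the boot mode can be confirmed like this:

Bash:
# This directory exists only when the system booted in UEFI mode:
ls /sys/firmware/efi
# Shows the current EFI boot entries, if booted via UEFI:
efibootmgr -v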
Running lsblk, I can see the "/" mount point:
Bash:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 7.3T 0 disk
sdb 8:16 0 7.3T 0 disk
sdc 8:32 0 465.8G 0 disk
├─sdc1 8:33 0 1007K 0 part
├─sdc2 8:34 0 512M 0 part /boot/efi
└─sdc3 8:35 0 465.3G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
└─pve-root 253:1 0 457.3G 0 lvm /
nvme2n1 259:0 0 1.8T 0 disk
├─nvme2tb-nvme2tb_tmeta 253:4 0 15.8G 0 lvm
│ └─nvme2tb-nvme2tb-tpool 253:11 0 1.8T 0 lvm
│ ├─nvme2tb-nvme2tb 253:12 0 1.8T 1 lvm
│ ├─nvme2tb-vm--100--disk--0 253:13 0 4M 0 lvm
│ ├─nvme2tb-vm--100--disk--1 253:14 0 32G 0 lvm
│ ├─nvme2tb-vm--101--disk--0 253:15 0 128G 0 lvm
│ ├─nvme2tb-vm--103--disk--0 253:16 0 4M 0 lvm
│ ├─nvme2tb-vm--103--disk--1 253:17 0 127G 0 lvm
│ ├─nvme2tb-vm--105--disk--0 253:18 0 64G 0 lvm
│ ├─nvme2tb-vm--106--disk--0 253:19 0 128G 0 lvm
│ ├─nvme2tb-vm--106--state--AppRestore 253:20 0 24.5G 0 lvm
│ ├─nvme2tb-vm--109--disk--0 253:21 0 256G 0 lvm
│ ├─nvme2tb-vm--102--disk--0 253:22 0 64G 0 lvm
│ ├─nvme2tb-vm--102--disk--1 253:23 0 256G 0 lvm
│ ├─nvme2tb-vm--108--disk--0 253:24 0 64G 0 lvm
│ ├─nvme2tb-vm--115--disk--0 253:25 0 128G 0 lvm
│ ├─nvme2tb-vm--111--disk--0 253:26 0 128G 0 lvm
│ └─nvme2tb-vm--112--disk--0 253:27 0 128G 0 lvm
└─nvme2tb-nvme2tb_tdata 253:5 0 1.8T 0 lvm
└─nvme2tb-nvme2tb-tpool 253:11 0 1.8T 0 lvm
├─nvme2tb-nvme2tb 253:12 0 1.8T 1 lvm
├─nvme2tb-vm--100--disk--0 253:13 0 4M 0 lvm
├─nvme2tb-vm--100--disk--1 253:14 0 32G 0 lvm
├─nvme2tb-vm--101--disk--0 253:15 0 128G 0 lvm
├─nvme2tb-vm--103--disk--0 253:16 0 4M 0 lvm
├─nvme2tb-vm--103--disk--1 253:17 0 127G 0 lvm
├─nvme2tb-vm--105--disk--0 253:18 0 64G 0 lvm
├─nvme2tb-vm--106--disk--0 253:19 0 128G 0 lvm
├─nvme2tb-vm--106--state--AppRestore 253:20 0 24.5G 0 lvm
├─nvme2tb-vm--109--disk--0 253:21 0 256G 0 lvm
├─nvme2tb-vm--102--disk--0 253:22 0 64G 0 lvm
├─nvme2tb-vm--102--disk--1 253:23 0 256G 0 lvm
├─nvme2tb-vm--108--disk--0 253:24 0 64G 0 lvm
├─nvme2tb-vm--115--disk--0 253:25 0 128G 0 lvm
├─nvme2tb-vm--111--disk--0 253:26 0 128G 0 lvm
└─nvme2tb-vm--112--disk--0 253:27 0 128G 0 lvm
nvme1n1 259:1 0 931.5G 0 disk
├─nvme980--1tb-nvme980--1tb_tmeta 253:2 0 9.3G 0 lvm
│ └─nvme980--1tb-nvme980--1tb-tpool 253:6 0 912.8G 0 lvm
│ ├─nvme980--1tb-nvme980--1tb 253:7 0 912.8G 1 lvm
│ ├─nvme980--1tb-vm--107--disk--0 253:8 0 256G 0 lvm
│ ├─nvme980--1tb-vm--100--disk--0 253:9 0 150G 0 lvm
│ └─nvme980--1tb-vm--110--disk--0 253:10 0 256G 0 lvm
└─nvme980--1tb-nvme980--1tb_tdata 253:3 0 912.8G 0 lvm
└─nvme980--1tb-nvme980--1tb-tpool 253:6 0 912.8G 0 lvm
├─nvme980--1tb-nvme980--1tb 253:7 0 912.8G 1 lvm
├─nvme980--1tb-vm--107--disk--0 253:8 0 256G 0 lvm
├─nvme980--1tb-vm--100--disk--0 253:9 0 150G 0 lvm
└─nvme980--1tb-vm--110--disk--0 253:10 0 256G 0 lvm
nvme0n1 259:2 0 953.9G 0 disk
This made me wonder whether I messed something up during the initial install, which would suck. I found this article: https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool but I'm afraid to run anything from it, as I don't want to lose my setup with VMs and containers.
Proxmox isn't meant to be a storage server for me; its main purpose is running VMs and containers, with backups and the like offloaded to my Synology NAS. I didn't think I needed ZFS during install, as I was concerned about memory usage.
Anyway, I'm hoping someone can help me figure out how to solve the update-grub issue above so that I can continue trying to get SR-IOV working on my Intel X550-T2 Ethernet cards.
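For what it's worth, once the kernel side is sorted out, my understanding is that creating the virtual functions would look something like this (the interface name is a placeholder; I haven't gotten this far yet):

Bash:
# Replace enp1s0f0 with the actual X550 interface name from `ip link`:
echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs
# The new virtual functions should then show up as PCI devices:
lspci | grep -i "virtual function"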
Any advice would be greatly appreciated. Please let me know if you need any additional information.
Thanks,
AJ