Thanks @chris. However, that will trigger the removal of grub-pc, correct?
root@pbs:~# apt install grub-efi-amd64
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer...
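Before letting apt swap bootloaders, it's worth confirming the box is actually booted in UEFI mode; a generic check (not specific to this machine) is:

```shell
# If /sys/firmware/efi exists, the kernel was booted via UEFI and
# grub-efi-amd64 is the right package; otherwise you're on legacy
# BIOS and removing grub-pc would leave the system unbootable.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI"
else
    echo "legacy BIOS"
fi
```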
I just updated two PBS hosts to the latest versions from the enterprise repo. Both machines fail with these lines as the last output:
update-initramfs: Generating /boot/initrd.img-6.8.4-2-pve
W: No zstd in /usr/bin:/sbin:/bin, using gzip
Running hook script 'zz-proxmox-boot'..
Re-executing...
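For what it's worth, the "No zstd" line just means initramfs-tools fell back to gzip because the zstd binary isn't installed; the update itself may still have completed fine. A sketch of how to make the warning go away (run as root):

```shell
# Install the compressor initramfs-tools is looking for, then
# regenerate the initrd images so they use zstd instead of gzip.
apt install zstd
update-initramfs -u -k all
```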
Not sure the switch will help you on the IOPS side, but if you want new: Arista. For used I can highly recommend Mellanox. You can get SN2410s for around $1,500 on eBay (if you're lucky), giving you 48x25 Gbit plus 8x100 Gbit at very low latency.
We've been running these in production for...
Looking at the logs I'd say this could have something to do with it:
Jul 20 12:01:33 PVE kernel: pcieport 0000:00:1b.0: AER: Corrected error received: 0000:01:00.0
Jul 20 12:01:33 PVE kernel: nvme 0000:01:00.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
Jul 20...
I have some other issues with the 6.2 kernel and hardware. I see people downgraded to the 5.15 kernel. How does one get that on a Proxmox 8 system? This was a clean install, not an upgrade from 7.x.
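Once an older kernel package is installed, Proxmox's own boot tool can select it. A sketch (the version string below is a placeholder, and whether the 5.15 packages are still installable on a clean PVE 8 system is something to verify against the repos first):

```shell
# List the kernels proxmox-boot-tool knows about, then pin one so it
# is booted by default instead of the newest installed kernel.
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 5.15.0-0-pve   # placeholder version string
```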
I installed PVE 7.4 directly from the ISO, added an enterprise license, and ran an update/upgrade through the interface. Now I need to install vlan and ifenslave. Both attempts end in:
root@pve03:~# apt install vlan
Reading package lists... Done
Building dependency tree... Done
Reading state...
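With the output cut off it's hard to say what the actual error is, but since vlan and ifenslave come from plain Debian rather than the Proxmox repos, a quick diagnostic sketch is to check whether apt can see them at all:

```shell
# Show candidate versions and which repository (if any) provides them;
# "Candidate: (none)" suggests the Debian repos are missing from sources.
apt policy vlan ifenslave
cat /etc/apt/sources.list          # confirm Debian main is configured
</imports>
```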
First you need to identify your bottleneck: use tools like top and, more importantly, iostat while the workload is running. Run iostat -x 1 to see how heavily loaded your disks are. My bet is that iowait is high while %util on the spinning disks sits at 100%.
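As a concrete sketch of that workflow (assuming the sysstat package, which provides iostat on Debian/Proxmox, is installed):

```shell
# Extended device statistics, refreshed every second.
iostat -x 1
# Columns worth watching (names vary slightly between sysstat versions):
#   %iowait - CPU time stalled waiting on I/O; high values mean disk-bound
#   %util   - device busy percentage; spinners pinned near 100% while
#             SSDs/NVMe sit idle is the classic sign the spinning disks
#             are the bottleneck
```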
AFAIK you can't specify the backup/storage interface directly. What you can do is put your storage network in a different IP range than management, and then simply connect over the non-management IP range.
Don't forget to adjust migration settings to use the storage network as well, under Datacenter ->...
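To illustrate the separate-range idea, a dedicated storage interface stanza in /etc/network/interfaces could look like this (interface name and addresses are made up for the example):

```
auto ens19
iface ens19 inet static
    address 10.10.10.11/24
    # storage/migration network: no gateway here, so management
    # traffic keeps going out over the default bridge (e.g. vmbr0)
```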
We've had issues with LLDP and these cards; it turned out the card had an LLDP client in its firmware. Disabling that solved a lot of weird issues like packet loss or, in some cases, the link simply not working.