To anyone in the future looking to set the default kernel version when using the grub bootloader, here's how:
Run the command 'grep menu /boot/grub/grub.cfg' and you'll get the following output:
root@hostname:~# grep menu /boot/grub/grub.cfg
if [ x"${feature_menuentry_id}" = xy ]; then...
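That grep will spit out the submenu and menuentry lines along with their IDs. Roughly speaking (the IDs below are placeholders rather than values copied from a real system), you then point GRUB_DEFAULT in /etc/default/grub at the entry you want, using the "submenu-id>menuentry-id" format, and regenerate the config:
GRUB_DEFAULT="gnulinux-advanced-SOMEUUID>gnulinux-5.0.21-5-pve-advanced-SOMEUUID"
root@hostname:~# update-grub
After a reboot the selected kernel should come up by default. This only applies to hosts that actually boot via grub, of course.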
I'm trying to get set up to analyze crash dumps on one of my Proxmox servers to try to identify the cause of an intermittent hard lockup issue. I have a test crash dump file, but it appears I now need a debug version of the kernel to satisfy the "namelist" argument that the 'crash' tool requires. From...
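For anyone else going down this path, my understanding is that 'crash' gets invoked as crash NAMELIST DUMPFILE, where the namelist is a vmlinux built with debug symbols that matches the kernel that crashed. Something along these lines, with the paths below being placeholders for wherever your debug kernel and dump actually live:
root@hostname:~# crash /usr/lib/debug/boot/vmlinux-5.3.10-1-pve /var/crash/dump.file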
Does using a virtio NIC on the VMs make any difference? I think that may have fixed it in my case, but it's still a bit too soon to tell.
Also, I think, although I could be wrong, that the E1000 NICs just use drivers included with Windows and don't have any applicable drivers included with...
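For anyone who wants to try the same switch, the NIC model is just the netX line in the VM config (the VM ID and MAC below are made up, and the guest needs the virtio-win drivers installed before the change):
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
as opposed to the e1000 version:
net0: e1000=DE:AD:BE:EF:00:01,bridge=vmbr0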
I wish I could check that, since the behavior does look like that. Unfortunately, due to the arrangement of these guests, with GPUs passed through and the VNC display disabled from within the guest, I can't actually run commands or check on anything inside the guest without rebooting it into a...
I believe I might be running into the same issue. Somewhat randomly, I'll have Windows guests lose network connectivity completely. I can't see inside the guest, but running tcpdump on the VM's tap interface tapXXi0 (where XX is the VM ID) shows the guest sending repeated ARP requests for the gateway IP...
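For reference, this is roughly the capture I run on the host (using VM ID 100 as an example):
root@hostname:~# tcpdump -eni tap100i0 arp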
I believe I may be seeing a similar issue. Upon rebooting a guest Windows VM with GPU passthrough, I sometimes see the host lock up with soft lockup and hard lockup messages. Nothing is recorded in any log file that I can find, though. Each time, I have rebooted the host to restore operation...
Thanks for the input. I would think that since each GPU is on its own root port there wouldn't be any I/O space limitations, but I haven't really dug into it much.
I did try what you suggested regarding specifying multiple pcie devices on one line but unfortunately it appears the Nvidia driver...
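For context, the multi-device syntax I tried was shaped like this (the addresses are just examples); as I understand it, devices listed together on one line end up as functions of a single guest device instead of each getting its own root port:
hostpci0: 01:00.0;02:00.0,pcie=1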
Yes. This is running 2 x Xeon E5-2667 CPUs. I don't believe this is a hardware problem.
In fact, for the time being I am running with 8 GPUs passed through using the usual hostpci0: 01:00.0,pcie=1 lines and the last two GPUs are passed through by adding args: -device...
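To give a rough idea of the shape of that args workaround (the PCI addresses below are placeholders, not my actual ones):
args: -device vfio-pci,host=0a:00.0 -device vfio-pci,host=0b:00.0
Those extra GPUs land on whatever bus QEMU picks by default rather than on dedicated root ports like the hostpciN entries get, so I consider it a stopgap rather than the right way to do it.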
Finding/making tricky problems is a great talent of mine :)
Yes, with 9 GPUs passed through and running the 'pci' command from the EFI shell, all GPUs do show up with PCI addresses 01:00.0 - 09:00.0, like you'd expect. There are also 9 PCI bridges (presumably the PCIe Root Ports) that show up...
Hello,
I have a Proxmox system with 11 x GTX 1080 Ti GPUs in which I am attempting to create a VM with all 11 GPUs passed through to the guest. I was excited to see that one of the listed features in Proxmox 6.1 was "PCI(e) passthrough supports up to 16 PCI(e) devices" but I have run into a...
As a follow-up, setting zfs_arc_max via the kernel cmdline didn't have any effect. Knowing that the zfs_arc_max functionality was working on a stock, non-updated Proxmox 6.0 install, I tried downgrading libzfs2linux, zfs-initramfs, and zfsutils-linux from 0.8.2 to 0.8.1. This also had no...
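For reference, by "via the kernel cmdline" I mean passing the module parameter on the boot line, roughly like this (2 GiB shown only as an example value, in bytes):
GRUB_CMDLINE_LINUX_DEFAULT="quiet zfs.zfs_arc_max=2147483648"
followed by update-grub and a reboot. On a ZFS-root install that boots via systemd-boot, the equivalent goes in /etc/kernel/cmdline followed by pve-efiboot-tool refresh.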
Check what "qm showcmd 100" gives you regarding CPU flags. Make sure you aren't specifying any flags in your manually added args line that aren't normally included by the Proxmox defaults. Different CPUs support different flags, so if you ask for one that doesn't work with yours, it wouldn't surprise me if...
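If it helps, you can pull just the CPU argument out of what Proxmox would actually run with something like this (VM ID 100 as an example):
qm showcmd 100 | tr ' ' '\n' | grep -A1 '^-cpu'
and then compare that flag list with whatever you're appending in your args line.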
I haven't worked with Quadro cards but I had to set the CPU flag for hv_vendor_id=whatever as mentioned in this thread https://forum.proxmox.com/threads/modify-cpu-parameters.38686/.
Here is the line I have used before in the config file. I include a lot of the default flags from 'qm showcmd...
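The general shape of it is something like the following (not the verbatim line, and the vendor ID string itself is arbitrary):
args: -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=SOMETHING,kvm=off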
I'm fairly certain I'm seeing something similar to this. I just did a fresh install from the latest Proxmox 6.0.1 ISO, and when configuring /etc/modprobe.d/zfs.conf with a 1GB min and 2GB max and then running pve-efiboot-tool refresh, I experienced the following:
# cat /etc/modprobe.d/zfs.conf
#...
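For reference, a zfs.conf with a 1GB min and 2GB max generally looks something like this (values are in bytes):
options zfs zfs_arc_min=1073741824
options zfs zfs_arc_max=2147483648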
Now this is interesting... if I change the grub config to this the VM starts working just fine:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on hugepagesz=1G default_hugepagesz=2M"
hugeadm --explain shows that the VM is properly using the 1GB hugepages:
Mount Point Options...
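As a sanity check that doesn't rely on hugeadm, the per-size counters in sysfs tell the same story (the numbers will depend on how many 1GB pages you've reserved and how many the VM has grabbed):
root@hostname:~# cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
root@hostname:~# cat /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages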
Hello all,
I've spent the last couple of days trying to enable 1GB hugepages on one of my Proxmox nodes and I'm afraid I'm getting nowhere. The gist of my current roadblock is this:
kvm: -object...
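For context, what I'm attempting is roughly the usual recipe (the values and VM ID below are examples rather than my exact config): reserve 1GB pages at boot, then tell Proxmox to back the guest with them.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on hugepagesz=1G hugepages=64"
and in /etc/pve/qemu-server/100.conf:
hugepages: 1024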