OK, I can answer my own question: somehow my BIOS flipped CSM support on (which I believe means legacy rather than UEFI boot).
Now that I've turned it off again, I see the Tianocore boot screen, and the "No more image in the PCI ROM" message is gone.
I don't really understand what that...
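For anyone else hitting this, a quick way to confirm from inside a running Linux system whether it actually booted via UEFI or legacy BIOS/CSM (this relies on the standard kernel sysfs path, nothing Proxmox-specific):

```shell
# /sys/firmware/efi only exists when the kernel was booted via UEFI firmware.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI boot"
else
    echo "legacy (BIOS/CSM) boot"
fi
```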
EDIT - Fixed by turning off CSM support (somehow it turned itself on in the BIOS). However, I still want to understand the behaviour.
Original Post:
I've suddenly started noticing, when booting a Windows 10 VM, that I don't see the Proxmox/Tianocore boot screen.
I do however still see Windows once...
I usually run Ubuntu Server in my LXC containers, but I want to try Podman, which really only works well on a Red Hat/Fedora base.
Even though this is working, when I run podman commands in the LXC guest, I see two errors/warnings in the host logs:
overlayfs: conflicting options...
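For context, since the warning concerns overlayfs, it can help to see which storage driver Podman actually settled on inside the guest. A hedged sketch (assumes Podman is installed; the Go-template field path works on current Podman versions but may differ on older ones):

```shell
# Ask Podman which graph/storage driver it is using (e.g. "overlay" or "vfs").
if command -v podman >/dev/null 2>&1; then
    podman info --format '{{.Store.GraphDriverName}}'
else
    echo "podman not installed"
fi
```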
@TheHellSite - just wanted to say thanks; this worked for me, with one niggle: the `noauto` option means the entry is skipped by `mount -a`, which confused me.
To get around this, run `mount <mnt dir>` instead.
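To illustrate the behaviour (a sketch with made-up share and paths, not the actual entry from this thread):

```shell
# Hypothetical /etc/fstab entry using 'noauto':
#   //nas/backup  /mnt/backup  cifs  credentials=/root/.smbcred,noauto  0  0

# 'mount -a' deliberately skips entries flagged 'noauto', so the share stays unmounted:
mount -a 2>/dev/null || true

# Naming the mount point explicitly mounts it regardless of 'noauto':
mount /mnt/backup 2>/dev/null || echo "no such fstab entry on this machine"
```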
I'm seeing the same "BugCheck" log on restart, though I only noticed it in passing; it doesn't actually seem to cause a problem.
Did you find out what's wrong?
Has something changed here? I've completely wiped my Proxmox system and started again - and amazingly it all just works.
It's hard to tell what the changes are in https://git.proxmox.com/?p=pve-kernel.git;a=summary
I just upgraded my CPU from a Ryzen 5700G to a 5900X, and noticed the idle frequency rocketed from 900MHz to 2200MHz.
As far as I know, on the 5700G the 900MHz idle was there even with the default Proxmox settings, which I believe use the "performance" governor (though I don't know where that's set).
Anyway, I tried...
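For reference, the active scaling governor is visible in sysfs; this is a standard kernel cpufreq path, not anything Proxmox-specific (guarded here because cpufreq isn't always exposed, e.g. inside some VMs):

```shell
# Show the scaling governor for CPU 0 (use cpu*/cpufreq/... to list every core).
gov_file=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$gov_file" ]; then
    cat "$gov_file"    # typically "performance", "schedutil", etc.
else
    echo "cpufreq sysfs not available on this system"
fi
```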
That's not the official code/repo - I wouldn't trust a random mirror on GitHub.
Also - the 'proper' driver, as I say, has been upstreamed.
Finally, I did try to build from Realtek source, and it made no difference.
Thanks for your help @LnxBil - sorry I had to focus on something else.
The output of numastat:
numastat
node0
numa_hit 122712713
numa_miss 0
numa_foreign 0
interleave_hit 3105
local_node...
Ah, sorry, I'm confusing things - actually I tried three ways:
1) Windows host
2) Proxmox host, with the tested drive being a vfio scsi drive backed by the NVMe I'm interested in (no caching)
3) Proxmox host, with the NVMe drive passed through.
In cases 1 and 2, 4k random writes at a queue depth of 32 are good. In the...
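A 4k random-write test at queue depth 32 like the one described can be run with fio, roughly as follows (a sketch only; the filename is a placeholder and the exact flags are my assumption, not what was used in this thread):

```shell
# Hypothetical fio run: 4k random writes, QD32, direct I/O, short time-based run.
# Point --filename at a file or block device on the disk under test.
if command -v fio >/dev/null 2>&1; then
    fio --name=randwrite-qd32 --rw=randwrite --bs=4k --iodepth=32 \
        --ioengine=libaio --direct=1 --size=256M --runtime=30 --time_based \
        --filename=/path/to/testfile || echo "fio run failed"
else
    echo "fio not installed"
fi
```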