Thanks
I'm not looking to move the VMs/LXCs away from LVM-thin storage.
What's confusing me is how I can have both that and some raw ext4 space (on the same SSD) without having to declare a fixed size for it.
Unless the solution is simply to add another LV manually (on the...
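For reference, the manual-LV approach I'm wondering about would look roughly like this — a sketch only, assuming the default Proxmox volume group is named `pve`; the LV name, size, and mount point are placeholders:

```shell
# Sketch: carve a fixed-size LV out of the existing VG alongside the thin pool.
# Names and sizes are illustrative - check 'vgs' / 'lvs' for your actual layout.
lvcreate -L 500G -n shared pve        # new plain LV in VG 'pve'
mkfs.ext4 /dev/pve/shared             # format as raw ext4
mkdir -p /mnt/shared
mount /dev/pve/shared /mnt/shared     # then bind-mount this into the LXCs
```

The obvious downside is exactly what I'd like to avoid: the 500G is declared up front rather than allocated dynamically.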
I have a 4TB disk I'd like to use for VMs/LXCs, as well as just local storage (mostly to mount to the LXCs - as a form of shared storage).
Ideally I don't want to have to arbitrarily partition the drive - just dynamically let the VMs/LXCs use as much as they need, and the rest to be...
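For context, the VM/LXC side of this is already dynamic: LVM-thin volumes only consume pool space as data is written, and their combined virtual size can exceed the pool. A sketch, with illustrative names:

```shell
# Sketch: thin volumes are allocated on write, and may be overprovisioned -
# their total virtual size can exceed the pool's physical size.
lvcreate -L 3.5T -T pve/data                  # thin pool on the 4TB disk
lvcreate -V 2T -T pve/data -n vm-100-disk-0   # thin volume; uses space only as written
lvs pve                                       # 'Data%' column shows actual pool usage
```

It's the "raw local storage" half that doesn't obviously fit this model.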
Thanks - I've posted on the thread in the second one you linked https://forum.proxmox.com/threads/opt-in-linux-6-5-kernel-with-zfs-2-2-for-proxmox-ve-8-available-on-test-no-subscription.135635/page-11#post-611280
Seems like it's not resolved, but not a massive issue for me as I don't really...
When you say "stuck", can you still SSH into the box, and access the web gui? I initially thought it was hanging, but realised it was just the console.
I've browsed through the suggestions in this thread, but looks like there's no solution yet?
After upgrading from 8.0 to 8.1, I only got a couple of log lines in the console after rebooting. It stops at the message about initialising ramdisk.
I therefore assumed it had hung, rebooted fine in a 6.2 kernel, and checked previous boot logs, only to find no errors.
I then rebooted with the...
Thanks for response.
1. The rombar differences were kind of random - neither actually needs it (08:00 is a UEFI GPU, 0a:00.3 is a USB controller). In case it somehow made a difference, I just tried it, and I get the same results
2. Yes I specifically tried turning off the tablet pointer on...
Host: Proxmox 8.0.4, Ryzen 5700x, 64GB
Both Guest VMs: Windows 10, all latest updates. 32 GB assigned. Both with 'host' CPU type. Both same virtual SSD settings
When I run one of the VMs and let it settle down with Task Manager open, I see it idling as expected. Looking at 'top' on the host I...
Came to say the same thing - it's a bit confusing when the 7.x installer worked fine, and then with no hardware changes the 8.x installer appears to just hang at the "loading drivers" stage.
Probably - although when on in BIOS, it enables some sub-options, including "Other PCI Device ROM Priority", which is set to UEFI only. That sounds like an attempt, at least, to ensure the UEFI firmware is still available even if in CSM mode.
Ok - I can answer my own question - somehow my BIOS flipped CSM support on (which I believe means legacy rather than UEFI boot).
Now I've turned that off again, I see the TianoCore boot screen again, and the "No more image in the PCI ROM" message is gone.
I don't really understand what that...
EDIT - Fixed by turning off CSM support (somehow it got turned on randomly in the BIOS). However, I still want to understand the behaviour.
Original Post:
I've suddenly started noticing, when booting a Windows 10 VM, that I don't see the Proxmox/TianoCore boot screen.
I do however still see Windows once...
I usually run Ubuntu Server in my LXC containers, but I want to try Podman, which really only works well on a Red Hat/Fedora base.
Even though this is working, when I run podman commands in the LXC guest, I see two errors/warnings in the host logs:
overlayfs: conflicting options...
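One tweak I've seen suggested for overlayfs issues in unprivileged LXC guests - hedged, as I haven't confirmed it silences these particular warnings - is switching Podman to fuse-overlayfs inside the guest via `/etc/containers/storage.conf`:

```shell
# Sketch only - run inside the LXC guest, then verify with 'podman info'.
# Keys are per containers-storage.conf(5); back up any existing file first.
cat > /etc/containers/storage.conf <<'EOF'
[storage]
driver = "overlay"

[storage.options.overlay]
mount_program = "/usr/bin/fuse-overlayfs"
EOF
```

This avoids the kernel overlayfs mount path entirely, at some performance cost.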
@TheHellSite - just wanted to say thanks; this worked for me, with one niggle: the 'noauto' option means the entry is skipped by `mount -a`, which confused me.
To mount it manually, run `mount <mnt dir>`
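For anyone else hitting this, the shape of it is below - the share path and options are illustrative, not the exact entry from the thread:

```shell
# Illustrative /etc/fstab entry - 'noauto' means it is skipped at boot
# and by 'mount -a':
#   //nas/share  /mnt/pve/share  cifs  credentials=/root/.smbcred,noauto,_netdev  0  0

# so mount it explicitly by its mount point instead:
mount /mnt/pve/share
```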
I'm seeing the same "BugCheck" log upon restart; however, I only noticed it in passing. It doesn't actually seem to cause a problem.
Did you find out what's wrong?