Thanks - but here's the issue:
vgs
  VG   #PV #LV #SN Attr   VSize    VFree
  data   1   1   0 wz--n-   <3.64t 376.00m
  pve    1   2   0 wz--n- <118.24g  <70.68g
I already have a volume group called "pve" - and, as I suspected, I can't rename the 'data' one:
lvm vgrename...
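(Roughly what I tried - I'm reconstructing the exact command from memory, so treat it as an approximation:)

vgrename data pve
# refused, since a VG named "pve" already exists on this node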
Thanks for your help - but I'd like to do the opposite, and make the storage on this new node match the others.
Is that not possible at the logical level, if the underlying storage is on a different SSD from the pve LVM volume?
To take a step back - how do I get it to behave the same as my other nodes - where I just went with the default of having both LVM and LVM-thin on the same disk?
Storage at the cluster level seems to want the volume group to be called "pve" everywhere? But what about the situation where the node...
it says "no such logical volume pve/local-lvm". Maybe it's just because the thin pool is called "data"?
But that happened when I created the thin pool.
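(In case it helps anyone reading later: one option I'm considering is not renaming anything, and instead registering the differently-named pool as its own storage entry restricted to this node. The storage ID, pool name and node name below are just placeholders for my setup, and I haven't tested this yet:)

pvesm add lvmthin data-thin --vgname data --thinpool data --content rootdir,images --nodes newnode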
I just added a new node to my cluster. Usually, I have both the LVM and LVM-thin storage on the same SSD.
On my new node I wanted to have one small SSD for just the Proxmox root, and a second for the VM data. So what I've ended up with is:
nvme0n1 259:0 0 3.6T 0 disk...
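(For anyone following along, the end state on that second disk is roughly equivalent to having run something like this - names taken from the vgs output above, flags an approximation of what the GUI's Disks > LVM-Thin wizard does:)

pvcreate /dev/nvme0n1
vgcreate data /dev/nvme0n1
# the wizard leaves a little headroom for thin pool metadata; 100%FREE is close enough for illustration
lvcreate --type thin-pool -l 100%FREE -n data data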
Sorry, confused now. You're saying I can't create an ext4 file system on a thin volume?
I was proposing to do this (on the default 'data' thin pool Proxmox sets up):
lvcreate --type thin --name myvol --virtualsize 100G pve/data
mkfs.ext4 /dev/pve/myvol
mkdir -p /mnt/myvol
mount /dev/pve/myvol /mnt/myvol
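(If that approach is sound, I'd presumably also want the mount to persist across reboots - something like this fstab entry, with the options just being my default guess:)

/dev/pve/myvol  /mnt/myvol  ext4  defaults  0  2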
My...
Thanks
I'm not looking to move the VMs/LXCs away from LVM-thin storage.
What's confusing me is how I can have both that, as well as some plain ext4 space (on the same SSD), without having to declare a fixed size for it up front?
Unless the solution is simply to add another LV manually (on the...
I have a 4TB disk I'd like to use for VMs/LXCs, as well as just local storage (mostly to mount to the LXCs - as a form of shared storage).
Ideally I don't want to have to arbitrarily partition the drive, just dynamically allow the VMs/LXCs to use as much as they need, and the rest to be...
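(To illustrate the "mount into the LXCs" part of that: assuming a thin LV formatted and mounted on the host at /mnt/myvol, and a container with ID 101 - both placeholders - I was thinking of a bind mount point along these lines:)

pct set 101 -mp0 /mnt/myvol,mp=/mnt/shared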
Thanks - I've posted on the thread in the second one you linked https://forum.proxmox.com/threads/opt-in-linux-6-5-kernel-with-zfs-2-2-for-proxmox-ve-8-available-on-test-no-subscription.135635/page-11#post-611280
Seems like it's not resolved, but not a massive issue for me as I don't really...
When you say "stuck", can you still SSH into the box, and access the web gui? I initially thought it was hanging, but realised it was just the console.
I've browsed through the suggestions in this thread, but it looks like there's no solution yet?
After upgrading from 8.0 to 8.1, I only got a couple of log lines in the console after rebooting. It stops at the message about initialising ramdisk.
I therefore assumed it had hung, rebooted fine in a 6.2 kernel, and checked previous boot logs, only to find no errors.
I then rebooted with the...
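(For anyone comparing notes: I checked the previous boot's log with something like the following - -b -1 selects the prior boot, and the priority filter is just what I happened to use:)

journalctl -b -1 -p err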
Thanks for the response.
1. The rombar differences were kind of random - neither actually needs it (08:00 is a UEFI GPU, 0a:00.3 is a USB controller). In case it somehow did make a difference, I just tried, and I get the same results.
2. Yes I specifically tried turning off the tablet pointer on...
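(For reference, I believe the CLI way to toggle the tablet pointer is along these lines - the VM ID here is a placeholder:)

qm set 100 --tablet 0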
Host: Proxmox 8.0.4, Ryzen 5700x, 64GB
Both Guest VMs: Windows 10, all latest updates. 32 GB assigned. Both with 'host' CPU type. Both same virtual SSD settings
When I run one of the VMs, and let it settle down with Task Manager open, I see it idling as expected. Looking at 'top' on the host I...
Came to say the same thing - it's a bit confusing when the 7.x installer worked fine, and then, with no hardware changes, the installer appears to just hang at the "loading drivers" stage.
Probably - although when it's enabled in the BIOS, it unlocks some sub-options, including "Other PCI Device ROM Priority", which is set to UEFI only. That sounds like an attempt, at least, to ensure the UEFI firmware is still available even when in CSM mode.