Managed to solve this thanks to a helpful answer on Reddit:
Had to open Storage settings, double-click the NFS storage hosting the VM disks and change Pre-allocation from Default to Off. Now migrating a pre-existing VM disk to RAW and back to QCOW2 resulted in 28 GB of space used on NFS instead of 128 GB and...
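For reference, the same change can be made from the CLI with pvesm; "nas-nfs" below is only a placeholder for the storage ID, and the setting also shows up as a line in /etc/pve/storage.cfg:

# turn off preallocation for the file-based NFS storage, assuming it is named "nas-nfs"
pvesm set nas-nfs --preallocation off
# verify: the storage entry in /etc/pve/storage.cfg should now contain "preallocation off"
grep -A6 "nfs: nas-nfs" /etc/pve/storage.cfg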
I've recently finished migrating my VMs from an old Proxmox 6 host to a new one running fully updated Proxmox 7. I got myself a NAS serving NFS that I used to create the backups, with the idea of later also using it for some VM disks that do not require the high responsiveness of local NVMe drives.
I...
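For anyone setting up something similar, the NFS share can be registered as Proxmox storage from the CLI roughly like this (server address, export path and storage ID are placeholders):

# register the NAS export as a storage usable for both backups and VM disk images
pvesm add nfs nas-nfs --server 192.168.1.50 --export /mnt/tank/proxmox --content backup,images
# confirm it is online and shows free space
pvesm status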
Figured it out and finally got it to work the way I wanted.
Destroyed everything iSCSI-related first, including the virtual disks on the VM running the targets. Created new NVMe-backed and SATA-backed virtual disks, made sure discard=on and ssd=1 were set for both in the disk settings on Proxmox. Created...
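Roughly what that looks like on the CLI, with the VMID, storage IDs and sizes as placeholders (the guest still needs to fstrim or mount with discard for freed space to be reclaimed):

# attach a 100G NVMe-backed and a 500G SATA-backed disk to VM 101 with discard and SSD emulation enabled
qm set 101 --scsi1 nvme-zfs:100,discard=on,ssd=1
qm set 101 --scsi2 sata-zfs:500,discard=on,ssd=1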
I have a Proxmox 6 host that has a ZFS mirror consisting of 2 NVMe drives, and 4 spinners in a "raid10-like" ZFS configuration (a stripe of 2 mirrors).
It has 1 Linux server VM acting as an iSCSI target. The VM has an OS drive, a data drive that's on the NVMe pool and a data drive that's on the...
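For context, the equivalent pool layout built by hand would look something like this (device paths are placeholders; the real pools were created by the installer/UI):

# 2-way NVMe mirror
zpool create nvme-pool mirror /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2
# "raid10-like": a stripe of two 2-way mirrors across the 4 spinners
zpool create hdd-pool mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4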
Am I understanding it correctly that proxmox-boot-tool is essentially the Proxmox alternative to Ubuntu's use of separate "bpool" and "rpool" created by the official installer, with Ubuntu's bpool being locked to GRUB-compatible features and blocked from having its version upgraded?
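If I understand it right, instead of a GRUB-limited boot pool it keeps the kernels and initrds on the small vfat ESPs and syncs them, which can be inspected with:

# show which ESPs are registered and whether they boot via GRUB or systemd-boot
proxmox-boot-tool status
# re-copy the current kernels/initrds to all registered ESPs
proxmox-boot-tool refresh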
Huh, this is weird. I was certain the host was originally installed on a 2x2 TB mirror, but a deep dive confirmed that it's the 3 TB drives holding the GRUB boot partitions sda1 and sdb1.
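To double-check which disks actually carry the boot partitions and which partitions back the pool, something like this is enough (output omitted here):

# list partitions with sizes and filesystems; the BIOS-boot/ESP partitions show up next to sda1/sdb1
lsblk -o NAME,SIZE,FSTYPE
# show the full partition paths that make up rpool
zpool status -P rpool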
Looking at the Troubleshooting section of https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0 before doing the upgrade, and since I have an older BIOS-based install on ZFS, I understand I need to follow https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool first. However:
NAME...
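For the record, once a suitable ~512M vfat partition exists on each boot disk, the switch itself boils down to the steps from that wiki page; sda2/sdb2 below are placeholders and must match the actual layout from lsblk:

# format each ESP and register it with proxmox-boot-tool (repeat per boot disk)
proxmox-boot-tool format /dev/sda2
proxmox-boot-tool init /dev/sda2
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2
# confirm both ESPs are now managed
proxmox-boot-tool status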
So I just took 2 NVMe SSDs, made them into a ZFS mirror via the Proxmox UI and added it to the available storage. I then migrated one of my VM disks to the new pool and am now baffled by the result: https://pastebin.com/JuKbEzdP
Why is "used" for the exact same disk 30.3G on the original pool and...
Hello
I am running Proxmox 5.4 on a Dell T110 II with 32 GB of RAM and 4 SATA drives in a ZFS raid10-like configuration. I am experiencing host crashes/reboots when a VM is under heavy write load, such as running the CrystalDiskMark benchmark inside a Windows Server 2019 VM. At first, I thought this...
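One common mitigation in this situation (not necessarily the root cause here) is capping the ZFS ARC so heavy writes don't push the host into memory exhaustion; the 8 GiB value below is just an example:

# limit the ARC to 8 GiB
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u
# takes effect after a reboot (or write the value to /sys/module/zfs/parameters/zfs_arc_max for the running system)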