This is pretty troubling. I have tried to install on an R720, and now an R730 also fails with the 7.1-1 ISO image. I am booting the ISO via the iDRAC IPMI; I click on Install Proxmox, and on the next screen it fails:
waiting for /dev to be fully populated...| ACPI Error: No handler f0331/evregion-130)...
Just testing a Win10 VM with hotplug/NUMA enabled and the guest agent running. I added 4096 MB to the existing 4096 MB of RAM and it fails. Trying to add a CPU core works differently: it just adds the core and waits for a reboot to apply the new CPU.
Parameter verification failed. (400)
memory: hotplug problem - VM 110 qmp...
Strangely, this time booting with the machine version set to 5.2 it worked without error.
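For anyone hitting the same QMP error, memory hotplug has prerequisites that must be set before raising the memory value. A rough CLI sketch for the VM above (ID 110 taken from the error message; the hotplug list is an assumption, adjust to taste):

```shell
# Memory hotplug needs NUMA enabled and "memory" in the hotplug list first
qm set 110 --hotplug disk,network,usb,memory
qm set 110 --numa 1
# Only then raise the memory (4096 existing + 4096 added)
qm set 110 --memory 8192
```

If either prerequisite is missing, the memory change is rejected with a "Parameter verification failed" / QMP error like the one quoted above.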
-id 100 \
-name PLserver \
-chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' \
-mon 'chardev=qmp,mode=control' \
This is a brand-new installation; I did not change the machine type, and I created 3 VMs that all had the same problem.
proxmox-ve: 6.3-1 (running kernel: 5.4.103-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
I just loaded the latest 6.3.1 ISO and ran dist-upgrade to the very latest on a Dell R730. When I try to start a new Win2019 VM with the bare basic defaults selected, it fails to start with this error:
() kvm: no-hpet: unsupported machine type
It seems the default machine type 5.2.0 is the issue...
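If the 5.2.0 default machine version is indeed the trigger, one workaround to test is pinning the VM to an older machine version (VM ID 100 here is hypothetical):

```shell
# Pin the QEMU machine version instead of relying on the 5.2 default
qm set 100 --machine pc-i440fx-5.1
# or, for a Q35-based guest:
qm set 100 --machine pc-q35-5.1
```

If the VM then starts cleanly, that points at a mismatch between the installed QEMU build and the 5.2 machine definition rather than at the guest itself.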
I have 3 pools:
VMdata: Main pool with zvols for VMs, daily snapshots running here
Backup: Backup pool where snapshots get replicated (Sanoid)
Pool3: Other empty pool
I want to clone a VM from snapshots on the Backup pool to Pool3, without affecting anything on the main pool. I want to spin up...
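A sketch of one way to do this, with hypothetical dataset/snapshot/VM names. A plain zfs clone is instant but stays dependent on the snapshot on the Backup pool; a send/receive makes Pool3's copy fully independent, which matches the goal of not affecting anything else:

```shell
# Copy the replicated snapshot from Backup to Pool3 as an independent zvol
zfs send Backup/vm-110-disk-0@autosnap_2021-04-01 | zfs recv Pool3/vm-999-disk-0

# Then attach the copied zvol to a new VM (storage name assumed to be Pool3)
qm set 999 --scsi0 Pool3:vm-999-disk-0
```

The main pool and the Backup pool are only ever read from, never written to, in this flow.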
Are they 2.5" laptop drives?? Even if they were *higher end* desktop drives, there's a reason why most servers have SAS ports. Just skip right to a decent SSD; look for something that was common in Dells, and avoid "read optimized SSDs" if possible. There are many decent 2nd-hand Dell Toshiba SAS SSD...
Actually, things have come a long way thanks to the PVE dev team; all of that can now be done in the GUI in just one step.
That is it, you are done....
If there is no available disk in the drop-down you may need to wipe it first. Verify the serial number before plugging in the disk, verify its letter, and...
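For reference, the same one-step GUI flow roughly corresponds to the following on the CLI (device and pool names here are hypothetical; double-check the serial with the device letter before wiping anything):

```shell
# Wipe the disk's partition table (the CLI equivalent of the GUI wipe step)
sgdisk --zap-all /dev/sdX

# Create a single-disk pool and register it as PVE storage in one go
zpool create tank /dev/sdX
pvesm add zfspool tank --pool tank
```

After that the pool shows up as a storage target for VM disks and CT volumes.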
That works - YEA!!! But it is contradictory: I am not allowed to use ZFS *directory* storage for a CT, yet the mount *directory* matters??
Maybe consider this a low-priority adjustment for future code... something like zfs get mountpoint $poolname in the PVE code prior to sending a CT to...
I tried that:
root@pve1:~# zfs set mountpoint=none VMdata1
root@pve1:~# zfs mount VMdata1
cannot mount 'VMdata1': no mountpoint set
So the pool no longer has a mountpoint and is not listed as a mounted filesystem, but I still get the same error when trying to send a CT to the pool, with a double // in front of...
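Since mountpoint=none did not help, another thing to try is the opposite direction: give the pool an explicit root path so the path PVE builds cannot start with //. A sketch using the pool name from above:

```shell
# Set an explicit mountpoint instead of none, then mount and verify it
zfs set mountpoint=/VMdata1 VMdata1
zfs mount VMdata1
zfs get -H -o value mountpoint VMdata1
```

If the last command prints a single clean absolute path, any remaining double-slash in the CT target would point at how PVE concatenates the path rather than at the pool's property.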