OK, I've found the trigger for this problem. It happens when I pass through the SATA controllers on my WTR Pro in Proxmox. FWIW, I am running kernel 6.11.11-1-pve, but the same occurs on 6.8 as well.
When I stop all VMs and remove the config that...
After speaking with Aoostar directly, it seems the SATA controllers are integrated into the CPU rather than sitting on a separate controller chip. This means that when the controllers are passed through, part of the SoC becomes locked away from the host.
My...
OK, maybe I am not following fully. I have detached the disk from the VM so that it now shows up in the hardware list as Unused Disk 0. When I go to remove the VM and enter the ID to confirm, the Purge and Destroy options are unchecked...
I face this issue when booting in BIOS/Legacy mode, but it works fine when booting in UEFI mode. Here is my XML.
Before trying this, download a new ISO; it's possible your current image is old, and the new one may include a patch.
Note: I’m using...
Is it possible at all to delete a VM but keep a disk that was attached to it? Let's say I have a data disk assigned to a VM but want to blow that VM away and reconfigure it, keeping the data disk around for the reconfigured VM. I do not see...
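If it helps, one way this is often done (a sketch, not the only route: the IDs 100, scsi1, and 101 below are hypothetical, and the `qm disk unlink` / `qm disk move --target-vmid` subcommands assume a reasonably recent PVE release) is to detach the disk and reassign it to the new VM before destroying the old one. The snippet only prints the commands, so nothing runs by accident:

```shell
# Print the qm commands that would detach DISK from VMID and reassign it
# to TARGET. Hypothetical IDs; review the output, then run it on the host.
reassign_plan() {
    vmid=$1; disk=$2; target=$3
    printf 'qm disk unlink %s --idlist %s\n' "$vmid" "$disk"
    printf 'qm disk move %s unused0 --target-vmid %s\n' "$vmid" "$target"
}

reassign_plan 100 scsi1 101   # move VM 100's scsi1 data disk to VM 101
```

Once the disk is owned by the new VM, the old VM can be destroyed; just make sure any "destroy unreferenced disks" option stays unchecked if unused disks remain on it.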
The devil surely is in the details ;)
The little pin that made the castle crumble was the following:
A mount flag on the root partition which apparently triggers a regression in systemd-remount-fs.service. It fails if the following mount flag is...
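For anyone hitting the same symptom: a quick, generic way to see which options are actually in effect on the root filesystem (so they can be compared against the /etc/fstab entry) is to read /proc/mounts directly:

```shell
# Print the mount options currently active for "/" (field 4 of
# /proc/mounts); compare these against the options listed in /etc/fstab.
awk '$2 == "/" { print $4 }' /proc/mounts
```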
I've had the unit since early Sept; it's fully loaded with 11 drives and using the 10GbE networking. Running a Windows 11 VM and a TrueNAS server.
The only times it's been down are when I'm messing around with the configuration, but even that...
I can say that the WTR MAX is almost the perfect Proxmox node: lots of power in every aspect a Proxmox node needs. I wish I could afford a 3-unit cluster.
And Aoostar, although relatively new to international sales, are fairly good...
Assumption: each NIC is assigned to a bridge. That's the default and is OK.
There are several solutions for what you want, for example limiting "Listen" for the relevant processes or establishing iptables rules.
But approach "zero" is by far the...
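As an illustration of the iptables route (a sketch with assumptions: the Proxmox web GUI listens on TCP 8006, and vmbr0 is the management bridge; your bridge names may differ), the rules could look like this. The snippet prints them rather than applying them:

```shell
# Print iptables rules that would allow the Proxmox web GUI (TCP 8006)
# only via the management bridge vmbr0 and drop it everywhere else.
# Hypothetical interface name; review before applying on the host.
gui_rules() {
    cat <<'EOF'
iptables -A INPUT -i vmbr0 -p tcp --dport 8006 -j ACCEPT
iptables -A INPUT -p tcp --dport 8006 -j DROP
EOF
}

gui_rules
```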
I have an existing cluster with two nodes and standard install, i.e., a single entry for local-zfs in storage.cfg with identical definition (ZFS pool/dataset named rpool/data on each node). Replication works without issue.
Now however, I have a...
It isn't :). From the VM's point of view, it's a disk. The VM doesn't know it's an LV, and it doesn't care.
In the VM you'll shrink the filesystem, which during normal operation is mounted at /mnt/ServerData but during the live-image session will...
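For reference, the usual order of operations from the live session looks roughly like this (a sketch assuming ext4 on an LV named /dev/pve/ServerData shrunk to 100G; the names and the size are hypothetical, and the snippet only prints the commands because shrinking with the wrong sizes destroys data):

```shell
# Print the shrink sequence: fsck first, shrink the filesystem to the
# target, shrink the LV to match, then run resize2fs once more so the
# filesystem fills the LV exactly.
shrink_plan() {
    cat <<'EOF'
e2fsck -f /dev/pve/ServerData
resize2fs /dev/pve/ServerData 100G
lvreduce -L 100G /dev/pve/ServerData
resize2fs /dev/pve/ServerData
EOF
}

shrink_plan
```

Recent LVM can also handle the filesystem step itself via `lvreduce -r -L 100G /dev/pve/ServerData`, which is harder to get wrong.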
I have also tried these kernels, with the same poor result:
proxmox-kernel-6.14.8-2-pve-signed
proxmox-kernel-6.14.11-5-pve-signed
proxmox-kernel-6.17.2-1-pve-signed
BTW, the AMD Opteron(tm) Processor 3365 does support aes:
lscpu | grep aes
Flags...
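The same check without lscpu, reading the flag straight from /proc/cpuinfo (generic Linux; prints one line either way):

```shell
# Report whether the CPU advertises the aes flag (AES-NI on x86).
if grep -qw aes /proc/cpuinfo; then
    echo "aes: supported"
else
    echo "aes: not supported"
fi
```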
Apologies for reviving an old thread, but could you elaborate on the second half of #1? If you set up a btrfs RAID array during install (we'll say RAID 10 for this example), are you then stuck with both your Proxmox install and your VMs etc. on...
Hello, good evening. I read somewhere that aes is needed in the new Proxmox VE 9. I run some 4th-generation Intel LGA 1150 CPUs, like the Intel Core i7-4770, and they work fine.
So if yours is too old for the new Debian 13 Trixie, maybe you can check this with...
Hi all,
I tried upgrading my server from 8.4 to 9.1, but after the upgrade the server fails somehow and boots with root in ro mode. Starting the same server with the 8.4 kernel, everything works as expected.
Old...
Thanks.
sdb is from a different VG on a different disk (external RAID1). This is from earlier, on the client, while it was still running:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 32G...