Thanks @leesteken, I've tried machine type pc-q35-7.2 but without success: 3D acceleration is still working with the Windows driver for the RX 6600, but I can't install the AMD Adrenalin driver (the AMD hardware is not recognised).
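(For reference, I switched the machine type from the CLI with something like the following, where 100 stands in for my VMID:)
qm set 100 --machine pc-q35-7.2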
Hi,
Haven't had a response yet. Should I consider that it is common not to be able to install the AMD Adrenalin driver with GPU passthrough?
Kind regards,
YAGA
Hi there,
My setup: ASRock B550 ITX/AX with a Ryzen 9 5950X and an Asus RX 6600, running PVE 8.1.4 and a Windows 11 VM with GPU passthrough.
The Windows 11 Device Manager recognizes the AMD Radeon RX 6000 and the 32-core AMD 5950X (QEMU with the CPU host and numa=1 parameters).
Everything seems to work properly with 3D...
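For completeness, the relevant CPU/passthrough lines of the VM config look roughly like this (the PCI address and the hostpci0 options are illustrative and will differ on other systems):
bios: ovmf
cpu: host
numa: 1
hostpci0: 0000:03:00,pcie=1,x-vga=1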
Hi,
Sorry @sb-jw for not responding sooner and many thanks for your input.
Based on your suggestion, I have written a one-line command to convert a VMID into the IP of the host it currently runs on, but I don't like my code.
Here, 9999 is the VMID and the result is the host IP.
pvesh get /nodes/`pvesh get /cluster/resources...
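An equivalent, somewhat more readable variant would be along these lines (only a sketch: it assumes jq is installed and that node names resolve via /etc/hosts or DNS; 9999 is the placeholder VMID):
# find the node currently hosting VMID 9999
pvesh get /cluster/resources --type vm --output-format json | jq -r '.[] | select(.vmid==9999) | .node'
# resolve that node name to its IP address
getent hosts $(pvesh get /cluster/resources --type vm --output-format json | jq -r '.[] | select(.vmid==9999) | .node') | awk '{print $1}'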
Hello,
In a cluster with HA, a VM that was created on one node may since have been migrated to another node.
What is the simplest way to destroy a VM by its VMID from the CLI when you do not know which node HA has automatically migrated it to?
I haven't found a simple way to do it with a single command.
Regards,
YAGA
Hi there,
I have four 4 TB Samsung 990 Pro drives with heatsinks, one attached to each of four motherboards, and I have exactly the same issue.
Three of the Samsung 990 Pros are running properly, while one "disappears" some time after boot with more or less the same error messages.
nvme...
Hi Fiona, Hi Lukas,
@fiona Very good point, I run qm destroy <ID> --purge from the CLI very frequently.
The bug actually occurs after a qm destroy <ID> --purge.
I will update PVE by the end of the week but I am confident that updating libpve-common-perl to 8.0.9 will resolve the issue...
Hi Lukas,
The cluster has been up and running since PVE 7. My first install was with PVE 7, but I don't know exactly which minor version.
# cluster wide vzdump cron schedule
# Automatically generated file - do not edit
PATH="/usr/sbin:/usr/bin:/sbin:/bin"
Kind regards,
Good evening,
The bug with the /etc/pve/jobs.cfg file has occurred again: jobs.cfg no longer contains the backup job information, except for:
...
vzdump: backup-a72814b2-6a82
    enabled 1
    repeat-missed 1
    schedule Sat 00:00

vzdump: backup-09b8f777-8757
    enabled 1
    repeat-missed 1
    schedule...
Hi there,
The problem recently emerged on a 4-node cluster using different filesystems (PBS with multiple datastores, Ceph, and several NFS volumes) for backups.
I am using the PVE Community edition with the latest updates available to date, i.e., 8.0.4.
Suddenly, there were no backups, and...
Hi,
I'm using a 4-node cluster with Ceph (PVE 7.3.4 and Ceph 17.2.5) and 12 HDD OSDs (3 OSDs per node).
The Ceph network is a dedicated 10 GbE network for this 4-node cluster.
More or less one year ago, with previous versions, Ceph RBD and CephFS were working properly: fast and usable...
Hi,
I have to reinstall a node in a Ceph cluster because the NVMe boot partition has been damaged, but the partition with the VM and LXC disk images is OK.
I've added a new NVMe disk for a fresh Proxmox install, and I previously renamed the pve volume group (holding the VM and LXC disk images) with 'vgrename pve pveold'...
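For the record, the rough plan after the fresh install is something like this (a sketch only: 'pveold-data' is just a storage ID I picked, and I assume the thin pool inside the renamed volume group is still called 'data'):
# make the renamed volume group visible on the freshly installed node
vgscan
# register the old thin pool as an LVM-thin storage so the disk images can be reached again
pvesm add lvmthin pveold-data --vgname pveold --thinpool data --content images,rootdir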