Final update:
I think I have nailed down the reason, and it was indeed mentioned in the wiki, which I had missed:
https://pve.proxmox.com/wiki/Pci_passthrough#The_.27romfile.27_Option
For some reason, even with the same system that worked in the past (I have done pass through on all slots with GPUs), I...
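For anyone running into the same thing, this is roughly how the romfile option from that wiki section is used (the VM ID 100, the file name vbios.bin and the PCI address are placeholders, adjust to your setup):

cp vbios.bin /usr/share/kvm/vbios.bin        # QEMU looks for ROM files in this directory
# then reference it on the hostpci line in /etc/pve/qemu-server/100.conf:
# hostpci0: 0000:0a:00,pcie=1,romfile=vbios.bin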
I have a quick update:
Today I was trying to upgrade another system also running Proxmox 6.4 with kernel 5.4.
I was just trying to pass through a card (Vega FE) that I had passed through before (same system, same Proxmox 6.x, maybe a bit older, around 1 year ago), and I realized I couldn't and saw the same...
Hi,
Recently I purchased a new GPU for numerical work for my workstation.
It was installed roughly 1.5 years ago and has been running PVE 6.4 on an AMD Threadripper 3970X, with a Corsair AX1600i PSU.
Here is the kernel version:
Linux polaris 5.4.189-1-pve #1 SMP PVE 5.4.189-1 (Wed...
Update on 2022/01/08: Looks like the system SSD is indeed failing.
I have formatted the SSD and tried to put the latest Proxmox onto it.
As I was slowly restoring from my backups, the system froze again and threw these errors before it died out.
Likely it's time to replace it.
Jan 08 21:36:37 csqt...
I tried to use smartctl to run a test, but it seems it does not support testing NVMe drives.
Moreover, I was comparing the output of my Proxmox systems with nvme-cli:
The problematic system shows this (see 512.11 / 512.11 GB):
root@pve:~# nvme list
Node SN Model...
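In case it helps others, these are the kind of commands I mean (device paths are just examples):

nvme list                      # per-drive capacity and usage; this is where the 512.11 / 512.11 GB shows up
nvme smart-log /dev/nvme0      # health data: percentage used, media errors, temperature
smartctl -a /dev/nvme0         # smartctl can still read NVMe health data, even if self-tests are limited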
I am running Proxmox 6.4-13. Lately I have been experiencing system freezes; in particular, before it freezes completely, it shows input/output errors when running df/nano/lsblk. I ruled out all add-on disks, leaving the system SSD to blame.
Besides it being a hardware failure, I remember seeing...
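For what it's worth, roughly what I looked at in the kernel log to see which device was throwing the errors (assuming a persistent journal):

dmesg -T | grep -iE 'i/o error|blk_update_request'   # which device is reporting errors
journalctl -k -b -1 | grep -i error                   # kernel messages from the previous boot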
Thanks avw and t.lamprecht, this is exactly the reason!
When the GPU is inserted back, the interface name changes to enpXs0 with a different X.
Using the said method I was able to lock the NIC's name to eth0, and now, with the GPU removed, it is a happy headless server!
PS: Is this a...
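For anyone finding this later, one way to pin the name is a systemd .link file (a sketch only; the MAC address is a placeholder, and I am not sure this is exactly what was suggested, but it achieves the same result):

# /etc/systemd/network/10-eth0.link
# [Match]
# MACAddress=aa:bb:cc:dd:ee:ff
# [Link]
# Name=eth0
update-initramfs -u -k all    # so the renaming also applies early in boot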
This may not be a Proxmox issue as it could just be the motherboard, but here is my observation which I wonder if you have any clues.
I am trying to boot Proxmox 7.0-11 headlessly after the initial installation.
With the GPU still attached, everything works normally and I could access the...
I recently got a new build for PVE, with these key components:
AMD 2700X
Gigabyte X570S UD
Vega 56
PVE 7.0-11
After the installation, before setting up any VM, I installed lm-sensors and checked the temperatures of various items.
The GPU shows 44C and is actually only warm to the touch. This temp...
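For reference, the check was nothing fancier than (Debian/PVE package name):

apt install lm-sensors
sensors-detect     # answer the probing prompts
sensors            # the amdgpu-pci-* section reports the GPU temperature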
I updated Proxmox via the GUI today, and after a reboot I realized my WireGuard LXC client can no longer connect to the WireGuard server as usual.
The journal shows the below:
Jul 11 16:35:00 CSWG2 wg-quick[334]: [#] ip link add wg0 type wireguard
Jul 11 16:35:00 CSWG2 wg-quick[341]: Error: Unknown...
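In case someone lands here with the same message: since LXC containers use the host kernel, my first suspicion would be that the wireguard module is not available for the kernel booted after the update. A quick check on the host (just a sketch, not necessarily the actual fix):

uname -r                                        # kernel actually running after the update
modprobe wireguard && lsmod | grep wireguard    # does the module load on this kernel?
dkms status                                     # if wireguard came from dkms, is it built for the new kernel?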
Sorry for maybe hijacking the question, but after the passthrough, how should the drivers be installed?
For example with AMD, it seems we now need to install ROCm. Could this be done on the host, and maybe again inside the LXC?
This seems to be the minimum requirement for utilizing say OpenCL.
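To make the question concrete, what I had in mind for the LXC side looks roughly like this (a sketch only; the device majors 226/238 are examples and should be checked with ls -l, and the VMID 101 is a placeholder):

ls -l /dev/kfd /dev/dri/renderD128              # on the host: note the device major numbers
# in /etc/pve/lxc/101.conf:
# lxc.cgroup2.devices.allow: c 226:* rwm
# lxc.cgroup2.devices.allow: c 238:* rwm
# lxc.mount.entry: /dev/kfd dev/kfd none bind,optional,create=file
# lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
# inside the container, clinfo from the OpenCL runtime should then list the GPU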
Thanks for the follow-up; I believe it was because the drives were being filled up due to write amplification.
I further cut down the assigned size and it never happened again later; though now I have just passed the drives directly to the VM.
I am setting up one VM to do plotting for Chia, it has the following disks:
- 3 pass-through 1TB SSDs
- 5 SATA 400GB SSDs as RAID0 in Proxmox, used as a directory and assigned as a virtual hard disk to the VM (the volume was set to 1800 GiB, after checking df -BGiB in the Proxmox shell)
- 1 PCI-E RAID card...
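For anyone wanting to reproduce this kind of layout, roughly how the RAID0 directory part can be wired up (a sketch with ZFS striping as one possible way; pool name, device names and the VMID 100 are placeholders):

zpool create plotscratch /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf   # 5-disk stripe (RAID0)
pvesm add dir plotdir --path /plotscratch                               # register it as directory storage
qm set 100 --scsi1 plotdir:1800                                         # allocate a 1800 GiB virtual disk on it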
With PVE 6.4-6 and ubuntu 20.04.2 as guest VM, I am trying to use memory ballooning.
The system has 64 GB RAM and at the moment only 3 guest OSes are running; 2 of them use less than 2 GB RAM in total, so most RAM is available for the ubuntu VM.
I set the VM to have 8 GB minimum, and 48 GB as memory, with the...
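For reference, the equivalent CLI settings would be (VMID 100 as a placeholder, values in MiB):

qm set 100 --memory 49152 --balloon 8192   # 48 GiB maximum, 8 GiB minimum
lsmod | grep virtio_balloon                # inside the guest: the balloon driver must be loaded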
Thanks for the suggestion; indeed GNOME-DiskUtility, which I used, was the problem.
Using KDiskMark I got the same result as AS-SSD on Windows, problem resolved!
I have a guest ubuntu 20.04.2 to which I tried to pass two Kingston A2000 1TB drives; this is done by passing them as PCI Devices with the PCI-E flag checked.
The advertised speed is up to 2000 MB/s write.
With the drives empty, I found that while the read speed is 2000 MB/s as expected, the write speed is limited...
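For anyone wanting to verify outside a GUI benchmark tool, a direct-I/O sequential write test can be run with fio inside the guest (the mount point is an example):

fio --name=seqwrite --filename=/mnt/nvme/fio.tmp --rw=write --bs=1M --size=8G --ioengine=libaio --iodepth=32 --direct=1   # 1M sequential writes, bypassing the page cache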