[SOLVED] GMKtec NucBox K11 / AMD 8945HS 780M GPU Passthrough — Complete Working Solution
After extensive debugging I got full AMD Radeon 780M (Phoenix3, 1002:1900) passthrough working on Proxmox 9.1.9 — including surviving Windows restarts...
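For anyone starting from scratch, the early vfio-pci binding that setups like this usually begin with looks roughly like the sketch below; the device ID is the one quoted above, while the file name and softdep line are common convention rather than anything specific to this build:

# /etc/modprobe.d/vfio.conf: claim the iGPU for vfio-pci before amdgpu can grab it
options vfio-pci ids=1002:1900
softdep amdgpu pre: vfio-pci

Follow that with update-initramfs -u and a reboot before assigning the device to the VM.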
Those are on the cheaper and slower side of consumer SSDs. They will not perform well under sustained load and the mostly synchronous writes that Ceph does.
The recommendation for enterprise SSDs with power loss protection (PLP) is there for good...
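A quick way to see the gap that PLP makes is a single-job 4k sync-write test with fio; this is only a sketch and it writes straight to the target device, so point it at a disk you can wipe:

fio --name=sync-write --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based

Drives with PLP can acknowledge those sync writes from their protected cache and usually post far higher IOPS in this test than consumer SSDs.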
I don't run nested virtualization with Windows guests, but I do with Linux guests, especially with Proxmox, i.e., physical Proxmox -> virtual Proxmox -> Linux guest.
I do use the 'host' CPU type, as your screenshot shows. As for the number of cores, I see...
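If it helps, the host-side check and the guest CPU setting involved are roughly these (the VM ID is a placeholder, and on Intel hosts the module is kvm_intel instead):

cat /sys/module/kvm_amd/parameters/nested   # "1" or "Y" means nested virtualization is enabled on the host
qm set <vmid> --cpu host                    # CPU type "host" passes the virtualization extensions through to the guest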
AFAIK you have a few options here. First, attach that media drive & set its PVE storage settings EXACTLY as they were previously.
Then do ONE of the following:
1. Recreate an LXC with the same <ctid> as before (101); a rough CLI sketch follows below. I believe you will have to...
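A rough sketch of what option 1 can look like on the CLI; the template, rootfs storage and mount path here are placeholders, not values from this thread:

pct create 101 local:vztmpl/<template>.tar.zst --hostname media-ct --rootfs local-zfs:8
pct set 101 -mp0 /mnt/media,mp=/media   # re-add the media drive as a bind mount with the same settings as before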
Theoretically you could block it by adding an entry to senderaccess.
But senderaccess is regularly overwritten by Proxmox, and via PMG Mailproxy -> Welcomelist you can only add whitelisted domains.
You have to use the Mail Filter -> Who Object -> Blocklist...
Worst case scenario, I can also just set it up as two separate clusters, with the problem node separate from the other two. Not ideal, but at least an option.
@leesteken I've been racking my brain all day over why my Vega 10 wouldn't run correctly on Proxmox when everyone else seemed to have such a turnkey experience with PCIe passthrough now (compared to the last time I did it... nearly 10 years ago). Your...
local-zfs uses the zfspool backend, so VM disks are stored as ZFS volumes/datasets in raw format. Not being able to select qcow2 there is expected.
Snapshots on local-zfs are not qcow2 internal snapshots; they are provided by the ZFS storage...
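A small illustration of what that means in practice; the pool layout and VM ID below are assumptions for the example:

zfs list -t volume | grep vm-100                 # each VM disk on local-zfs is a zvol, e.g. rpool/data/vm-100-disk-0
zfs list -t snapshot rpool/data/vm-100-disk-0    # PVE snapshots of that disk show up here as ZFS snapshots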
That's an understatement.
The Crucial BX series is one of the worst-performing SSDs I have ever seen, even in client machines.
You may use it as cold storage, but anything warm or hot will perform terribly on it.
More so if it's used with ZFS/Ceph.
Even...
Hi,
We have seen intermittent reloads of cluster nodes on a 3-node cluster with the default token_coefficient of 125. Another cluster without this setting, with identical hardware and setup, has been running without issues for several months...
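For reference, token_coefficient sits in the totem section of /etc/pve/corosync.conf; a sketch with the value mentioned above (edit a copy and bump config_version, as the PVE docs describe, before putting it in place):

totem {
  ...
  token_coefficient: 125
}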
I am using https://git.zabbix.com/projects/ZBX/repos/zabbix/browse/templates/app/proxmox?at=release/7.4
To make it work you need to follow the documentation on that page ;-)
I've set only {$PVE.TOKEN.ID}, {$PVE.TOKEN.SECRET} and {$PVE.URL.HOST}...
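In case it saves someone a search, a sketch of creating a read-only API token on the PVE side to fill those macros; the user and token names are my own picks, not anything the template requires:

pveum user add zabbix@pve
pveum acl modify / --roles PVEAuditor --users zabbix@pve
pveum user token add zabbix@pve monitoring --privsep 0
# use "zabbix@pve!monitoring" as {$PVE.TOKEN.ID} and the printed secret as {$PVE.TOKEN.SECRET}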
This is not a PVE problem, or even a Linux/SLES one.
This is Linux troubleshooting 101.
You have a hanging task on the 2nd partition (presumably root), which gives you two vectors of troubleshooting:
1. Host bus: change the host bus for your root...
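As a generic aside, checking and changing the host bus of a PVE guest disk looks roughly like this (the VM ID is a placeholder; back up the VM config first):

qm config <vmid> | grep -E '^(scsi|sata|virtio|ide)'   # see which bus the root disk is attached to
# GUI: Hardware -> Hard Disk -> Detach, then edit the resulting "Unused Disk" and re-attach it on another bus;
# remember to adjust the boot order afterwards.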
If it were me, I would use it to confirm that the hardware is broadly compatible (for example SCSI controllers, graphics adapters, and similar devices). It could also help identify which kernel modules are being loaded and potentially highlight...
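The sort of checks meant here, run from the live environment, would be along these lines:

lspci -k                                 # PCI devices and the kernel driver/module each one is using
lsmod                                    # kernel modules currently loaded
dmesg | grep -iE 'firmware|fail|error'   # missing firmware or drivers that failed to initialise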
Thanks @Neobin that was helpful. I was able to create the below script with the help of AI.
You can change "*/15" to adjust the schedule to your liking, and add --rate to prevent storage/network bottlenecks, e.g. --rate 20 (20 MB/s).
#!/bin/bash
#...
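Since the script body is trimmed above: a "*/15" schedule and a --rate limit in MB/s are exactly the knobs of pvesr (PVE storage replication), so if that is what the script wraps, the core call would look roughly like this (job ID, VMID and target node are examples of mine):

pvesr create-local-job 100-0 pve2 --schedule "*/15" --rate 20   # replicate VM 100 to node "pve2" every 15 minutes, capped at 20 MB/s
pvesr list                                                      # confirm the job exists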
While these threads are storage-slanted, the feedback in the threads is from companies running successful businesses, almost exclusively on Proxmox Virtual Environment...
They have some success stories on their page:
https://www.proxmox.com/en/about/about-us/stories
There was also a survey on reddit:
https://www.reddit.com/r/Proxmox/s/aGMBgJqm0L