My Minisforum MS-01 finally arrived after 4 months, so I've spent nearly 24 hours trying to get iGPU passthrough to work, with no luck so far. I'll detail my settings and results so far and see if anyone can comment. Hopefully this thread will end up with a working example that can apply...
I have a small 3 node cluster with vmbr0 carrying host node IPs from 192.168.1.0/24. From that LAN network I can reach all internal VM/CTs on each of the 3 host nodes. Fine. Now I have 2 OpenWrt CTs on two different nodes, with a pair of Debian CTs also on each of those two nodes. I added a "blank"...
PVE 8.1.3 host on a Minisforum UM780 XTX (AMD Ryzen 7 7840HS w/ Radeon 780M Graphics) with a fairly standard iGPU passthrough setup. The guest is Manjaro/KDE w/ BIOS OVMF, Display none, Machine q35, hostpci 0000:c5:00.0,pcie=1 (+ rombar and all functions). If I enable Display SPICE and disable...
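For reference, the setup described above would correspond to a guest config roughly like this (a sketch only; the VMID is hypothetical, and the exact hostpci flags for "rombar and all functions" are my interpretation of the GUI options, not a confirmed-working config):

```
# /etc/pve/qemu-server/100.conf  (VMID 100 is hypothetical)
bios: ovmf
machine: q35
vga: none
# "All Functions" in the GUI drops the .0 function suffix;
# rombar=1 is the default and matches "+ rombar" above.
hostpci0: 0000:c5:00,pcie=1,rombar=1
```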
I have a subnet of IPs that I allocate to a dozen different domains, and right now I use a combination of master.cf rules and sender_dependent_default_transport_maps = lmdb:/etc/postfix/sender_transport to send out mail from different customer virtual domains via different IPs. If I put PMG in...
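To make the current (pre-PMG) setup concrete, here is a sketch of the Postfix pattern described above; the domain names and IPs are placeholder examples (documentation ranges), not my real config:

```
# main.cf
sender_dependent_default_transport_maps = lmdb:/etc/postfix/sender_transport

# /etc/postfix/sender_transport (run postmap after editing)
@customer-a.example   smtp-ip1:
@customer-b.example   smtp-ip2:

# master.cf -- one smtp transport per outbound IP
smtp-ip1  unix  -  -  n  -  -  smtp -o smtp_bind_address=203.0.113.10
smtp-ip2  unix  -  -  n  -  -  smtp -o smtp_bind_address=203.0.113.11
```

The question is where a PMG relay hop would fit into this without collapsing all outbound mail onto a single source IP.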
Proxmox 8.0.4. I have a 4 node cluster and all of them upgraded successfully except this one, with the error below. /var/tmp/espmounts/67D6-E50C/ does exist, but EFI does not. Any suggestions on how to find out why there is no space left, and how to fix it?
run-parts: executing...
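A sketch of how one might inspect the ESP in question (the UUID is taken from the mount path in the error above; a "no space left" error on the small FAT32 ESP is often old kernel images accumulating, though I can't confirm that's the cause here):

```shell
# Mount the ESP manually and see what is filling it
mkdir -p /mnt/esp
mount /dev/disk/by-uuid/67D6-E50C /mnt/esp
df -h /mnt/esp        # free space on the FAT32 ESP
du -sh /mnt/esp/*     # old kernels/initrds are the usual culprit
umount /mnt/esp
```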
I imported a ZFS pool on a pair of 8TB disks and added it as storage attached to one host node. This pool seems to have a single dataset...
~ zpool list pbs2
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
pbs2 7.27T 4.64T 2.62T - - 1%...
I probably misunderstand how to set this up, but I have two PVE VMs: one a long-time Ubuntu VM with postfix/dovecot acting as my local public-facing mailserver on my LAN. I've tried 3 times to set up pmg-api/7.3-3/a3d66da0 (running kernel: 5.15.107-2-pve), following whatever guides I can find, to...
I've added ipv6.disable=1 to /etc/kernel/cmdline and rebooted, it shows up in /proc/cmdline and sure enough ip a does not show any ipv6 interfaces. However, now when I try to reboot a VM I am seeing an endless stream of these lines in the host logs and the Proxmox Mail Gateway VM won't reboot...
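For anyone reproducing this, the change described above amounts to the following (a sketch assuming the standard proxmox-boot-tool layout, i.e. systemd-boot on ZFS; if the host boots via GRUB, the cmdline lives in /etc/default/grub instead):

```shell
cat /etc/kernel/cmdline                  # should now end with ipv6.disable=1
proxmox-boot-tool refresh                # propagate the cmdline to the ESP(s)
reboot
grep -o 'ipv6.disable=1' /proc/cmdline   # verify it took effect after reboot
```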
I've got a private 4 node homelab cluster, and I'd like to set up a public cluster at a hosting provider. The problem is that they can only BGP route a /28 network to a single node, so I am struggling to understand how best to take advantage of those public IPs within a 3 node (or more) cluster...
I've got a LXC container running Kodi on Ubuntu via lightdm, but without a window manager. I've also tried a container with accelerated GPU for JellyFin and that also seems to work (vaapi to Intel iGPU) but that may only be using GL or part of Xorg. The Kodi CT is indeed using most or all of...
I just did a cli update and now pveproxy is not running on the ipv4 interface. Any suggestions?
~ netstat -tanup | grep 8006
tcp6 0 0 :::8006 :::* LISTEN 4503/pveproxy
~ pveversion
pve-manager/7.0-13/7aa7e488 (running kernel...
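A note on the output above: a tcp6 socket bound to :::8006 normally accepts IPv4 connections too (dual-stack), so the missing tcp4 line is not necessarily the problem by itself. A hedged sketch of what I'd check (file names are the standard Proxmox locations, but whether they exist on a given host is not guaranteed):

```shell
sysctl net.ipv6.bindv6only              # 0 = dual-stack (default), 1 = v6-only
cat /etc/default/pveproxy 2>/dev/null   # any LISTEN_IP override?
curl -k https://127.0.0.1:8006/         # quick IPv4 reachability test
```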
I have a small homelab and only need a few VMs, so I have half a dozen TB of storage going to "waste" and using up electricity. Is there any possibility of using PBS as a "normal" backup server as well as for PVE-based VMs and CTs?