Win10 GPU Passthrough

Harald00

Hello,

I'm currently failing at GPU passthrough of a GT720 to a Win10 VM.

I followed this guide: https://pve.proxmox.com/wiki/Pci_passthrough

I installed the VM with OVMF BIOS and the virtio drivers (added Balloon, NetKVM and viostor during installation).

Windows ran, and as is often recommended I enabled RDP and assigned a static IP. I was able to access it via RDP.

IOMMU is activated and enabled, and interrupt remapping works as well (the ecap value in the output ends in an "a"). The verification went fine too.

lspci -v also shows vfio-pci as the driver in use for both the GPU and its audio device.
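
Specifically, the checks were roughly the ones from the wiki:

Code:
cat /proc/cmdline               # should contain intel_iommu=on
dmesg | grep -e DMAR -e IOMMU   # should report "DMAR: IOMMU enabled"
dmesg | grep ecap               # remapping check via the ecap value
lspci -v -s 01:00               # GPU and audio both show "Kernel driver in use: vfio-pci"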

Now to the problem: at first I only had SeaBIOS, and the whole thing didn't really work there; however, I had no problems with the host itself.

When I then passed the GPU through on the OVMF installation, I could no longer connect via RDP after starting the VM. It didn't show any errors, though.

But suddenly a new device appeared in the router and Windows got a new IP, which already surprised me. I could then connect with that, but a new Ethernet controller had been added?!

When I then tried to remove the graphics card from the config, the whole PVE host hung and I had to restart it completely.

In the QEMU monitor, "info pci" also detected the graphics card, but Windows itself did not.
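
That was from the QEMU monitor on the host, roughly:

Code:
qm monitor 100
qm> info pci
qm> quit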

I'm currently installing a fresh Windows and trying to reproduce the problem.
What puzzles me is that the whole host hung when I changed the config.

My config:


balloon: 3072
bios: ovmf
bootdisk: virtio0
cores: 2
efidisk0: local-zfs:vm-100-disk-2,size=128K
ide0: local:iso/virtio-win-0.1.141.iso,media=cdrom,size=309208K
memory: 4096
name: Win10
net0: virtio=7E:6C:C9:DF:77:54,bridge=vmbr0
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=15851f33-421b-4b3e-91fb-f84b8c0a1c1d
sockets: 1
vcpus: 2
virtio0: local-zfs:vm-100-disk-1,cache=writeback,size=32G
hostpci0: 01:00,x-vga=on
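
The hostpci0 line corresponds to roughly this on the host shell:

Code:
qm set 100 --hostpci0 01:00,x-vga=on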



I assume you'll need more information, but I don't know what exactly, so please tell me what to post.

Regards

Edit: Hardware: Supermicro X11SLL-F, Xeon E3-1220 v6, 16 GB ECC RAM, on 2 SSDs as ZFS RAID1

EDIT2:

With the new machine, nothing happens at all after changing the config. The VM does boot, but I can't do anything and don't get a "network connection" either; the PCI device is detected, though. Shutting down ran into a timeout and I had to stop the machine via the shell.

EDIT3:

Changed the config once more: machine: q35 and hostpci0: 01:00,pcie=1,x-vga=on
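In shell terms, roughly:

Code:
qm set 100 --machine q35
qm set 100 --hostpci0 01:00,pcie=1,x-vga=on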

The VM boots, but again gets a new network adapter and a new IP; the old one is no longer shown (Ethernet 2). It still doesn't find a graphics card, though.

I passed the graphics card through to a Win7 VM as a test; that works without problems. That one is also SeaBIOS.
 
What does
Code:
lspci -nnk
say?
 
The vfio drivers were shown there as well.

After testing it successfully on Win7, I simply tried the whole thing again on the Win10 VM, and it worked. At least it booted, and this time I could install the GPU drivers.
After finishing, I clicked "restart now" and it rebooted.

Then I ran lspci -nnk, and shortly afterwards the host died again. (I think that's more likely down to the VM, but why does PVE crash then?)

Via IPMI it shows the login prompt normally, but I can't do anything.

Here is the output of lspci -nnk:

00:00.0 Host bridge [0600]: Intel Corporation Device [8086:5918] (rev 05)
Subsystem: Super Micro Computer Inc Device [15d9:089a]
Kernel driver in use: ie31200_edac
Kernel modules: ie31200_edac
00:01.0 PCI bridge [0604]: Intel Corporation Skylake PCIe Controller (x16) [8086:1901] (rev 05)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:13.0 Non-VGA unclassified device [0000]: Intel Corporation Sunrise Point-H Integrated Sensor Hub [8086:a135] (rev 31)
Subsystem: Super Micro Computer Inc Sunrise Point-H Integrated Sensor Hub [15d9:089a]
00:14.0 USB controller [0c03]: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller [8086:a12f] (rev 31)
Subsystem: Super Micro Computer Inc Sunrise Point-H USB 3.0 xHCI Controller [15d9:089a]
Kernel driver in use: xhci_hcd
00:14.2 Signal processing controller [1180]: Intel Corporation Sunrise Point-H Thermal subsystem [8086:a131] (rev 31)
Subsystem: Super Micro Computer Inc Sunrise Point-H Thermal subsystem [15d9:089a]
Kernel driver in use: intel_pch_thermal
Kernel modules: intel_pch_thermal
00:16.0 Communication controller [0780]: Intel Corporation Sunrise Point-H CSME HECI #1 [8086:a13a] (rev 31)
Subsystem: Super Micro Computer Inc Sunrise Point-H CSME HECI [15d9:089a]
Kernel modules: mei_me
00:17.0 SATA controller [0106]: Intel Corporation Sunrise Point-H SATA controller [AHCI mode] [8086:a102] (rev 31)
Subsystem: Super Micro Computer Inc Sunrise Point-H SATA controller [AHCI mode] [15d9:089a]
Kernel driver in use: ahci
Kernel modules: ahci
00:1d.0 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #9 [8086:a118] (rev f1)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:1d.1 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #10 [8086:a119] (rev f1)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:1d.2 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #11 [8086:a11a] (rev f1)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:1f.0 ISA bridge [0601]: Intel Corporation Sunrise Point-H LPC Controller [8086:a14a] (rev 31)
Subsystem: Super Micro Computer Inc Sunrise Point-H LPC Controller [15d9:089a]
00:1f.2 Memory controller [0580]: Intel Corporation Sunrise Point-H PMC [8086:a121] (rev 31)
Subsystem: Super Micro Computer Inc Sunrise Point-H PMC [15d9:089a]
00:1f.4 SMBus [0c05]: Intel Corporation Sunrise Point-H SMBus [8086:a123] (rev 31)
Subsystem: Super Micro Computer Inc Sunrise Point-H SMBus [15d9:089a]
Kernel driver in use: i801_smbus
Kernel modules: i2c_i801
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK208 [GeForce GT 730] [10de:1287] (rev a1)
Subsystem: Micro-Star International Co., Ltd. [MSI] GK208 [GeForce GT 730] [1462:8a9f]
Kernel driver in use: vfio-pci
Kernel modules: nvidiafb, nouveau
01:00.1 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)
Subsystem: Micro-Star International Co., Ltd. [MSI] GK208 HDMI/DP Audio Controller [1462:8a9f]
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
02:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [8086:1533] (rev 03)
Subsystem: Super Micro Computer Inc I210 Gigabit Network Connection [15d9:1533]
Kernel driver in use: igb
Kernel modules: igb
03:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [8086:1533] (rev 03)
Subsystem: Super Micro Computer Inc I210 Gigabit Network Connection [15d9:1533]
Kernel driver in use: igb
Kernel modules: igb
04:00.0 PCI bridge [0604]: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge [1a03:1150] (rev 03)
Kernel modules: shpchp
05:00.0 VGA compatible controller [0300]: ASPEED Technology, Inc. ASPEED Graphics Family [1a03:2000] (rev 30)
Subsystem: Super Micro Computer Inc ASPEED Graphics Family [15d9:089a]
Kernel driver in use: ast
Kernel modules: ast


Edit

So, booting has worked without problems so far, but Device Manager shows a Code 43 for the graphics card.

Edit:

I found https://wiki.archlinux.org/index.ph..._load.22_on_Nvidia_GPUs_passed_to_Windows_VMs on this.

What exactly does that mean, and where do I add it? The vendor IDs are, I think, the same IDs as in vfio.conf (10de:1287,10de:0e0f).
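
For reference, the line in /etc/modprobe.d/vfio.conf with those IDs would be:

Code:
options vfio-pci ids=10de:1287,10de:0e0f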
 
Thanks for your answer. I've already added both of those. But the Code 43 seems to come from the virtualization itself (Nvidia's driver); many people apparently change the "hv_vendor_id". https://forum.proxmox.com/threads/c...10-vm-geforce-750-ti-passthrough.23746/page-2 I tried that as well, but then I get an error at boot; the versions also seem to differ.
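
What I tried was an args line along these lines (the vendor string itself is an arbitrary placeholder):

Code:
args: -cpu host,kvm=off,hv_vendor_id=whatever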

Oct 02 09:35:55 pve systemd[1]: Started Postfix Mail Transport Agent.
Oct 02 09:35:55 pve iscsid[2396]: iSCSI daemon with pid=2404 started!
Oct 02 09:35:56 pve systemd[1]: Started The Proxmox VE cluster filesystem.
Oct 02 09:35:56 pve systemd[1]: Starting PVE Status Daemon...
Oct 02 09:35:56 pve systemd[1]: Started Regular background program processing daemon.
Oct 02 09:35:56 pve systemd[1]: Starting Proxmox VE firewall...
Oct 02 09:35:56 pve systemd[1]: Starting PVE API Daemon...
Oct 02 09:35:56 pve cron[2980]: (CRON) INFO (pidfile fd = 3)
Oct 02 09:35:56 pve cron[2980]: (CRON) INFO (Running @reboot jobs)
Oct 02 09:35:56 pve pve-firewall[3053]: starting server
Oct 02 09:35:56 pve systemd[1]: Started Proxmox VE firewall.
Oct 02 09:35:56 pve pvestatd[3074]: starting server
Oct 02 09:35:56 pve kernel: ip6_tables: (C) 2000-2006 Netfilter Core Team
Oct 02 09:35:56 pve systemd[1]: Started PVE Status Daemon.
Oct 02 09:35:56 pve kernel: ip_set: protocol 6
Oct 02 09:35:56 pve pvedaemon[3115]: starting server
Oct 02 09:35:56 pve pvedaemon[3115]: starting 3 worker(s)
Oct 02 09:35:56 pve pvedaemon[3115]: worker 3118 started
Oct 02 09:35:56 pve pvedaemon[3115]: worker 3119 started
Oct 02 09:35:56 pve pvedaemon[3115]: worker 3121 started
Oct 02 09:35:56 pve systemd[1]: Started PVE API Daemon.
Oct 02 09:35:56 pve systemd[1]: Starting PVE Cluster Ressource Manager Daemon...
Oct 02 09:35:56 pve systemd[1]: Starting PVE API Proxy Server...
Oct 02 09:35:57 pve pve-ha-crm[3144]: starting server
Oct 02 09:35:57 pve pve-ha-crm[3144]: status change startup => wait_for_quorum
Oct 02 09:35:57 pve systemd[1]: Started PVE Cluster Ressource Manager Daemon.
Oct 02 09:35:57 pve systemd[1]: Starting PVE Local HA Ressource Manager Daemon...
Oct 02 09:35:57 pve pveproxy[3193]: starting server
Oct 02 09:35:57 pve pveproxy[3193]: starting 3 worker(s)
Oct 02 09:35:57 pve pveproxy[3193]: worker 3196 started
Oct 02 09:35:57 pve pveproxy[3193]: worker 3197 started
Oct 02 09:35:57 pve pveproxy[3193]: worker 3198 started
Oct 02 09:35:57 pve systemd[1]: Started PVE API Proxy Server.
Oct 02 09:35:57 pve systemd[1]: Starting PVE SPICE Proxy Server...
Oct 02 09:35:57 pve pve-ha-lrm[3217]: starting server
Oct 02 09:35:57 pve pve-ha-lrm[3217]: status change startup => wait_for_agent_lock
Oct 02 09:35:57 pve systemd[1]: Started PVE Local HA Ressource Manager Daemon.
Oct 02 09:35:57 pve kernel: igb 0000:02:00.0 eno1: igb: eno1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Oct 02 09:35:57 pve spiceproxy[3222]: starting server
Oct 02 09:35:57 pve spiceproxy[3222]: starting 1 worker(s)
Oct 02 09:35:57 pve spiceproxy[3222]: worker 3225 started
Oct 02 09:35:57 pve systemd[1]: Started PVE SPICE Proxy Server.
Oct 02 09:35:57 pve systemd[1]: Starting PVE guests...
Oct 02 09:35:57 pve kernel: vmbr0: port 1(eno1) entered blocking state
Oct 02 09:35:57 pve kernel: vmbr0: port 1(eno1) entered forwarding state
Oct 02 09:35:57 pve kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vmbr0: link becomes ready
Oct 02 09:35:58 pve pve-guests[3228]: <root@pam> starting task UPID:pve:00000CB4:0000035E:5BB31FDE:startall::root@pam:
Oct 02 09:35:58 pve pve-guests[3252]: <root@pam> starting task UPID:pve:00000CB9:0000035F:5BB31FDE:qmstart:100:root@pam:
Oct 02 09:35:58 pve pve-guests[3257]: start VM 100: UPID:pve:00000CB9:0000035F:5BB31FDE:qmstart:100:root@pam:
Oct 02 09:35:58 pve pvesh[3228]: Starting VM 100
Oct 02 09:35:58 pve pvesh[3228]: trying to acquire lock...
Oct 02 09:35:58 pve pvesh[3228]: OK
Oct 02 09:35:58 pve systemd[1]: Created slice qemu.slice.
Oct 02 09:35:58 pve systemd[1]: Started 100.scope.
Oct 02 09:35:58 pve systemd-udevd[3343]: Could not generate persistent MAC address for tap100i0: No such file or directory
Oct 02 09:35:58 pve kernel: device tap100i0 entered promiscuous mode
Oct 02 09:35:58 pve kernel: vmbr0: port 2(tap100i0) entered blocking state
Oct 02 09:35:58 pve kernel: vmbr0: port 2(tap100i0) entered disabled state
Oct 02 09:35:58 pve kernel: vmbr0: port 2(tap100i0) entered blocking state
Oct 02 09:35:58 pve kernel: vmbr0: port 2(tap100i0) entered forwarding state
Oct 02 09:36:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:36:00 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:36:03 pve pve-guests[3228]: <root@pam> end task UPID:pve:00000CB4:0000035E:5BB31FDE:startall::root@pam: OK
Oct 02 09:36:03 pve systemd[1]: Started PVE guests.
Oct 02 09:36:03 pve systemd[1]: Reached target Multi-User System.
Oct 02 09:36:03 pve systemd[1]: Reached target Graphical Interface.
Oct 02 09:36:03 pve systemd[1]: Starting Update UTMP about System Runlevel Changes...
Oct 02 09:36:03 pve systemd[1]: Started Update UTMP about System Runlevel Changes.
Oct 02 09:36:03 pve systemd[1]: Startup finished in 4.343s (kernel) + 9.330s (userspace) = 13.673s.
Oct 02 09:36:25 pve systemd-timesyncd[1904]: Synchronized to time server 148.251.68.100:123 (2.debian.pool.ntp.org).
Oct 02 09:37:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:37:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:38:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:38:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:39:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:39:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:39:10 pve pvedaemon[3119]: <root@pam> successful auth for user 'root@pam'
Oct 02 09:39:21 pve pvedaemon[3118]: <root@pam> starting task UPID:pve:00000859:000052A4:5BB320A9:vncproxy:100:root@pam:
Oct 02 09:39:21 pve pvedaemon[2137]: starting vnc proxy UPID:pve:00000859:000052A4:5BB320A9:vncproxy:100:root@pam:
Oct 02 09:39:21 pve qm[2149]: VM 100 qmp command failed - VM 100 qmp command 'change' failed - VNC display not active
Oct 02 09:39:21 pve pvedaemon[2137]: Failed to run vncproxy.
Oct 02 09:39:21 pve pvedaemon[3118]: <root@pam> end task UPID:pve:00000859:000052A4:5BB320A9:vncproxy:100:root@pam: Failed to run vncproxy.
Oct 02 09:39:25 pve pvedaemon[3118]: <root@pam> starting task UPID:pve:0000086F:00005452:5BB320AD:vncshell::root@pam:
Oct 02 09:39:25 pve pvedaemon[2159]: starting termproxy UPID:pve:0000086F:00005452:5BB320AD:vncshell::root@pam:
Oct 02 09:39:26 pve pvedaemon[3119]: <root@pam> successful auth for user 'root@pam'
Oct 02 09:39:26 pve login[2339]: pam_unix(login:session): session opened for user root by (uid=0)
Oct 02 09:39:26 pve systemd[1]: Created slice User Slice of root.
Oct 02 09:39:26 pve systemd[1]: Starting User Manager for UID 0...
Oct 02 09:39:26 pve systemd-logind[2169]: New session 1 of user root.
Oct 02 09:39:26 pve systemd[1]: Started Session 1 of user root.
Oct 02 09:39:26 pve systemd[2350]: pam_unix(systemd-user:session): session opened for user root by (uid=0)
Oct 02 09:39:26 pve systemd[2350]: Listening on GnuPG cryptographic agent and passphrase cache.
Oct 02 09:39:26 pve systemd[2350]: Reached target Paths.
Oct 02 09:39:26 pve systemd[2350]: Reached target Timers.
Oct 02 09:39:26 pve systemd[2350]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Oct 02 09:39:26 pve systemd[2350]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Oct 02 09:39:26 pve systemd[2350]: Listening on GnuPG cryptographic agent (access for web browsers).
Oct 02 09:39:26 pve systemd[2350]: Reached target Sockets.
Oct 02 09:39:26 pve systemd[2350]: Reached target Basic System.
Oct 02 09:39:26 pve systemd[2350]: Reached target Default.
Oct 02 09:39:26 pve systemd[2350]: Startup finished in 12ms.
Oct 02 09:39:26 pve systemd[1]: Started User Manager for UID 0.
Oct 02 09:39:26 pve login[2379]: ROOT LOGIN on '/dev/pts/0'
Oct 02 09:39:31 pve qm[2402]: <root@pam> starting task UPID:pve:000009FC:000056B0:5BB320B3:qmstop:100:root@pam:
Oct 02 09:39:31 pve qm[2556]: stop VM 100: UPID:pve:000009FC:000056B0:5BB320B3:qmstop:100:root@pam:
Oct 02 09:39:31 pve kernel: vmbr0: port 2(tap100i0) entered disabled state
Oct 02 09:39:32 pve qm[2402]: <root@pam> end task UPID:pve:000009FC:000056B0:5BB320B3:qmstop:100:root@pam: OK
Oct 02 09:40:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:40:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:41:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:41:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:42:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:42:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:43:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:43:00 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:44:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:44:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:45:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:45:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:46:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:46:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:47:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:47:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:47:59 pve pveproxy[3193]: worker 3198 finished
Oct 02 09:47:59 pve pveproxy[3193]: starting 1 worker(s)
Oct 02 09:47:59 pve pveproxy[3193]: worker 13161 started
Oct 02 09:48:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:48:00 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:48:02 pve pveproxy[13160]: got inotify poll request in wrong process - disabling inotify
Oct 02 09:48:08 pve pvedaemon[3119]: <root@pam> successful auth for user 'root@pam'
Oct 02 09:48:58 pve pvedaemon[3121]: vm 100 - unable to parse value of 'cpu' - format error
cputype: property is missing and it is not optional
Oct 02 09:49:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:49:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:49:02 pve pvedaemon[3118]: vm 100 - unable to parse value of 'cpu' - format error
cputype: property is missing and it is not optional
Oct 02 09:49:02 pve qm[14462]: <root@pam> starting task UPID:pve:0000387F:000135C3:5BB322EE:qmstart:100:root@pam:
Oct 02 09:49:02 pve qm[14463]: start VM 100: UPID:pve:0000387F:000135C3:5BB322EE:qmstart:100:root@pam:
Oct 02 09:49:02 pve systemd[1]: Started 100.scope.
Oct 02 09:49:02 pve systemd-udevd[14477]: Could not generate persistent MAC address for tap100i0: No such file or directory
Oct 02 09:49:03 pve kernel: device tap100i0 entered promiscuous mode
Oct 02 09:49:03 pve kernel: vmbr0: port 2(tap100i0) entered blocking state
Oct 02 09:49:03 pve kernel: vmbr0: port 2(tap100i0) entered disabled state
Oct 02 09:49:03 pve kernel: vmbr0: port 2(tap100i0) entered blocking state
Oct 02 09:49:03 pve kernel: vmbr0: port 2(tap100i0) entered forwarding state
Oct 02 09:49:04 pve qm[14462]: <root@pam> end task UPID:pve:0000387F:000135C3:5BB322EE:qmstart:100:root@pam: OK
Oct 02 09:49:06 pve pvestatd[3074]: vm 100 - unable to parse value of 'cpu' - format error
cputype: property is missing and it is not optional
Oct 02 09:49:19 pve pvedaemon[3119]: vm 100 - unable to parse value of 'cpu' - format error
cputype: property is missing and it is not optional
Oct 02 09:50:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:50:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:51:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:51:00 pve systemd[1]: Starting Cleanup of Temporary Directories...
Oct 02 09:51:00 pve systemd[1]: Started Cleanup of Temporary Directories.
Oct 02 09:51:00 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:52:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:52:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:53:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:53:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:54:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:54:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:54:10 pve pvedaemon[3118]: <root@pam> successful auth for user 'root@pam'
Oct 02 09:55:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:55:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:56:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:56:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:57:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:57:00 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:57:06 pve systemd-logind[2169]: Removed session 1.
Oct 02 09:57:06 pve systemd[1]: Stopping User Manager for UID 0...
Oct 02 09:57:06 pve systemd[2350]: Stopped target Default.
Oct 02 09:57:06 pve systemd[2350]: Stopped target Basic System.
Oct 02 09:57:06 pve systemd[2350]: Stopped target Timers.
Oct 02 09:57:06 pve systemd[2350]: Stopped target Sockets.
Oct 02 09:57:06 pve systemd[2350]: Closed GnuPG cryptographic agent and passphrase cache.
Oct 02 09:57:06 pve systemd[2350]: Closed GnuPG cryptographic agent (ssh-agent emulation).
Oct 02 09:57:06 pve systemd[2350]: Closed GnuPG cryptographic agent (access for web browsers).
Oct 02 09:57:06 pve systemd[2350]: Closed GnuPG cryptographic agent and passphrase cache (restricted).
Oct 02 09:57:06 pve systemd[2350]: Stopped target Paths.
Oct 02 09:57:06 pve systemd[2350]: Reached target Shutdown.
Oct 02 09:57:06 pve systemd[2350]: Starting Exit the Session...
Oct 02 09:57:06 pve pvedaemon[3118]: <root@pam> end task UPID:pve:0000086F:00005452:5BB320AD:vncshell::root@pam: OK
Oct 02 09:57:06 pve systemd[2350]: Received SIGRTMIN+24 from PID 20406 (kill).
Oct 02 09:57:06 pve systemd[2359]: pam_unix(systemd-user:session): session closed for user root
Oct 02 09:57:06 pve systemd[1]: Stopped User Manager for UID 0.
Oct 02 09:57:06 pve systemd[1]: Removed slice User Slice of root.
Oct 02 09:57:06 pve pveproxy[13160]: worker exit
Oct 02 09:58:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:58:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:59:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 09:59:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 09:59:34 pve pvedaemon[3118]: <root@pam> starting task UPID:pve:00000307:00022C74:5BB32566:vncproxy:100:root@pam:
Oct 02 09:59:34 pve pvedaemon[775]: starting vnc proxy UPID:pve:00000307:00022C74:5BB32566:vncproxy:100:root@pam:
Oct 02 09:59:34 pve pveproxy[13161]: vm 100 - unable to parse value of 'cpu' - format error
cputype: property is missing and it is not optional
Oct 02 09:59:34 pve qm[777]: VM 100 qmp command failed - VM 100 qmp command 'change' failed - VNC display not active
Oct 02 09:59:34 pve pvedaemon[775]: Failed to run vncproxy.
Oct 02 09:59:34 pve pvedaemon[3118]: <root@pam> end task UPID:pve:00000307:00022C74:5BB32566:vncproxy:100:root@pam: Failed to run vncproxy.
Oct 02 09:59:38 pve pvedaemon[3118]: <root@pam> starting task UPID:pve:00000EB2:00022E1D:5BB3256A:vncshell::root@pam:
Oct 02 09:59:38 pve pvedaemon[3762]: starting termproxy UPID:pve:00000EB2:00022E1D:5BB3256A:vncshell::root@pam:
Oct 02 09:59:39 pve pvedaemon[3121]: <root@pam> successful auth for user 'root@pam'
Oct 02 09:59:39 pve login[3858]: pam_unix(login:session): session opened for user root by root(uid=0)
Oct 02 09:59:39 pve systemd[1]: Created slice User Slice of root.
Oct 02 09:59:39 pve systemd[1]: Starting User Manager for UID 0...
Oct 02 09:59:39 pve systemd-logind[2169]: New session 3 of user root.
Oct 02 09:59:39 pve systemd[1]: Started Session 3 of user root.
Oct 02 09:59:39 pve systemd[3863]: pam_unix(systemd-user:session): session opened for user root by (uid=0)
Oct 02 09:59:39 pve systemd[3863]: Listening on GnuPG cryptographic agent (access for web browsers).
Oct 02 09:59:39 pve systemd[3863]: Listening on GnuPG cryptographic agent and passphrase cache.
Oct 02 09:59:39 pve systemd[3863]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Oct 02 09:59:39 pve systemd[3863]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Oct 02 09:59:39 pve systemd[3863]: Reached target Sockets.
Oct 02 09:59:39 pve systemd[3863]: Reached target Timers.
Oct 02 09:59:39 pve systemd[3863]: Reached target Paths.
Oct 02 09:59:39 pve systemd[3863]: Reached target Basic System.
Oct 02 09:59:39 pve systemd[3863]: Reached target Default.
Oct 02 09:59:39 pve systemd[3863]: Startup finished in 10ms.
Oct 02 09:59:39 pve systemd[1]: Started User Manager for UID 0.
Oct 02 09:59:39 pve login[3875]: ROOT LOGIN on '/dev/pts/0'
Oct 02 10:00:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 10:00:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 10:00:22 pve qm[10946]: <root@pam> starting task UPID:pve:00002AC3:00023F76:5BB32596:qmstop:100:root@pam:
Oct 02 10:00:22 pve qm[10947]: stop VM 100: UPID:pve:00002AC3:00023F76:5BB32596:qmstop:100:root@pam:
Oct 02 10:00:23 pve kernel: vmbr0: port 2(tap100i0) entered disabled state
Oct 02 10:00:23 pve qm[10946]: <root@pam> end task UPID:pve:00002AC3:00023F76:5BB32596:qmstop:100:root@pam: OK
Oct 02 10:00:30 pve qm[11262]: <root@pam> starting task UPID:pve:00002BFF:00024298:5BB3259E:qmstart:100:root@pam:
Oct 02 10:00:30 pve qm[11263]: start VM 100: UPID:pve:00002BFF:00024298:5BB3259E:qmstart:100:root@pam:
Oct 02 10:00:30 pve systemd[1]: Started 100.scope.
Oct 02 10:00:30 pve systemd-udevd[11277]: Could not generate persistent MAC address for tap100i0: No such file or directory
Oct 02 10:00:31 pve kernel: device tap100i0 entered promiscuous mode
Oct 02 10:00:31 pve kernel: vmbr0: port 2(tap100i0) entered blocking state
Oct 02 10:00:31 pve kernel: vmbr0: port 2(tap100i0) entered disabled state
Oct 02 10:00:31 pve kernel: vmbr0: port 2(tap100i0) entered blocking state
Oct 02 10:00:31 pve kernel: vmbr0: port 2(tap100i0) entered forwarding state
Oct 02 10:00:32 pve qm[11262]: <root@pam> end task UPID:pve:00002BFF:00024298:5BB3259E:qmstart:100:root@pam: OK
Oct 02 10:00:33 pve systemd-logind[2169]: Removed session 3.
Oct 02 10:00:33 pve systemd[1]: Stopping User Manager for UID 0...
Oct 02 10:00:33 pve systemd[3863]: Stopped target Default.
Oct 02 10:00:33 pve systemd[3863]: Stopped target Basic System.
Oct 02 10:00:33 pve systemd[3863]: Stopped target Paths.
Oct 02 10:00:33 pve systemd[3863]: Stopped target Timers.
Oct 02 10:00:33 pve systemd[3863]: Stopped target Sockets.
Oct 02 10:00:33 pve systemd[3863]: Closed GnuPG cryptographic agent (ssh-agent emulation).
Oct 02 10:00:33 pve systemd[3863]: Closed GnuPG cryptographic agent and passphrase cache (restricted).
Oct 02 10:00:33 pve systemd[3863]: Closed GnuPG cryptographic agent and passphrase cache.
Oct 02 10:00:33 pve systemd[3863]: Closed GnuPG cryptographic agent (access for web browsers).
Oct 02 10:00:33 pve systemd[3863]: Reached target Shutdown.
Oct 02 10:00:33 pve systemd[3863]: Starting Exit the Session...
Oct 02 10:00:33 pve systemd[3863]: Received SIGRTMIN+24 from PID 11411 (kill).
Oct 02 10:00:33 pve systemd[3864]: pam_unix(systemd-user:session): session closed for user root
Oct 02 10:00:33 pve systemd[1]: Stopped User Manager for UID 0.
Oct 02 10:00:33 pve systemd[1]: Removed slice User Slice of root.
Oct 02 10:00:33 pve pvedaemon[3118]: <root@pam> end task UPID:pve:00000EB2:00022E1D:5BB3256A:vncshell::root@pam: OK
Oct 02 10:01:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 10:01:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 10:02:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 10:02:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 10:03:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 02 10:03:01 pve systemd[1]: Started Proxmox VE replication runner.
Oct 02 10:03:09 pve pvedaemon[3118]: <root@pam> successful auth for user 'root@pam'

/sys/kernel/iommu_groups/7/devices/0000:00:1d.1
/sys/kernel/iommu_groups/5/devices/0000:00:17.0
/sys/kernel/iommu_groups/3/devices/0000:00:14.2
/sys/kernel/iommu_groups/3/devices/0000:00:14.0
/sys/kernel/iommu_groups/11/devices/0000:03:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/8/devices/0000:00:1d.2
/sys/kernel/iommu_groups/6/devices/0000:00:1d.0
/sys/kernel/iommu_groups/4/devices/0000:00:16.0
/sys/kernel/iommu_groups/12/devices/0000:05:00.0
/sys/kernel/iommu_groups/12/devices/0000:04:00.0
/sys/kernel/iommu_groups/2/devices/0000:00:13.0
/sys/kernel/iommu_groups/10/devices/0000:02:00.0
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/9/devices/0000:00:1f.4
/sys/kernel/iommu_groups/9/devices/0000:00:1f.2
/sys/kernel/iommu_groups/9/devices/0000:00:1f.0
I'd say yes.

And the graphics card does have a UEFI ROM; I verified that with rom-parser. It's a GT730.
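
The check went roughly like this:

Code:
cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom
cat rom > /tmp/image.rom
echo 0 > rom
rom-parser /tmp/image.rom   # a "type 3" entry means an EFI image is present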
 
Oct 02 09:48:58 pve pvedaemon[3121]: vm 100 - unable to parse value of 'cpu' - format error
cputype: property is missing and it is not optional

I would try setting the CPU to 'host'.
By the way, with 'qm showcmd ID --pretty'
you can display the QEMU command-line invocation (there you can see that the vendor_id is already set).
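
i.e. something like:

Code:
qm set 100 --cpu host
qm showcmd 100 --pretty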
 
I just tried that too; it doesn't work either.

Yes, right, it is set. Can it be changed, though? I read somewhere (it may only have been a guess) that NVIDIA also blocks the vendor ID "proxmox".

I also noticed that it isn't a GT730 at all but an MSI GT720, though Device Manager shows a GT 730.
 
