[SOLVED] PCIe passthrough failing on Proxmox 8.1

pengu1n

I searched for "kvm run failed Bad address" but had no luck, either on this forum or on the wider net.
I had an Ubuntu 22.04 VM with AMD GPU passthrough that worked on 7.4. I used the Christmas period to upgrade the host, ran into trouble, and in the end had to rebuild it from scratch. What changed: the new Proxmox version (8.1) and an additional NVMe disk.
The same VM - same settings except the PCI IDs - now either crashes after a few hours or fails to start.
Could someone please give me pointers as to what to try?
When the VM crashes, the error begins with:
Code:
"Jan 11 18:49:19 pve QEMU[2270]: error: kvm run failed Bad address"
Code:
root@pve:~# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-6.5.11-7-pve root=/dev/mapper/pve-root ro quiet
I have nothing added to GRUB, nothing in /etc/modules, and nothing added in /etc/modprobe.d/, i.e. no drivers blacklisted and no options passed for vfio or interrupt remapping. This might be where my mistake is: since I had to wipe the host, I'm going from memory, but I believe that on 7.4 I didn't have anything there either. It had "just worked".
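For reference, this is a sketch of what a manual vfio setup would look like per the Proxmox wiki - what I suspect I would need, not what I actually have (the IDs are my RX 6600 XT's, 1002:73ff for the GPU and 1002:ab28 for its audio function, taken from the pvesh listing further down):
Code:
# /etc/modules - load the vfio modules at boot
vfio
vfio_iommu_type1
vfio_pci

# /etc/modprobe.d/vfio.conf - bind the card to vfio-pci before amdgpu grabs it
options vfio-pci ids=1002:73ff,1002:ab28
softdep amdgpu pre: vfio-pci

# afterwards: update-initramfs -u -k all && reboot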
This is the config of the VM in question. I also have another VM on this host - an Ubuntu Server 22.04, headless, no GPU passthrough - that has no problems, so clearly I am doing something wrong with the GPU passthrough. The GPU is an AMD RX 6600 XT.
Motherboard: ASUS B550-F GAMING
CPU: AMD Ryzen 9 5900X
Code:
root@pve:~# cat /etc/pve/qemu-server/100.conf
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0
cores: 12
cpu: host
efidisk0: VMs1:100/vm-100-disk-0.qcow2,efitype=4m,size=528K
hostpci0: 0000:0b:00.0,pcie=1,rombar=0
machine: q35
memory: 16400
meta: creation-qemu=7.2.0,ctime=1703024114
name: ubuntu1
net0: virtio=22:7c:11:b8:c2:d7,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: VMs1:100/vm-100-disk-1.qcow2,iothread=1,size=90G
scsi1: /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R943526A,iothread=1,size=976762584K
scsi2: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N2823965,iothread=1,size=2930266584K
scsi3: /dev/disk/by-id/ata-Hitachi_HDP725050GLA360_GEA554RF2GX3SG,iothread=1,size=500106780160
scsi4: /dev/disk/by-id/nvme-INTEL_SSDPEKNU010TZ_BTKA1343082D1P0B,iothread=1,size=1000204632K
scsihw: virtio-scsi-single
smbios1: uuid=218d4a7c-0565-4d95-99aa-dff84488333a
sockets: 1
usb0: host=413c:301a
usb2: host=04b8:1181
usb3: host=04cf:0022
vmgenid: b7c82eb1-99eb-4b2e-b911-b5dce4063a78
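(Side note for anyone comparing configs: hostpci0 above passes only function .0 of the card. Proxmox also accepts the whole device, which pulls in all of its functions, the GPU's HDMI audio at 0b:00.1 included. A hypothetical variant, not necessarily the fix:)
Code:
# pass all functions of the device in one entry (GPU + its HDMI audio)
hostpci0: 0000:0b:00,pcie=1
# or keep them as separate entries:
# hostpci0: 0000:0b:00.0,pcie=1
# hostpci1: 0000:0b:00.1,pcie=1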
And the actual error shows:

Jan 11 18:49:19 pve QEMU[2270]: error: kvm run failed Bad address
Jan 11 18:49:19 pve QEMU[2270]: RAX=0000000000000000 RBX=ffff8ac949f80000 RCX=ffffaf344d40fc9f RDX=0000000000000000
Jan 11 18:49:19 pve QEMU[2270]: RSI=000000000000542e RDI=ffff8ac949f80000 RBP=ffffaf344d40fa90 RSP=ffffaf344d40fa78
Jan 11 18:49:19 pve QEMU[2270]: R8 =0000000000000001 R9 =0000000000000000 R10=0000000000000000 R11=0000000000000000
Jan 11 18:49:19 pve QEMU[2270]: R12=000000000000542e R13=ffffaf3441e150b8 R14=ffff8ac953c46820 R15=ffff8ac951466000
Jan 11 18:49:19 pve QEMU[2270]: RIP=ffffffffc1d68891 RFL=00000286 [--S--P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
Jan 11 18:49:19 pve QEMU[2270]: ES =0000 0000000000000000 00000000 00000000
Jan 11 18:49:19 pve QEMU[2270]: CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA]
Jan 11 18:49:19 pve QEMU[2270]: SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA]
Jan 11 18:49:19 pve QEMU[2270]: DS =0000 0000000000000000 00000000 00000000
Jan 11 18:49:19 pve QEMU[2270]: FS =0000 00007f563277ab00 00000000 00000000
Jan 11 18:49:19 pve QEMU[2270]: GS =0000 ffff8accb0800000 00000000 00000000
Jan 11 18:49:19 pve QEMU[2270]: LDT=0000 0000000000000000 00000000 00000000
Jan 11 18:49:19 pve QEMU[2270]: TR =0040 fffffe6e34f2b000 00004087 00008b00 DPL=0 TSS64-busy
Jan 11 18:49:19 pve QEMU[2270]: GDT= fffffe6e34f29000 0000007f
Jan 11 18:49:19 pve QEMU[2270]: IDT= fffffe0000000000 00000fff
Jan 11 18:49:19 pve QEMU[2270]: CR0=80050033 CR2=0000563249643198 CR3=000000016b9f6000 CR4=00750ef0
Jan 11 18:49:19 pve QEMU[2270]: DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
Jan 11 18:49:19 pve QEMU[2270]: DR6=00000000ffff0ff0 DR7=0000000000000400
Jan 11 18:49:19 pve QEMU[2270]: EFER=0000000000000d01
Jan 11 18:49:19 pve QEMU[2270]: Code=3b af b8 08 00 00 73 6d 83 e2 02 74 2b 4c 03 ab c0 08 00 00 <45> 8b 6d 00 48 8b 43 08 0f b7 70 3e 66 90 5b 44 89 e8 41 5c 41 5d 5d 31 d2 31 c9 31 f6 31
 
Additional information:
Code:
root@pve:~# pvesh get /nodes/pve/hardware/pci --pci-class-blacklist ""
┌──────────┬────────┬──────────────┬────────────┬────────┬────────────────────────────────────────────
│ class    │ device │ id           │ iommugroup │ vendor │ device_name                               
╞══════════╪════════╪══════════════╪════════════╪════════╪════════════════════════════════════════════
│ 0x010601 │ 0x43eb │ 0000:01:00.1 │         14 │ 0x1022 │ 500 Series Chipset SATA Controller         
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x010601 │ 0x7901 │ 0000:0e:00.0 │         24 │ 0x1022 │ FCH SATA Controller [AHCI mode]           
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x010802 │ 0xf1aa │ 0000:05:00.0 │         14 │ 0x8086 │                                           
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x020000 │ 0x15f3 │ 0000:08:00.0 │         14 │ 0x8086 │ Ethernet Controller I225-V                 
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x030000 │ 0x0641 │ 0000:03:00.0 │         14 │ 0x10de │ G96C [GeForce 9400 GT]                     
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x030000 │ 0x73ff │ 0000:0b:00.0 │         17 │ 0x1002 │ Navi 23 [Radeon RX 6600/6600 XT/6600M]     
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x040300 │ 0xab28 │ 0000:0b:00.1 │         18 │ 0x1002 │ Navi 21/23 HDMI/DP Audio Controller       
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x040300 │ 0x1487 │ 0000:0d:00.4 │         23 │ 0x1022 │ Starship/Matisse HD Audio Controller       
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1480 │ 0000:00:00.0 │         -1 │ 0x1022 │ Starship/Matisse Root Complex             
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1482 │ 0000:00:01.0 │          0 │ 0x1022 │ Starship/Matisse PCIe Dummy Host Bridge   
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1482 │ 0000:00:02.0 │          2 │ 0x1022 │ Starship/Matisse PCIe Dummy Host Bridge   
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1482 │ 0000:00:03.0 │          3 │ 0x1022 │ Starship/Matisse PCIe Dummy Host Bridge   
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1482 │ 0000:00:04.0 │          5 │ 0x1022 │ Starship/Matisse PCIe Dummy Host Bridge   
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1482 │ 0000:00:05.0 │          6 │ 0x1022 │ Starship/Matisse PCIe Dummy Host Bridge   
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1482 │ 0000:00:07.0 │          7 │ 0x1022 │ Starship/Matisse PCIe Dummy Host Bridge   
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1482 │ 0000:00:08.0 │          9 │ 0x1022 │ Starship/Matisse PCIe Dummy Host Bridge   
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1440 │ 0000:00:18.0 │         13 │ 0x1022 │ Matisse/Vermeer Data Fabric: Device 18h; Fu
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1441 │ 0000:00:18.1 │         13 │ 0x1022 │ Matisse/Vermeer Data Fabric: Device 18h; Fu
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1442 │ 0000:00:18.2 │         13 │ 0x1022 │ Matisse/Vermeer Data Fabric: Device 18h; Fu
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1443 │ 0000:00:18.3 │         13 │ 0x1022 │ Matisse/Vermeer Data Fabric: Device 18h; Fu
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1444 │ 0000:00:18.4 │         13 │ 0x1022 │ Matisse/Vermeer Data Fabric: Device 18h; Fu
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1445 │ 0000:00:18.5 │         13 │ 0x1022 │ Matisse/Vermeer Data Fabric: Device 18h; Fu
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1446 │ 0000:00:18.6 │         13 │ 0x1022 │ Matisse/Vermeer Data Fabric: Device 18h; Fu
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060000 │ 0x1447 │ 0000:00:18.7 │         13 │ 0x1022 │ Matisse/Vermeer Data Fabric: Device 18h; Fu
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060100 │ 0x790e │ 0000:00:14.3 │         12 │ 0x1022 │ FCH LPC Bridge                             
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x1483 │ 0000:00:01.2 │          1 │ 0x1022 │ Starship/Matisse GPP Bridge               
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x1483 │ 0000:00:03.1 │          4 │ 0x1022 │ Starship/Matisse GPP Bridge               
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x1484 │ 0000:00:07.1 │          8 │ 0x1022 │ Starship/Matisse Internal PCIe GPP Bridge 0
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x1484 │ 0000:00:08.1 │         10 │ 0x1022 │ Starship/Matisse Internal PCIe GPP Bridge 0
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x1484 │ 0000:00:08.3 │         11 │ 0x1022 │ Starship/Matisse Internal PCIe GPP Bridge 0
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x43e9 │ 0000:01:00.2 │         14 │ 0x1022 │ 500 Series Chipset Switch Upstream Port   
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x43ea │ 0000:02:00.0 │         14 │ 0x1022 │                                           
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x43ea │ 0000:02:01.0 │         14 │ 0x1022 │                                           
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x43ea │ 0000:02:02.0 │         14 │ 0x1022 │                                           
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x43ea │ 0000:02:03.0 │         14 │ 0x1022 │                                           
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x43ea │ 0000:02:08.0 │         14 │ 0x1022 │                                           
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x43ea │ 0000:02:09.0 │         14 │ 0x1022 │                                           
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x1478 │ 0000:09:00.0 │         15 │ 0x1002 │ Navi 10 XL Upstream Port of PCI Express Swi
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x060400 │ 0x1479 │ 0000:0a:00.0 │         16 │ 0x1002 │ Navi 10 XL Downstream Port of PCI Express S
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x080600 │ 0x1481 │ 0000:00:00.2 │         -1 │ 0x1022 │ Starship/Matisse IOMMU                     
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x0c0330 │ 0x43ee │ 0000:01:00.0 │         14 │ 0x1022 │ 500 Series Chipset USB 3.1 XHCI Controller
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x0c0330 │ 0x149c │ 0000:0d:00.3 │         22 │ 0x1022 │ Matisse USB 3.0 Host Controller           
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x0c0500 │ 0x790b │ 0000:00:14.0 │         12 │ 0x1022 │ FCH SMBus Controller                       
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x108000 │ 0x1486 │ 0000:0d:00.1 │         21 │ 0x1022 │ Starship/Matisse Cryptographic Coprocessor
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x130000 │ 0x148a │ 0000:0c:00.0 │         19 │ 0x1022 │ Starship/Matisse PCIe Dummy Function       
├──────────┼────────┼──────────────┼────────────┼────────┼────────────────────────────────────────────
│ 0x130000 │ 0x1485 │ 0000:0d:00.0 │         20 │ 0x1022 │ Starship/Matisse Reserved SPP             
└──────────┴────────┴──────────────┴────────────┴────────┴────────────────────────────────────────────
Code:
root@pve:~# dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
[    0.113674] AMD-Vi: Using global IVHD EFR:0x0, EFR2:0x0
[    0.849198] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.854622] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[    0.854625] AMD-Vi: Extended features (0x58f77ef22294a5a, 0x0): PPR NX GT IA PC GA_vAPIC
[    0.854633] AMD-Vi: Interrupt remapping enabled
[    0.854954] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[    6.879715] AMD-Vi: AMD IOMMUv2 loaded and initialized
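For completeness, the same grouping can be read straight from sysfs - the standard read-only snippet from the passthrough docs:
Code:
#!/bin/bash
# list every IOMMU group and the devices in it (read-only, safe to run)
for g in $(ls -d /sys/kernel/iommu_groups/* | sort -V); do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done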
 
Good day. I'm trying the different combinations of options that can be selected for PCI passthrough in the UI.
I should be clearer: the VM starts and runs for a few hours. Then the Proxmox UI shows "running (internal-error)" in the summary, and a warning-triangle icon indicates it has a problem. The errors quoted in this thread are from the node's syslog, also taken from the UI. The combinations tried so far (mapped to the config below):
1 - Only selecting "PCI-Express" - the VM fails this way.
2 - "PCI-Express" and "Primary GPU" - fails too.
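(For my own records, how those UI toggles land on the hostpci line in the config, as far as I can tell from the wiki:)
Code:
# "PCI-Express" -> pcie=1 (requires the q35 machine type, which I have)
# "Primary GPU" -> x-vga=1
# "ROM-Bar"     -> rombar=1 when ticked (the default), rombar=0 when unticked
hostpci0: 0000:0b:00.0,pcie=1,x-vga=1,rombar=0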
 
mhmm, config etc. looks ok AFAICS

it might be a kernel problem (though a rare one, if that)

there are a few reports on the internet about this:

https://bugs.launchpad.net/ubuntu/+source/linux-starfive-5.17/+bug/1985067
https://access.redhat.com/solutions/5624631
https://forums.unraid.net/topic/130342-gpu-passthrough-kvm-run-failed-bad-address/

in the last one, downgrading the in-guest driver version helped

though there is no clear answer...

can you post the whole journal from boot to shortly after the error?
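e.g. something like this on the node (adjust the time window so it covers the crash):
Code:
# everything since the current boot:
journalctl -b > journal.txt
# or a bounded window around the error:
journalctl --since "2024-01-11 18:00" --until "2024-01-11 19:00" > journal.txt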
 
Sure thing, and many thanks. Indeed there are very few hits on the web. I'll post the log when it happens again. For now I've started the VM with one change:
3 - "PCI-Express", "ROM-Bar" and "Primary GPU". I have of course been reading the wiki for what should be used, but I'm trying every combination. Have a great day!
 
Hi @dcsapak. I rebooted yesterday, Fri 12/01/2024, around 22:30. I exported the journal from 00:00, so there are messages prior to that point; I've left them in. The clean boot starts at "Jan 12 22:30:56 pve systemd-journald[552]: Journal stopped". Please find attached.
p.s. for my records, this run is:
3 - "PCI-Express", "ROM-Bar" enabled and "Primary GPU" - fails too.
I'm really hoping you or anyone else can spot a clue.
 

Hi. I found there were a few BIOS updates available for the host. I've applied the latest, but that hasn't made a difference: the VM still fails after some hours, and the syslog on the host still shows the same limited information.
One thing I can think of is to set up a remote syslog server to catch the syslog messages from the problematic guest VM. Maybe there are clues there.
I'll be back when I have something.
 
I think I have made progress. @dcsapak, I'd be grateful for your opinion, or that of anyone who has come across this.
The VM breaks as soon as /usr/bin/fwupdmgr refresh is invoked. Not run by me - it seems to be an Ubuntu feature.
The logs match exactly in time: the moment Proxmox stops being able to communicate with the VM is the moment the guest OS initiates a fwupdmgr action.
Now I'm going to find out how to disable that inside the VM - it's not needed on a VM anyway - and reboot. Strangely, it's the same OS version as when the VM ran on PVE 7.4.
Edit: rebooted the pve node, started the VM and disabled the fwupd service. Now I wait. Fingers crossed.
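The disable step I went for, roughly - unit names as shipped on Ubuntu 22.04, where fwupd-refresh.timer is what periodically runs fwupdmgr refresh:
Code:
# inside the guest
systemctl disable --now fwupd.service fwupd-refresh.timer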
 

Update: it failed again and, using the same method, I can see it was for the same reason. It seems I hadn't disabled the service properly. I'll do it properly now, after rebooting the node.
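This time, checking that nothing can re-trigger it - my understanding is that fwupd is D-Bus-activated, so on top of disabling the timer I'm masking the units as well:
Code:
# inside the guest: confirm no fwupd timer is still scheduled
systemctl list-timers --all | grep -i fwupd
# mask so timer/D-Bus activation can't bring the daemon back
systemctl mask fwupd.service fwupd-refresh.service fwupd-refresh.timer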
 
mhmm.. it is possible that fwupdmgr tries to communicate with the GPU in some way and that somehow triggers a bug in the kernel... but it's very possible that this is a hardware-specific thing and not a general issue, otherwise there would probably be many more reports about it..
 
yes, that's what this is looking like. Thanks for glancing at it. Once I've confirmed the workaround, I'll mark the thread as solved.
 
The VM has now been up without failures for over 24 hours. It seems that was the problem: fwupd was causing it.
The strange thing is that it's the same OS version - I reinstalled the OS from ISO after losing it in the Proxmox migration (luckily I always put /home on a separate partition or disk). Maybe the fresh installation sets up the fwupd timed refresh, while my previous install, being an upgrade, didn't have it. Marking as solved.
 
I'm likely affected by the exact same issue. The Ubuntu VM locks up after less than 24 hours. I've disabled the GPU passthrough for now, and if the problem does not occur again I'll try disabling the service you mentioned...

Code:
Jan 19 06:44:59 proxmox systemd[1]: man-db.service: Deactivated successfully.
Jan 19 06:44:59 proxmox systemd[1]: Finished man-db.service - Daily man-db regeneration.
Jan 19 06:56:31 proxmox QEMU[3126810]: error: kvm run failed Bad address
Jan 19 06:56:31 proxmox QEMU[3126810]: RAX=ffffaaca01000000 RBX=0000000000000000 RCX=0000000000000000 RDX=ffffffffffffffc0
Jan 19 06:56:31 proxmox QEMU[3126810]: RSI=ffff94190574f000 RDI=ffff94190574f000 RBP=ffffaaca01c83aa8 RSP=ffffaaca01c83a38
Jan 19 06:56:31 proxmox QEMU[3126810]: R8 =ffff94190574e000 R9 =0000000000000000 R10=0000000000000000 R11=0000000000006d5e
Jan 19 06:56:31 proxmox QEMU[3126810]: R12=0000000000000001 R13=00000005818a001b R14=0000000000000001 R15=ffff9415fec6a800
Jan 19 06:56:31 proxmox QEMU[3126810]: RIP=ffffffffc076207e RFL=00010246 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
Jan 19 06:56:31 proxmox QEMU[3126810]: ES =0000 0000000000000000 ffffffff 00c00000
Jan 19 06:56:31 proxmox QEMU[3126810]: CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA]
Jan 19 06:56:31 proxmox QEMU[3126810]: SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS   [-WA]
Jan 19 06:56:31 proxmox QEMU[3126810]: DS =0000 0000000000000000 ffffffff 00c00000
Jan 19 06:56:31 proxmox QEMU[3126810]: FS =0000 00007f4344721b38 ffffffff 00c00000
Jan 19 06:56:31 proxmox QEMU[3126810]: GS =0000 ffff941a8dac0000 ffffffff 00c00000
Jan 19 06:56:31 proxmox QEMU[3126810]: LDT=0000 0000000000000000 ffffffff 00c00000
Jan 19 06:56:31 proxmox QEMU[3126810]: TR =0040 fffffe00000b4000 00004087 00008b00 DPL=0 TSS64-busy
Jan 19 06:56:31 proxmox QEMU[3126810]: GDT=     fffffe00000b2000 0000007f
Jan 19 06:56:31 proxmox QEMU[3126810]: IDT=     fffffe0000000000 00000fff
Jan 19 06:56:31 proxmox QEMU[3126810]: CR0=80050033 CR2=00007f43531a1000 CR3=000000010f8b8000 CR4=00750ee0
Jan 19 06:56:31 proxmox QEMU[3126810]: DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
Jan 19 06:56:31 proxmox QEMU[3126810]: DR6=00000000ffff0ff0 DR7=0000000000000400
Jan 19 06:56:31 proxmox QEMU[3126810]: EFER=0000000000000d01
Jan 19 06:56:31 proxmox QEMU[3126810]: Code=ee e8 20 3c 02 00 49 8b 87 38 01 00 00 48 8b 80 78 08 00 00 <89> 98 84 f0 04 00 41 0f b6 87 61 03 00 00 41 83 c6 01 41 39 c6 0f 8f ce f7 ff ff 45 8d 66
Jan 19 07:17:01 proxmox CRON[3267760]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jan 19 07:17:01 proxmox CRON[3267761]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
 
