New to Proxmox... set up a few VMs on a new box and they are all hanging.

d3vnull

New Member
Jul 14, 2021
Debian is installed on all 4 VMs.

One machine, for instance, is currently trying to download the Bitcoin blockchain.

After about an hour or so, I get the following error:

The same thing happens on the other machines downloading other blockchains...

I've tried adjusting vm.dirty_ratio and vm.dirty_background_ratio with no effect... I'm not sure what is going on here.
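For reference, this is how I've been checking and changing those knobs (the exact values below are just what I experimented with, not a recommendation):

```shell
# Current writeback thresholds, as a percentage of RAM:
cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio

# What I tried (run as root; drop a file in /etc/sysctl.d/ to persist):
#   sysctl -w vm.dirty_ratio=10
#   sysctl -w vm.dirty_background_ratio=5
```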

Also, if I run hdparm -I /dev/sda, I get the following: SG_IO: bad/missing sense data. Is this normal? These drives are in a ZFS2 array of 10x 4TB WD Red Pros.

Trying to figure out my next step, and I am lost. See attached images.
 

Attachments

  • 2021_07_16_proxmox_ticket2.JPG (43.1 KB)
  • 2021_07_16_proxmox_ticket1.JPG (230.8 KB)
A little more about my setup:

Mainboard: Gigabyte x399 Aorus Extreme
CPU: Threadripper 2990WX
Ram: G.SKILL Ripjaws V Series 128GB (8 x 16GB) 288-Pin DDR4 SDRAM DDR4 3200 (PC4 25600)
Boot Drive & Fast VMs: 3x 1TB Samsung 970 EVO in ZFS1
Storage & Slower VMs: 10x 4TB Western Digital Red Pro in ZFS2
 
How are you using a 4.x kernel? Shouldn't you be using 5.11 with Threadripper?
I installed Debian 10.10.0, which is the stable version of Debian and the version of Linux I am most familiar with, onto the VMs. I am not aware of Threadripper not working with this OS/kernel, and I can't seem to find anything pointing that way on Google.
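In case it helps clarify: the guest kernel and the host kernel are independent, so a 4.19.x kernel inside a Debian 10 VM is expected even when the Proxmox host itself runs the 5.11 series. A quick way to sanity-check both (nothing here is specific to my setup):

```shell
# On the Proxmox host: prints the PVE kernel, e.g. a 5.11.x-pve build
uname -r

# Inside a Debian 10 guest, the same command prints the guest's own
# kernel, which is 4.19.x on stock buster -- seeing 4.x there is normal.
```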
 
Attached is the syslog since I built the host yesterday... the last time this occurred was about 3-4 hours ago.
 


Hi, there are a lot of errors on the NVMe drives. Can you maybe download HDD Sentinel and execute it on the host? It's really good at parsing errors.
 
Hi, there are a lot of errors on the NVMe drives. Can you maybe download HDD Sentinel and execute it on the host? It's really good at parsing errors.
Code:
Hard Disk Sentinel for LINUX console 0.19c.9986 (c) 2021 info@hdsentinel.com
Start with -r [reportfile] to save data to report, -h for help

Examining hard disk configuration ...

HDD Device  0: /dev/nvme0
HDD Model ID : Samsung SSD 970 EVO 1TB
HDD Serial No: S467NX0M836342H
HDD Revision : 2B2QEXE7
HDD Size     : 953869 MB
Interface    : NVMe
Temperature  : 44 °C
Highest Temp.: 44 °C
Health       : 98 %
Performance  : 100 %
Power on time: 543 days, 8 hours
Est. lifetime: more than 1000 days
Total written: 32.60 TB
  The status of the solid state disk is PERFECT. Problematic or weak sectors were not found.
  The health is determined by SSD specific S.M.A.R.T. attribute(s):  Available Spare (Percent), Percentage Used
    No actions needed.

HDD Device  1: /dev/nvme1
HDD Model ID : Samsung SSD 970 EVO 1TB
HDD Serial No: S467NX0M840215F
HDD Revision : 2B2QEXE7
HDD Size     : 953869 MB
Interface    : NVMe
Temperature  : 42 °C
Highest Temp.: 42 °C
Health       : 99 %
Performance  : 100 %
Power on time: 510 days, 16 hours
Est. lifetime: more than 1000 days
Total written: 28.73 TB
  The status of the solid state disk is PERFECT. Problematic or weak sectors were not found.
  The health is determined by SSD specific S.M.A.R.T. attribute(s):  Available Spare (Percent), Percentage Used
    No actions needed.

HDD Device  2: /dev/nvme2
HDD Model ID : Samsung SSD 970 EVO 1TB
HDD Serial No: S467NX0M711169T
HDD Revision : 2B2QEXE7
HDD Size     : 953869 MB
Interface    : NVMe
Temperature  : 46 °C
Highest Temp.: 46 °C
Health       : 98 %
Performance  : 100 %
Power on time: 509 days, 17 hours
Est. lifetime: more than 1000 days
Total written: 42.34 TB
  The status of the solid state disk is PERFECT. Problematic or weak sectors were not found.
  The health is determined by SSD specific S.M.A.R.T. attribute(s):  Available Spare (Percent), Percentage Used
    No actions needed.

HDD Device  3: /dev/sda
HDD Model ID : WDC WD4003FFBX-68MU3N0
HDD Serial No: V1JYPJBG
HDD Revision : 83.00A83
HDD Size     : 3815448 MB
Interface    : S-ATA Gen3, 6 Gbps
Temperature  : 38 °C
Highest Temp.: 42 °C
Health       : 100 %
Performance  : 100 %
Power on time: 2 days, 17 hours
Est. lifetime: more than 1000 days
  The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
    No actions needed.

HDD Device  4: /dev/sdb
HDD Model ID : WDC WD4003FFBX-68MU3N0
HDD Serial No: V1JYRK5G
HDD Revision : 83.00A83
HDD Size     : 3815448 MB
Interface    : S-ATA Gen3, 6 Gbps
Temperature  : 39 °C
Highest Temp.: 43 °C
Health       : 100 %
Performance  : 100 %
Power on time: 2 days, 21 hours
Est. lifetime: more than 1000 days
  The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
    No actions needed.

HDD Device  5: /dev/sdc
HDD Model ID : WDC WD4003FFBX-68MU3N0
HDD Serial No: V1JYRKKG
HDD Revision : 83.00A83
HDD Size     : 3815448 MB
Interface    : S-ATA Gen3, 6 Gbps
Temperature  : 37 °C
Highest Temp.: 40 °C
Health       : 100 %
Performance  : 100 %
Power on time: 2 days, 21 hours
Est. lifetime: more than 1000 days
  The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
    No actions needed.

HDD Device  6: /dev/sdd
HDD Model ID : WDC WD4003FFBX-68MU3N0
HDD Serial No: V1JYRKSG
HDD Revision : 83.00A83
HDD Size     : 3815448 MB
Interface    : S-ATA Gen3, 6 Gbps
Temperature  : 36 °C
Highest Temp.: 40 °C
Health       : 100 %
Performance  : 100 %
Power on time: 2 days, 21 hours
Est. lifetime: more than 1000 days
  The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
    No actions needed.

HDD Device  7: /dev/sde
HDD Model ID : WDC WD4003FFBX-68MU3N0
HDD Serial No: V1JYSUJG
HDD Revision : 83.00A83
HDD Size     : 3815448 MB
Interface    : S-ATA Gen3, 6 Gbps
Temperature  : 36 °C
Highest Temp.: 39 °C
Health       : 100 %
Performance  : 100 %
Power on time: 2 days, 16 hours
Est. lifetime: more than 1000 days
  The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
    No actions needed.

HDD Device  8: /dev/sdf
HDD Model ID : WDC WD4003FFBX-68MU3N0
HDD Serial No: V1JYMUEG
HDD Revision : 83.00A83
HDD Size     : 3815448 MB
Interface    : S-ATA Gen3, 6 Gbps
Temperature  : 37 °C
Highest Temp.: 41 °C
Health       : 100 %
Performance  : 100 %
Power on time: 2 days, 21 hours
Est. lifetime: more than 1000 days
  The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
    No actions needed.

HDD Device  9: /dev/sdg
HDD Model ID : WDC WD4003FFBX-68MU3N0
HDD Serial No: V1JYPHNG
HDD Revision : 83.00A83
HDD Size     : 3815448 MB
Interface    : S-ATA Gen3, 6 Gbps
Temperature  : 48 °C
Highest Temp.: 51 °C
Health       : 100 %
Performance  : 100 %
Power on time: 2 days, 21 hours
Est. lifetime: more than 1000 days
  The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
    No actions needed.

HDD Device 10: /dev/sdh
HDD Model ID : WDC WD4003FFBX-68MU3N0
HDD Serial No: V1JYMUDG
HDD Revision : 83.00A83
HDD Size     : 3815448 MB
Interface    : S-ATA Gen3, 6 Gbps
Temperature  : 39 °C
Highest Temp.: 43 °C
Health       : 100 %
Performance  : 100 %
Power on time: 2 days, 21 hours
Est. lifetime: more than 1000 days
  The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
    No actions needed.

HDD Device 11: /dev/sdi
HDD Model ID : WDC WD4003FFBX-68MU3N0
HDD Serial No: V1JXTGXK
HDD Revision : 83.00A83
HDD Size     : 3815448 MB
Interface    : S-ATA Gen3, 6 Gbps
Temperature  : 46 °C
Highest Temp.: 49 °C
Health       : 100 %
Performance  : 100 %
Power on time: 2 days, 21 hours
Est. lifetime: more than 1000 days
  The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
    No actions needed.

HDD Device 12: /dev/sdj
HDD Model ID : WDC WD4003FFBX-68MU3N0
HDD Serial No: V1JYRKTG
HDD Revision : 83.00A83
HDD Size     : 3815448 MB
Interface    : S-ATA Gen3, 6 Gbps
Temperature  : 44 °C
Highest Temp.: 48 °C
Health       : 100 %
Performance  : 100 %
Power on time: 2 days, 21 hours
Est. lifetime: more than 1000 days
  The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
    No actions needed.
 
Hi, there are a lot of errors on the NVMe drives. Can you maybe download HDD Sentinel and execute it on the host? It's really good at parsing errors.
I notice the uncorrectable I/O failure in the syslog here... This is with the spinning drives, though. MATURIN_STORAGE is the 10-drive ZFS2 array.

Code:
Jul 16 00:17:47 maturin kernel: [ 8451.675973] zio pool=MATURIN_STORAGE vdev=/dev/disk/by-id/ata-WDC_WD4003FFBX-68MU3N0_V1JXTGXK-part1 error=5 type=2 offset=271782375424 size=16384 flags=180880
Jul 16 00:17:47 maturin kernel: [ 8451.675985] zio pool=MATURIN_STORAGE vdev=/dev/disk/by-id/ata-WDC_WD4003FFBX-68MU3N0_V1JXTGXK-part1 error=5 type=1 offset=270336 size=8192 flags=b08c1
Jul 16 00:17:47 maturin kernel: [ 8451.675987] zio pool=MATURIN_STORAGE vdev=/dev/disk/by-id/ata-WDC_WD4003FFBX-68MU3N0_V1JXTGXK-part1 error=5 type=2 offset=271782395904 size=16384 flags=180880
Jul 16 00:17:47 maturin kernel: [ 8451.675990] zio pool=MATURIN_STORAGE vdev=/dev/disk/by-id/ata-WDC_WD4003FFBX-68MU3N0_V1JXTGXK-part1 error=5 type=1 offset=137857998848 size=53248 flags=40080c80
Jul 16 00:17:47 maturin kernel: [ 8451.675994] zio pool=MATURIN_STORAGE vdev=/dev/disk/by-id/ata-WDC_WD4003FFBX-68MU3N0_V1JXTGXK-part1 error=5 type=2 offset=271782412288 size=16384 flags=180880
Jul 16 00:17:47 maturin kernel: [ 8451.676002] zio pool=MATURIN_STORAGE vdev=/dev/disk/by-id/ata-WDC_WD4003FFBX-68MU3N0_V1JXTGXK-part1 error=5 type=2 offset=271782428672 size=16384 flags=180880
Jul 16 00:17:47 maturin kernel: [ 8451.676010] zio pool=MATURIN_STORAGE vdev=/dev/disk/by-id/ata-WDC_WD4003FFBX-68MU3N0_V1JXTGXK-part1 error=5 type=2 offset=271782445056 size=16384 flags=180880
Jul 16 00:17:47 maturin kernel: [ 8451.679082] WARNING: Pool 'MATURIN_STORAGE' has encountered an uncorrectable I/O failure and has been suspended.
Jul 16 00:17:47 maturin kernel: [ 8451.679082]
Jul 16 00:17:47 maturin kernel: [ 8451.680996] zio pool=MATURIN_STORAGE vdev=/dev/disk/by-id/ata-WDC_WD4003FFBX-68MU3N0_V1JXTGXK-part1 error=5 type=2 offset=268381929472 size=16384 flags=180880
Jul 16 00:17:47 maturin kernel: [ 8451.681024] zio pool=MATURIN_STORAGE vdev=/dev/disk/by-id/ata-WDC_WD4003FFBX-68MU3N0_V1JXTGXK-part1 error=5 type=2 offset=271783514112 size=704512 flags=40080c80
Jul 16 00:17:47 maturin kernel: [ 8451.681062] zio pool=MATURIN_STORAGE vdev=/dev/disk/by-id/ata-WDC_WD4003FFBX-68MU3N0_V1JXTGXK-part1 error=5 type=2 offset=163600596992 size=8192 flags=40080c80
Jul 16 00:17:47 maturin kernel: [ 8451.681072] zio pool=MATURIN_STORAGE vdev=/dev/disk/by-id/ata-WDC_WD4003FFBX-68MU3N0_V1JXTGXK-part1 error=5 type=2 offset=163600609280 size=155648 flags=40080c80
Jul 16 00:17:47 maturin kernel: [ 8451.681630] zio pool=MATURIN_STORAGE vdev=/dev/disk/by-id/ata-WDC_WD4003FFBX-68MU3N0_V1JXTGXK-part1 error=5 type=2 offset=271782461440 size=1052672 flags=40080c80
Jul 16 00:17:47 maturin kernel: [ 8451.706912] WARNING: Pool 'MATURIN_STORAGE' has encountered an uncorrectable I/O failure and has been suspended.
Jul 16 00:17:47 maturin kernel: [ 8451.706912]
Jul 16 00:17:47 maturin kernel: [ 8451.727141] WARNING: Pool 'MATURIN_STORAGE' has encountered an uncorrectable I/O failure and has been suspended.
Jul 16 00:17:47 maturin kernel: [ 8451.727141]
Jul 16 00:17:47 maturin kernel: [ 8451.729652] WARNING: Pool 'MATURIN_STORAGE' has encountered an uncorrectable I/O failure and has been suspended.
Jul 16 00:17:47 maturin kernel: [ 8451.729652]
Jul 16 00:17:47 maturin kernel: [ 8451.732240] WARNING: Pool 'MATURIN_STORAGE' has encountered an uncorrectable I/O failure and has been suspended.
Jul 16 00:17:47 maturin kernel: [ 8451.732240]
Jul 16 00:17:47 maturin zed: eid=26 class=io pool='MATURIN_STORAGE' vdev=ata-WDC_WD4003FFBX-68MU3N0_V1JXTGXK-part1 size=8192 offset=4000776200192 priority=0 err=5 flags=0xb08c1
Jul 16 00:17:47 maturin zed[51819]: Missed 1 events
Jul 16 00:17:47 maturin kernel: [ 8451.734862] WARNING: Pool 'MATURIN_STORAGE' has encountered an uncorrectable I/O failure and has been suspended.
Jul 16 00:17:47 maturin kernel: [ 8451.734862]
Jul 16 00:17:47 maturin kernel: [ 8451.737379] WARNING: Pool 'MATURIN_STORAGE' has encountered an uncorrectable I/O failure and has been suspended.
Jul 16 00:17:47 maturin kernel: [ 8451.737379]
Jul 16 00:17:47 maturin kernel: [ 8451.740009] WARNING: Pool 'MATURIN_STORAGE' has encountered an uncorrectable I/O failure and has been suspended.
Jul 16 00:17:47 maturin kernel: [ 8451.740009]
Jul 16 00:17:47 maturin kernel: [ 8451.742501] WARNING: Pool 'MATURIN_STORAGE' has encountered an uncorrectable I/O failure and has been suspended.
Jul 16 00:17:47 maturin kernel: [ 8451.742501]
Jul 16 00:17:47 maturin kernel: [ 8451.745316] WARNING: Pool 'MATURIN_STORAGE' has encountered an uncorrectable I/O failure and has been suspended.
Jul 16 00:17:47 maturin kernel: [ 8451.745316]
Jul 16 00:17:47 maturin kernel: [ 8451.747809] WARNING: Pool 'MATURIN_STORAGE' has encountered an uncorrectable I/O failure and has been suspended.
Jul 16 00:17:47 maturin kernel: [ 8451.747809]
Jul 16 00:17:47 maturin zed: eid=66 class=io pool='MATURIN_STORAGE' size=8192 offset=1378544832512 priority=0 err=6 flags=0x100880 bookmark=6535:1:0:1564534
Jul 16 00:17:47 maturin zed[51819]: Missed 1 events
Jul 16 00:17:47 maturin zed: eid=71 class=io pool='MATURIN_STORAGE' size=8192 offset=1378546184192 priority=0 err=6 flags=0x100880 bookmark=6535:1:0:1564588
Jul 16 00:17:48 maturin zed: eid=72 class=io pool='MATURIN_STORAGE' size=8192 offset=1378543235072 priority=0 err=6 flags=0x100880 bookmark=6535:1:0:1564467
Jul 16 00:17:48 maturin zed: eid=73 class=io pool='MATURIN_STORAGE' size=8192 offset=1378542620672 priority=0 err=6 flags=0x100880 bookmark=6535:1:0:1564443
Jul 16 00:17:48 maturin zed: eid=74 class=io pool='MATURIN_STORAGE' size=8192 offset=1378544463872 priority=0 err=6 flags=0x100880 bookmark=6535:1:0:1564518
Jul 16 00:17:48 maturin zed: eid=75 class=io pool='MATURIN_STORAGE' size=8192 offset=1378542989312 priority=0 err=6 flags=0x100880 bookmark=6535:1:0:1564458
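For what it's worth, the err= values in those zio lines are plain kernel errno codes; my notes on the two that show up, mapped with a tiny helper (the names come from errno.h, the interpretation is mine):

```shell
# Map the err= codes seen in the zio log lines to their errno names.
err_name() {
    case "$1" in
        5) echo "EIO" ;;    # generic I/O error from the device/controller path
        6) echo "ENXIO" ;;  # no such device or address -- vdev unreachable,
                            # seen here after the pool was already suspended
        *) echo "errno $1" ;;
    esac
}

err_name 5   # -> EIO
err_name 6   # -> ENXIO
```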
 
Bump. I added writeback cache, which seems to have made the server last longer before hanging, but it still hung eventually.
 
This has now occurred on all 5 VMs I've created on the server, even though 3 VMs are connected only to the 10-drive spinning ZFS2 array and 2 VMs only to the 3x NVMe ZFS1 array...

Attached are screenshots of the frozen VM consoles...

If I were to order a subscription, is this something the Proxmox team would help me solve?
 

Attachments

  • 2021_07_19_ws_vm.JPG (161.4 KB)
  • 2021_07_19_db_vm.JPG (179.9 KB)
  • 2021_07_19_arms_vm.JPG (165.7 KB)
  • 2021_07_19_litecoin_vm.JPG (207.1 KB)
  • 2021_07_19_bitcoin_vm.JPG (228.4 KB)
I replaced the SATA cables to verify they were not the problem...

Last set of errors on the host:

Code:
Jul 19 22:04:00 maturin systemd[1]: Finished Proxmox VE replication runner.
Jul 19 22:04:21 maturin kernel: general protection fault, probably for non-canonical address 0x18b01f64c661f70: 0000 [#1] SMP NOPTI
Jul 19 22:04:21 maturin kernel: CPU: 3 PID: 4353 Comm: z_ioctl_int Tainted: P           O      5.11.22-1-pve #1
Jul 19 22:04:21 maturin kernel: Hardware name: Gigabyte Technology Co., Ltd. X399 AORUS XTREME/X399 AORUS XTREME-CF, BIOS F5 12/11/2019
Jul 19 22:04:21 maturin kernel: RIP: 0010:__mutex_lock.constprop.0+0x6f/0x4a0
Jul 19 22:04:21 maturin kernel: Code: 40 01 00 00 48 39 c6 0f 84 b7 03 00 00 65 48 8b 04 25 c0 7b 01 00 48 8b 00 a8 08 75 1d 49 8b 07 48 83 e0 f8 0f 84 b0 01 00 00 <8b> 50 34 85 d2 0f 85 93 01 00 00 e8 f1 b7 50 ff 65 48 8b 04 25 c0
Jul 19 22:04:21 maturin kernel: RSP: 0018:ffffb2c57c20bc20 EFLAGS: 00010202
Jul 19 22:04:21 maturin kernel: RAX: 018b01f64c661f70 RBX: ffff8ff041d089c0 RCX: 0000000000000000
Jul 19 22:04:21 maturin kernel: RDX: 018b01f64c661f75 RSI: ffff8febba091840 RDI: ffff8ff3f98d3730
Jul 19 22:04:21 maturin kernel: RBP: ffffb2c57c20bc98 R08: 0000000000000001 R09: 0000000000000000
Jul 19 22:04:21 maturin kernel: R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000002
Jul 19 22:04:21 maturin kernel: R13: ffff8ff3f98d36e8 R14: ffff8ff3f98d3730 R15: ffff8ff3f98d3730
Jul 19 22:04:21 maturin kernel: FS:  0000000000000000(0000) GS:ffff8ffabf2c0000(0000) knlGS:0000000000000000
Jul 19 22:04:21 maturin kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jul 19 22:04:21 maturin kernel: CR2: 00007ff1d9480da8 CR3: 00000001bdd24000 CR4: 00000000003506e0
Jul 19 22:04:21 maturin kernel: Call Trace:
Jul 19 22:04:21 maturin kernel:  __mutex_lock_slowpath+0x13/0x20
Jul 19 22:04:21 maturin kernel:  mutex_lock+0x34/0x40
Jul 19 22:04:21 maturin kernel:  zil_lwb_flush_vdevs_done+0x1ad/0x2a0 [zfs]
Jul 19 22:04:21 maturin kernel:  zio_done+0x412/0x11b0 [zfs]
Jul 19 22:04:21 maturin kernel:  zio_execute+0x89/0x130 [zfs]
Jul 19 22:04:21 maturin kernel:  taskq_thread+0x2b2/0x4f0 [spl]
Jul 19 22:04:21 maturin kernel:  ? wake_up_q+0xa0/0xa0
Jul 19 22:04:21 maturin kernel:  ? zio_subblock+0x30/0x30 [zfs]
Jul 19 22:04:21 maturin kernel:  kthread+0x12f/0x150
Jul 19 22:04:21 maturin kernel:  ? taskq_thread_spawn+0x60/0x60 [spl]
Jul 19 22:04:21 maturin kernel:  ? __kthread_bind_mask+0x70/0x70
Jul 19 22:04:21 maturin kernel:  ret_from_fork+0x22/0x30
Jul 19 22:04:21 maturin kernel: Modules linked in: tcp_diag inet_diag veth md4 cmac nls_utf8 cifs fscache libdes ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter bpfilter bonding tls softdog nfnetlink_log nfnetlink amdgpu iommu_v2 gpu_sched snd_hda_codec_realtek intel_rapl_msr snd_hda_codec_generic intel_rapl_common ledtrig_audio snd_hda_codec_hdmi snd_hda_intel snd_intel_dspcfg soundwire_intel edac_mce_amd soundwire_generic_allocation soundwire_cadence snd_hda_codec kvm_amd radeon snd_hda_core snd_hwdep kvm iwlmvm soundwire_bus drm_ttm_helper irqbypass crct10dif_pclmul ttm ghash_clmulni_intel mac80211 aesni_intel snd_soc_core drm_kms_helper crypto_simd libarc4 snd_compress cryptd cec btusb ac97_bus glue_helper rc_core snd_pcm_dmaengine btrtl snd_pcm fb_sys_fops btbcm snd_timer syscopyarea btintel input_leds rapl iwlwifi snd sysfillrect wmi_bmof pcspkr efi_pstore mxm_wmi k10temp ccp sysimgblt soundcore bluetooth cfg80211 ecdh_generic ecc mac_hid vhost_net
Jul 19 22:04:21 maturin kernel:  vhost vhost_iotlb tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi drm sunrpc ip_tables x_tables autofs4 zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) btrfs blake2b_generic xor raid6_pq libcrc32c usbkbd hid_generic usbmouse usbhid hid xhci_pci crc32_pclmul igb xhci_pci_renesas atlantic i2c_algo_bit gpio_amdpt i2c_piix4 ahci macsec dca xhci_hcd libahci wmi gpio_generic
Jul 19 22:04:21 maturin kernel: ---[ end trace 1a90537c325894e4 ]---
Jul 19 22:04:21 maturin kernel: RIP: 0010:__mutex_lock.constprop.0+0x6f/0x4a0
Jul 19 22:04:21 maturin kernel: Code: 40 01 00 00 48 39 c6 0f 84 b7 03 00 00 65 48 8b 04 25 c0 7b 01 00 48 8b 00 a8 08 75 1d 49 8b 07 48 83 e0 f8 0f 84 b0 01 00 00 <8b> 50 34 85 d2 0f 85 93 01 00 00 e8 f1 b7 50 ff 65 48 8b 04 25 c0
Jul 19 22:04:21 maturin kernel: RSP: 0018:ffffb2c57c20bc20 EFLAGS: 00010202
Jul 19 22:04:21 maturin kernel: RAX: 018b01f64c661f70 RBX: ffff8ff041d089c0 RCX: 0000000000000000
Jul 19 22:04:21 maturin kernel: RDX: 018b01f64c661f75 RSI: ffff8febba091840 RDI: ffff8ff3f98d3730
Jul 19 22:04:21 maturin kernel: RBP: ffffb2c57c20bc98 R08: 0000000000000001 R09: 0000000000000000
Jul 19 22:04:21 maturin kernel: R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000002
Jul 19 22:04:21 maturin kernel: R13: ffff8ff3f98d36e8 R14: ffff8ff3f98d3730 R15: ffff8ff3f98d3730
Jul 19 22:04:21 maturin kernel: FS:  0000000000000000(0000) GS:ffff8ffabf2c0000(0000) knlGS:0000000000000000
Jul 19 22:04:21 maturin kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jul 19 22:04:21 maturin kernel: CR2: 00007ff1d9480da8 CR3: 00000001bdd24000 CR4: 00000000003506e0
Jul 19 22:04:46 maturin pvedaemon[85940]: <root@pam> end task UPID:maturin:002855DC:00028021:60F67443:vncproxy:100:root@pam: OK
Jul 19 22:04:46 maturin pvedaemon[3541602]: starting vnc proxy UPID:maturin:00360A62:000301AC:60F6758E:vncproxy:101:root@pam:
Jul 19 22:04:46 maturin pvedaemon[85941]: <root@pam> starting task UPID:maturin:00360A62:000301AC:60F6758E:vncproxy:101:root@pam:
Jul 19 22:04:48 maturin pvedaemon[85941]: <root@pam> end task UPID:maturin:00360A62:000301AC:60F6758E:vncproxy:101:root@pam: OK
Jul 19 22:04:48 maturin pvedaemon[3541605]: starting vnc proxy UPID:maturin:00360A65:00030275:60F67590:vncproxy:102:root@pam:
Jul 19 22:04:48 maturin pvedaemon[85941]: <root@pam> starting task UPID:maturin:00360A65:00030275:60F67590:vncproxy:102:root@pam:
Jul 19 22:04:50 maturin pvedaemon[85941]: <root@pam> end task UPID:maturin:00360A65:00030275:60F67590:vncproxy:102:root@pam: OK
Jul 19 22:04:50 maturin pvedaemon[3541691]: starting vnc proxy UPID:maturin:00360ABB:0003030B:60F67592:vncproxy:101:root@pam:
Jul 19 22:04:50 maturin pvedaemon[85941]: <root@pam> starting task UPID:maturin:00360ABB:0003030B:60F67592:vncproxy:101:root@pam:
Jul 19 22:04:53 maturin pvedaemon[85941]: <root@pam> end task UPID:maturin:00360ABB:0003030B:60F67592:vncproxy:101:root@pam: OK
Jul 19 22:04:53 maturin pvedaemon[3541718]: starting vnc proxy UPID:maturin:00360AD6:0003045F:60F67595:vncproxy:100:root@pam:
Jul 19 22:04:53 maturin pvedaemon[85941]: <root@pam> starting task UPID:maturin:00360AD6:0003045F:60F67595:vncproxy:100:root@pam:
Jul 19 22:04:56 maturin pvedaemon[85941]: <root@pam> end task UPID:maturin:00360AD6:0003045F:60F67595:vncproxy:100:root@pam: OK
Jul 19 22:05:00 maturin systemd[1]: Starting Proxmox VE replication runner...
Jul 19 22:05:00 maturin systemd[1]: pvesr.service: Succeeded.
Jul 19 22:05:00 maturin systemd[1]: Finished Proxmox VE replication runner.
Jul 19 22:05:16 maturin pvestatd[85917]: VM 106 qmp command failed - VM 106 qmp command 'query-proxmox-support' failed - got timeout
Jul 19 22:05:16 maturin pvestatd[85917]: status update time (6.220 seconds)
Jul 19 22:05:27 maturin pvestatd[85917]: VM 106 qmp command failed - VM 106 qmp command 'query-proxmox-support' failed - unable to connect to VM 106 qmp socket - timeout after 31 retries
Jul 19 22:05:30 maturin pvestatd[85917]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - got timeout
Jul 19 22:05:33 maturin pvestatd[85917]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - got timeout
Jul 19 22:05:33 maturin pvestatd[85917]: status update time (12.254 seconds)
Jul 19 22:05:42 maturin pvestatd[85917]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - unable to connect to VM 100 qmp socket - timeout after 31 retries
Jul 19 22:05:45 maturin pvestatd[85917]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - got timeout
Jul 19 22:05:48 maturin pvestatd[85917]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Jul 19 22:05:51 maturin pvestatd[85917]: VM 108 qmp command failed - VM 108 qmp command 'query-proxmox-support' failed - got timeout
Jul 19 22:05:54 maturin pvestatd[85917]: VM 106 qmp command failed - VM 106 qmp command 'query-proxmox-support' failed - unable to connect to VM 106 qmp socket - timeout after 31 retries
Jul 19 22:05:54 maturin pvestatd[85917]: status update time (21.258 seconds)
Jul 19 22:06:00 maturin systemd[1]: Starting Proxmox VE replication runner...
Jul 19 22:06:00 maturin systemd[1]: pvesr.service: Succeeded.
Jul 19 22:06:00 maturin systemd[1]: Finished Proxmox VE replication runner.
Jul 19 22:06:01 maturin cron[85899]: (*system*vzdump) RELOAD (/etc/cron.d/vzdump)
Jul 19 22:06:09 maturin pvestatd[85917]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - unable to connect to VM 100 qmp socket - timeout after 31 retries
Jul 19 22:06:12 maturin pvestatd[85917]: VM 108 qmp command failed - VM 108 qmp command 'query-proxmox-support' failed - unable to connect to VM 108 qmp socket - timeout after 31 retries
Jul 19 22:06:13 maturin pveproxy[85951]: worker exit
Jul 19 22:06:13 maturin pveproxy[85950]: worker 85951 finished
Jul 19 22:06:13 maturin pveproxy[85950]: starting 1 worker(s)
Jul 19 22:06:13 maturin pveproxy[85950]: worker 3542681 started
Jul 19 22:06:15 maturin pvestatd[85917]: VM 106 qmp command failed - VM 106 qmp command 'query-proxmox-support' failed - unable to connect to VM 106 qmp socket - timeout after 31 retries
Jul 19 22:06:17 maturin pvedaemon[3542720]: starting vnc proxy UPID:maturin:00360EC0:00032514:60F675E9:vncproxy:100:root@pam:
Jul 19 22:06:17 maturin pvedaemon[85941]: <root@pam> starting task UPID:maturin:00360EC0:00032514:60F675E9:vncproxy:100:root@pam:
Jul 19 22:06:18 maturin pvestatd[85917]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Jul 19 22:06:21 maturin qm[3542722]: VM 100 qmp command failed - VM 100 qmp command 'set_password' failed - unable to connect to VM 100 qmp socket - timeout after 31 retries
Jul 19 22:06:21 maturin pvedaemon[3542720]: Failed to run vncproxy.
Jul 19 22:06:21 maturin pvedaemon[85941]: <root@pam> end task UPID:maturin:00360EC0:00032514:60F675E9:vncproxy:100:root@pam: Failed to run vncproxy.
Jul 19 22:06:21 maturin pvestatd[85917]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Jul 19 22:06:21 maturin pvestatd[85917]: status update time (27.284 seconds)
Jul 19 22:06:23 maturin pvedaemon[85940]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - unable to connect to VM 100 qmp socket - timeout after 31 retries
Jul 19 22:06:39 maturin pvestatd[85917]: VM 108 qmp command failed - VM 108 qmp command 'query-proxmox-support' failed - unable to connect to VM 108 qmp socket - timeout after 31 retries
Jul 19 22:06:42 maturin pvestatd[85917]: VM 106 qmp command failed - VM 106 qmp command 'query-proxmox-support' failed - unable to connect to VM 106 qmp socket - timeout after 31 retries
Jul 19 22:06:42 maturin pvedaemon[85940]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - unable to connect to VM 100 qmp socket - timeout after 31 retries
Jul 19 22:06:45 maturin pvestatd[85917]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Jul 19 22:06:48 maturin pvestatd[85917]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Jul 19 22:06:51 maturin pvestatd[85917]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - unable to connect to VM 100 qmp socket - timeout after 31 retries
Jul 19 22:06:52 maturin pvestatd[85917]: status update time (30.291 seconds)
Jul 19 22:07:00 maturin systemd[1]: Starting Proxmox VE replication runner...
Jul 19 22:07:00 maturin systemd[1]: pvesr.service: Succeeded.
Jul 19 22:07:00 maturin systemd[1]: Finished Proxmox VE replication runner.
Jul 19 22:07:02 maturin pvedaemon[85942]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - unable to connect to VM 100 qmp socket - timeout after 31 retries
 
