Hello,
I'm new to Proxmox and I'm looking for some help with an issue that I'm struggling to troubleshoot.
I have been running OMV on Proxmox on a mini PC for a few months without issues. I recently set up a new rsync job to better fit my use case, and now the system crashes every time I attempt to run it. The job starts and copies roughly 300-600 MB, then hangs at a seemingly random file, at which point the system powers off. This only seems to happen on large transfers (the new job needs to move ~700 GB). Older, pre-existing rsync jobs, even with minor changes, still execute without issues. Nothing in my setup has changed in the past two months other than installing updates.
My setup details:
- pve-manager/7.4x
- Kernel Linux 5.15.102-1
- My hard drives are all SSDs, and I'm passing them through from Proxmox with the "Use USB port" option.
- My VM has 1 CPU core and 2 GB of memory (a rough sketch of the VM config is below this list).
- My host has a 65 W power supply.
- Running OMV v6.3.6-2
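For completeness, the VM config looks roughly like this. This is a sketch from memory rather than an exact qm config dump, so the VMID, MAC address, storage name, and USB port numbers are placeholders:
Code:
# rough equivalent of "qm config 100" for the OMV VM (IDs and ports are placeholders)
boot: order=scsi0;ide2;net0
cores: 1
ide2: none,media=cdrom
memory: 2048
net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr0
ostype: l26
scsi0: local-lvm:vm-100-disk-0,size=30G
scsihw: virtio-scsi-pci
usb0: host=1-1,usb3=1
usb1: host=1-2,usb3=1
The usbN entries are the SSDs passed through by port as mentioned above.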
I'm not sure what else to try next. The only change, aside from system updates, is the creation of the new rsync job. The job itself does not have any abnormal configuration. The only difference that I can see is that the system crashes when there is a larger amount of data to transfer.
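For reference, the new job boils down to something like this (the paths here are placeholders, not my real share names):
Code:
# roughly what the new rsync job runs; source and destination paths are placeholders
rsync -av --delete /srv/dev-disk-by-uuid-SOURCE/data/ /srv/dev-disk-by-uuid-DEST/backup/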
Some of the final entries that I see in the OMV VM's syslog just before the system powers off, followed by the first entries after it boots back up:
Code:
3/30/2023, 10:49:56 AM kernel: [ 132.268729] sd 3:0:0:0: [sdb] tag#29 uas_eh_abort_handler 0 uas-tag 24 inflight: CMD IN
3/30/2023, 10:49:56 AM kernel: [ 132.298471] sd 3:0:0:0: [sdb] tag#20 uas_eh_abort_handler 0 uas-tag 23 inflight: CMD IN
3/30/2023, 10:49:56 AM kernel: [ 132.298477] sd 3:0:0:0: [sdb] tag#20 CDB: Read(10) 28 00 3f 5a 4c 48 00 04 00 00
3/30/2023, 10:49:56 AM kernel: [ 132.301237] sd 3:0:0:0: [sdb] tag#19 uas_eh_abort_handler 0 uas-tag 22 inflight: CMD IN
3/30/2023, 10:49:56 AM kernel: [ 132.301242] sd 3:0:0:0: [sdb] tag#19 CDB: Read(10) 28 00 3f 5a 48 48 00 04 00 00
3/30/2023, 10:49:56 AM kernel: [ 132.303837] sd 3:0:0:0: [sdb] tag#18 uas_eh_abort_handler 0 uas-tag 9 inflight: CMD IN
3/30/2023, 10:49:56 AM kernel: [ 132.303844] sd 3:0:0:0: [sdb] tag#18 CDB: Read(10) 28 00 3f 5a 43 88 00 04 00 00
3/30/2023, 10:50:22 AM monit[614]: 'omv' loadavg (1min) of 2.4 matches resource limit [loadavg (1min) > 2.0]
3/30/2023, 10:51:04 AM kernel: [ 1.799014] ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
3/30/2023, 10:51:04 AM kernel: [ 1.800216] scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
3/30/2023, 10:51:04 AM kernel: [ 1.847439] scsi host2: Virtio SCSI HBA
3/30/2023, 10:51:04 AM kernel: [ 1.848028] scsi 2:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
3/30/2023, 10:51:04 AM kernel: [ 1.868102] cdrom: Uniform CD-ROM driver Revision: 3.20
3/30/2023, 10:51:04 AM kernel: [ 1.871740] virtio_net virtio2 ens18: renamed from eth0
3/30/2023, 10:51:04 AM kernel: [ 1.886979] sr 1:0:0:0: Attached scsi CD-ROM sr0
3/30/2023, 10:51:04 AM kernel: [ 2.035915] usb 1-1: new full-speed USB device number 2 using uhci_hcd
3/30/2023, 10:51:04 AM kernel: [ 2.036622] sd 2:0:0:0: Power-on or device reset occurred
3/30/2023, 10:51:04 AM kernel: [ 2.037102] sd 2:0:0:0: [sda] 62914560 512-byte logical blocks: (32.2 GB/30.0 GiB)