Backup job failed with err -5 Input/output error. Is my SSD failing?

Iacov (Member, joined Jan 24, 2024)
Hey,

I use a Minisforum N100 mini PC with a WD Red 500 GB NAS SSD.

Every once in a while I get a failed backup job for my Pi-hole VM (only on this node).

The log states:
Code:
Nov 08 04:38:00 pve2 systemd[1]: 210.scope: Deactivated successfully.
Nov 08 04:38:00 pve2 systemd[1]: 210.scope: Consumed 1h 14min 50.297s CPU time.
Nov 08 04:38:00 pve2 systemd[1]: Started 210.scope.
Nov 08 04:38:01 pve2 qmeventd[2188885]: Starting cleanup for 210
Nov 08 04:38:01 pve2 qmeventd[2188885]: trying to acquire lock...
Nov 08 04:38:01 pve2 kernel: tap210i0: entered promiscuous mode
Nov 08 04:38:01 pve2 kernel: vmbr0: port 3(fwpr210p0) entered blocking state
Nov 08 04:38:01 pve2 kernel: vmbr0: port 3(fwpr210p0) entered disabled state
Nov 08 04:38:01 pve2 kernel: fwpr210p0: entered allmulticast mode
Nov 08 04:38:01 pve2 kernel: fwpr210p0: entered promiscuous mode
Nov 08 04:38:01 pve2 kernel: vmbr0: port 3(fwpr210p0) entered blocking state
Nov 08 04:38:01 pve2 kernel: vmbr0: port 3(fwpr210p0) entered forwarding state
Nov 08 04:38:01 pve2 kernel: fwbr210i0: port 1(fwln210i0) entered blocking state
Nov 08 04:38:01 pve2 kernel: fwbr210i0: port 1(fwln210i0) entered disabled state
Nov 08 04:38:01 pve2 kernel: fwln210i0: entered allmulticast mode
Nov 08 04:38:01 pve2 kernel: fwln210i0: entered promiscuous mode
Nov 08 04:38:01 pve2 kernel: fwbr210i0: port 1(fwln210i0) entered blocking state
Nov 08 04:38:01 pve2 kernel: fwbr210i0: port 1(fwln210i0) entered forwarding state
Nov 08 04:38:01 pve2 kernel: fwbr210i0: port 2(tap210i0) entered blocking state
Nov 08 04:38:01 pve2 kernel: fwbr210i0: port 2(tap210i0) entered disabled state
Nov 08 04:38:01 pve2 kernel: tap210i0: entered allmulticast mode
Nov 08 04:38:01 pve2 kernel: fwbr210i0: port 2(tap210i0) entered blocking state
Nov 08 04:38:01 pve2 kernel: fwbr210i0: port 2(tap210i0) entered forwarding state
Nov 08 04:38:01 pve2 qmeventd[2188885]:  OK
Nov 08 04:38:01 pve2 qmeventd[2188885]: vm still running
Nov 08 04:38:07 pve2 kernel: kvm: kvm [2188895]: ignored rdmsr: 0xc0011029 data 0x0
Nov 08 04:38:08 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 9666
Nov 08 04:38:08 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 9666
Nov 08 04:38:08 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Nov 08 04:38:08 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 9666
Nov 08 04:38:08 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 9666
Nov 08 04:38:08 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 9666
Nov 08 04:38:08 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 9666
Nov 08 04:38:08 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 9666
Nov 08 04:38:08 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 9666
Nov 08 04:38:08 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 9666
Nov 08 04:38:08 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 9666
Nov 08 04:38:08 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 9666
Nov 08 04:38:08 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 9666
Nov 08 04:38:08 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 9666
Nov 08 04:38:08 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 9666
Nov 08 04:38:09 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 9639
Nov 08 04:38:09 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 9639
Nov 08 04:38:09 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Nov 08 04:38:09 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 9639
Nov 08 04:38:09 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 9639
Nov 08 04:38:09 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Nov 08 04:38:09 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 9639
Nov 08 04:38:09 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 9639
Nov 08 04:38:09 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Nov 08 04:38:09 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Nov 08 04:38:09 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Nov 08 04:38:09 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Nov 08 04:38:09 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Nov 08 04:38:09 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Nov 08 04:38:09 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Nov 08 04:38:13 pve2 pvescheduler[2187019]: ERROR: Backup of VM 210 failed - job failed with err -5 - Input/output error
Nov 08 04:38:13 pve2 pvescheduler[2187019]: INFO: Backup job finished with errors

I don't understand the log.
Is there an issue with connectivity, or is my SSD failing?
The failed blocks would make me lean towards a failing SSD - but I can't check, because Proxmox ships smartmontools 7.3, and I'd need 7.4 to run the self-tests on this SSD.
It could also be a firmware problem - but my other Proxmox node (a Minisforum Ryzen 5625U mini PC) runs without issues on the same type of SSD and, if SMART is to be trusted, the same firmware.

Does anyone have an idea what the log is trying to tell me?
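For what it's worth, the `device-mapper: btree` / `dm_thin_find_block()` lines seem to point at the LVM thin pool's metadata rather than plain read errors. As far as I understand, that metadata can be checked offline with `thin_check` from thin-provisioning-tools. A rough sketch of what I'd try (assuming the default `pve/data` thin pool - the VG/pool names, and whether component activation of `_tmeta` works on your LVM version, are assumptions on my part):

```shell
# Deactivate the thin pool first; thin_check must not run on an active pool
# (all VMs/CTs on the pool need to be stopped for this to succeed)
lvchange -an pve/data

# Activate the hidden metadata LV as a component and run the checker on it
lvchange -ay pve/data_tmeta
thin_check /dev/mapper/pve-data_tmeta

# Deactivate the metadata LV and bring the pool back up
lvchange -an pve/data_tmeta
lvchange -ay pve/data
```

If `thin_check` reports damage, `lvconvert --repair pve/data` is the documented repair path, but I'd take a backup of the metadata device first.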

Edit:
Here are the SMART stats:
Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0032   100   100   ---    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   ---    Old_age   Always       -       6337
 12 Power_Cycle_Count       0x0032   100   100   ---    Old_age   Always       -       21
165 Block_Erase_Count       0x0032   100   100   ---    Old_age   Always       -       12058733
166 Minimum_PE_Cycles_TLC   0x0032   100   100   ---    Old_age   Always       -       1
167 Max_Bad_Blocks_per_Die  0x0032   100   100   ---    Old_age   Always       -       36
168 Maximum_PE_Cycles_TLC   0x0032   100   100   ---    Old_age   Always       -       3
169 Total_Bad_Blocks        0x0032   100   100   ---    Old_age   Always       -       197
170 Grown_Bad_Blocks        0x0032   100   100   ---    Old_age   Always       -       0
171 Program_Fail_Count      0x0032   100   100   ---    Old_age   Always       -       0
172 Erase_Fail_Count        0x0032   100   100   ---    Old_age   Always       -       0
173 Average_PE_Cycles_TLC   0x0032   100   100   ---    Old_age   Always       -       2
174 Unexpected_Power_Loss   0x0032   100   100   ---    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   ---    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   ---    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   ---    Old_age   Always       -       0
194 Temperature_Celsius     0x0022   045   059   ---    Old_age   Always       -       55 (Min/Max 28/59)
199 UDMA_CRC_Error_Count    0x0032   100   100   ---    Old_age   Always       -       0
230 Media_Wearout_Indicator 0x0032   001   001   ---    Old_age   Always       -       0x001000140014
232 Available_Reservd_Space 0x0033   100   100   004    Pre-fail  Always       -       100
233 NAND_GB_Written_TLC     0x0032   100   100   ---    Old_age   Always       -       1104
234 NAND_GB_Written_SLC     0x0032   100   100   ---    Old_age   Always       -       1128
241 Host_Writes_GiB         0x0030   253   253   ---    Old_age   Offline      -       1117
242 Host_Reads_GiB          0x0030   253   253   ---    Old_age   Offline      -       2
244 Temp_Throttle_Status    0x0032   000   100   ---    Old_age   Always       -       0
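For reference, this is how I pulled the stats, and what the self-test I can't run would look like (a sketch - I'm assuming the SSD is `/dev/sda`, which may differ on your box):

```shell
# Dump all SMART info and attributes for the drive
smartctl -a /dev/sda

# Kick off an extended offline self-test
smartctl -t long /dev/sda

# Check self-test progress and results later
smartctl -l selftest /dev/sda
```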
