Read-only filesystem at the end of backup restore

Kevin Smith

Active Member
Hello,
When I try to restore a backup from a mounted QNAP share, I get the following error at the end of the operation:
"Read-only filesystem" (please take a look at the attached photo).
The whole Proxmox server then becomes unavailable.

It's a fresh installation and I don't have any VMs yet. I've tried reinstalling Proxmox, but it doesn't help.
The problem occurs every time, on 7.3-3 and also on 7.2.

The restore task ends with an unexpected status.

The backup is more than 120 GB. I've checked the free space on the target storage and it looks OK.

Below you can find the restore output.
Bash:
restore vma archive: lzop -d -c /mnt/pve/backupqnap7/dump/vzdump-qemu-104-2022_12_09-23_09_32.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp327116.fifo - /var/tmp/vzdumptmp327116
CFG: size: 412 name: qemu-server.conf
DEV: dev_id=1 size: 1073741824000 devname: drive-ide0
CTIME: Fri Dec  9 23:09:34 2022
  Wiping dos signature on /dev/dysk_talerz/vm-104-disk-0.
  Wiping atari signature on /dev/dysk_talerz/vm-104-disk-0.
  Logical volume "vm-104-disk-0" created.
new volume ID is 'dysk_talerz:vm-104-disk-0'
map 'drive-ide0' to '/dev/dysk_talerz/vm-104-disk-0' (write zeros = 1)
progress 1% (read 10737418240 bytes, duration 53 sec)
progress 2% (read 21474836480 bytes, duration 114 sec)
progress 3% (read 32212254720 bytes, duration 157 sec)
progress 4% (read 42949672960 bytes, duration 211 sec)
progress 5% (read 53687091200 bytes, duration 246 sec)
progress 6% (read 64424509440 bytes, duration 273 sec)
progress 7% (read 75161927680 bytes, duration 303 sec)
progress 8% (read 85899345920 bytes, duration 339 sec)
progress 9% (read 96636764160 bytes, duration 371 sec)
progress 10% (read 107374182400 bytes, duration 415 sec)
progress 11% (read 118111600640 bytes, duration 449 sec)
progress 12% (read 128849018880 bytes, duration 472 sec)
progress 13% (read 139586437120 bytes, duration 493 sec)
progress 14% (read 150323855360 bytes, duration 514 sec)
progress 15% (read 161061273600 bytes, duration 534 sec)
progress 16% (read 171798691840 bytes, duration 580 sec)
progress 17% (read 182536110080 bytes, duration 650 sec)
progress 18% (read 193273528320 bytes, duration 681 sec)
progress 19% (read 204010946560 bytes, duration 703 sec)
progress 20% (read 214748364800 bytes, duration 724 sec)
progress 21% (read 225485783040 bytes, duration 746 sec)
progress 22% (read 236223201280 bytes, duration 782 sec)
progress 23% (read 246960619520 bytes, duration 854 sec)
progress 24% (read 257698037760 bytes, duration 944 sec)
progress 25% (read 268435456000 bytes, duration 1034 sec)
progress 26% (read 279172874240 bytes, duration 1121 sec)
progress 27% (read 289910292480 bytes, duration 1214 sec)
progress 28% (read 300647710720 bytes, duration 1308 sec)
progress 29% (read 311385128960 bytes, duration 1402 sec)
progress 30% (read 322122547200 bytes, duration 1496 sec)
progress 31% (read 332859965440 bytes, duration 1595 sec)
progress 32% (read 343597383680 bytes, duration 1694 sec)
progress 33% (read 354334801920 bytes, duration 1792 sec)
progress 34% (read 365072220160 bytes, duration 1884 sec)
progress 35% (read 375809638400 bytes, duration 1978 sec)
progress 36% (read 386547056640 bytes, duration 2072 sec)
progress 37% (read 397284474880 bytes, duration 2168 sec)
progress 38% (read 408021893120 bytes, duration 2267 sec)
progress 39% (read 418759311360 bytes, duration 2357 sec)
progress 40% (read 429496729600 bytes, duration 2447 sec)
progress 41% (read 440234147840 bytes, duration 2539 sec)
progress 42% (read 450971566080 bytes, duration 2634 sec)
progress 43% (read 461708984320 bytes, duration 2730 sec)
progress 44% (read 472446402560 bytes, duration 2831 sec)
progress 45% (read 483183820800 bytes, duration 2928 sec)
progress 46% (read 493921239040 bytes, duration 3029 sec)
progress 47% (read 504658657280 bytes, duration 3128 sec)
progress 48% (read 515396075520 bytes, duration 3225 sec)
progress 49% (read 526133493760 bytes, duration 3320 sec)
progress 50% (read 536870912000 bytes, duration 3417 sec)
progress 51% (read 547608330240 bytes, duration 3511 sec)
progress 52% (read 558345748480 bytes, duration 3610 sec)
progress 53% (read 569083166720 bytes, duration 3707 sec)
progress 54% (read 579820584960 bytes, duration 3801 sec)
progress 55% (read 590558003200 bytes, duration 3894 sec)
progress 56% (read 601295421440 bytes, duration 3990 sec)
progress 57% (read 612032839680 bytes, duration 4081 sec)
progress 58% (read 622770257920 bytes, duration 4171 sec)
progress 59% (read 633507676160 bytes, duration 4263 sec)
progress 60% (read 644245094400 bytes, duration 4358 sec)
progress 61% (read 654982512640 bytes, duration 4450 sec)
progress 62% (read 665719930880 bytes, duration 4546 sec)
progress 63% (read 676457349120 bytes, duration 4637 sec)
progress 64% (read 687194767360 bytes, duration 4731 sec)
progress 65% (read 697932185600 bytes, duration 4828 sec)
progress 66% (read 708669603840 bytes, duration 4924 sec)
progress 67% (read 719407022080 bytes, duration 5018 sec)
progress 68% (read 730144440320 bytes, duration 5108 sec)
progress 69% (read 740881858560 bytes, duration 5201 sec)
progress 70% (read 751619276800 bytes, duration 5301 sec)
progress 71% (read 762356695040 bytes, duration 5398 sec)
progress 72% (read 773094113280 bytes, duration 5488 sec)
progress 73% (read 783831531520 bytes, duration 5582 sec)
progress 74% (read 794568949760 bytes, duration 5679 sec)
progress 75% (read 805306368000 bytes, duration 5775 sec)
progress 76% (read 816043786240 bytes, duration 5872 sec)
progress 77% (read 826781204480 bytes, duration 5971 sec)
progress 78% (read 837518622720 bytes, duration 6069 sec)
progress 79% (read 848256040960 bytes, duration 6165 sec)
progress 80% (read 858993459200 bytes, duration 6262 sec)
progress 81% (read 869730877440 bytes, duration 6358 sec)
progress 82% (read 880468295680 bytes, duration 6451 sec)
progress 83% (read 891205713920 bytes, duration 6548 sec)
progress 84% (read 901943132160 bytes, duration 6647 sec)
progress 85% (read 912680550400 bytes, duration 6744 sec)
progress 86% (read 923417968640 bytes, duration 6843 sec)
progress 87% (read 934155386880 bytes, duration 6936 sec)
progress 88% (read 944892805120 bytes, duration 7025 sec)
progress 89% (read 955630223360 bytes, duration 7116 sec)
progress 90% (read 966367641600 bytes, duration 7210 sec)
progress 91% (read 977105059840 bytes, duration 7306 sec)
progress 92% (read 987842478080 bytes, duration 7396 sec)
progress 93% (read 998579896320 bytes, duration 7486 sec)
progress 94% (read 1009317314560 bytes, duration 7576 sec)
progress 95% (read 1020054732800 bytes, duration 7677 sec)
progress 96% (read 1030792151040 bytes, duration 7774 sec)
progress 97% (read 1041529569280 bytes, duration 7870 sec)
progress 98% (read 1052266987520 bytes, duration 7965 sec)
progress 99% (read 1063004405760 bytes, duration 8059 sec)
progress 100% (read 1073741824000 bytes, duration 8154 sec)
 

Attachments

  • Resized_20221212_085036.JPEG (284.4 KB)
This looks like the target storage is full.
Can you provide the output of `df -h` on the host?
And please check the syslog for I/O errors.
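For reference, a quick way to pull storage-related errors out of the kernel log (the grep pattern below is just a sketch, adjust it to your setup):
Bash:
# kernel messages of the current boot, filtered for typical storage errors
journalctl -k -b | grep -iE 'i/o error|blk_update_request|remount|read-only'
# fallback if the journal cannot be read because / went read-only
dmesg | grep -iE 'i/o error|remount|read-only'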
 
Unfortunately, when this error occurs Proxmox hangs and I have to restart the whole server.

I've tried restoring the backup to a different storage and it worked. Then I tried to move the disk to the storage I had previously tried to restore to, and I got this error:
Code:
create full clone of drive ide0 (local-lvm:vm-100-disk-0)
  Logical volume "vm-100-disk-0" created.
transferred 0.0 B of 1000.0 GiB (0.00%)
transferred 10.0 GiB of 1000.0 GiB (1.00%)
transferred 20.0 GiB of 1000.0 GiB (2.00%)
transferred 30.0 GiB of 1000.0 GiB (3.00%)
transferred 40.0 GiB of 1000.0 GiB (4.00%)
transferred 50.0 GiB of 1000.0 GiB (5.00%)
transferred 60.0 GiB of 1000.0 GiB (6.00%)
qemu-img: error while writing at byte 72024571392: Input/output error
qemu-img: error while reading at byte 72039251456: Input/output error
  Volume group "dysk_talerz" not found
can't activate LV 'dysk_talerz/vm-100-disk-0' to zero-out its data:   Cannot process volume group dysk_talerz
TASK ERROR: storage migration failed: copy failed: command '/usr/bin/qemu-img convert -p -n -f raw -O raw /dev/pve/vm-100-disk-0 /dev/dysk_talerz/vm-100-disk-0' failed: exit code 1

After that the system is in a read-only state and I have to restart Proxmox.

The storage isn't full. The server is brand new. What else can I check?

I will send the "df -h" output and check the syslog for I/O errors later on, and will update this post.
 
Check your disks. There could also be an issue with the disks themselves, since even the VG somehow disappeared in the middle of the copy.
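A minimal sketch of what could be checked, assuming the disks sit behind the MegaRAID controller seen in the boot log (device names are placeholders):
Bash:
# SMART health of a physical disk; disks behind a RAID controller may need e.g. -d megaraid,0
smartctl -a /dev/sda
# LVM view: are the PV, the VG "dysk_talerz" and its LVs still visible?
pvs
vgs
lvs -a dysk_talerz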
 
Please take a look at the information below. I've also attached detailed logs.


Code:
root@server_6:~# df -h
Filesystem                      Size  Used Avail Use% Mounted on
udev                            252G     0  252G   0% /dev
tmpfs                            51G   16M   51G   1% /run
/dev/mapper/pve-root             94G  2.7G   87G   3% /
tmpfs                           252G   46M  252G   1% /dev/shm
tmpfs                           5.0M     0  5.0M   0% /run/lock
/dev/sda2                       511M  336K  511M   1% /boot/efi
/dev/fuse                       128M   16K  128M   1% /etc/pve
//192.168.200.7/proxmox_backup  4.0T  426G  3.6T  11% /mnt/pve/backupqnap7
tmpfs                            51G     0   51G   0% /run/user/0


Below you can find information that worries me a little:
Code:
[    4.710488] megaraid_sas 0000:43:00.0: FW provided supportMaxExtLDs: 0    max_lds: 32
[    4.710495] megaraid_sas 0000:43:00.0: controller type    : iMR(0MB)
[    4.710498] megaraid_sas 0000:43:00.0: Online Controller Reset(OCR)    : Enabled
[    4.710501] megaraid_sas 0000:43:00.0: Secure JBOD support    : Yes
[    4.710503] megaraid_sas 0000:43:00.0: NVMe passthru support    : Yes
[    4.710505] megaraid_sas 0000:43:00.0: FW provided TM TaskAbort/Reset timeout    : 6 secs/60 secs
[    4.710507] megaraid_sas 0000:43:00.0: JBOD sequence map support    : Yes
[    4.710509] megaraid_sas 0000:43:00.0: PCI Lane Margining support    : No
[    4.738542] ================================================================================
[    4.738550] fbcon: Taking over console
[    4.738554] UBSAN: array-index-out-of-bounds in drivers/scsi/megaraid/megaraid_sas_fp.c:103:32
[    4.738561] index 1 is out of range for type 'MR_LD_SPAN_MAP [1]'
[    4.738566] CPU: 0 PID: 402 Comm: kworker/0:2 Not tainted 5.15.74-1-pve #1
[    4.738571] Hardware name: Epsylon Super Server/H11DSi-NT, BIOS 2.4 12/28/2021
[    4.738573] Workqueue: events work_for_cpu_fn
[    4.738583] Call Trace:
[    4.738586]  <TASK>
[    4.738589]  dump_stack_lvl+0x4a/0x63
[    4.738596]  dump_stack+0x10/0x16
[    4.738599]  ubsan_epilogue+0x9/0x49
[    4.738603]  __ubsan_handle_out_of_bounds.cold+0x44/0x49
[    4.738606]  ? del_timer_sync+0x6c/0xb0
[    4.738613]  mr_update_load_balance_params+0xbe/0xd0 [megaraid_sas]
[    4.738630]  MR_ValidateMapInfo+0x1f0/0xe50 [megaraid_sas]
[    4.738640]  ? __bpf_trace_tick_stop+0x20/0x20
[    4.738645]  ? wait_and_poll+0x5c/0xc0 [megaraid_sas]
[    4.738656]  ? megasas_issue_polled+0x5d/0x70 [megaraid_sas]
[    4.738668]  megasas_init_adapter_fusion+0xb11/0xc90 [megaraid_sas]
[    4.738679]  megasas_probe_one.cold+0xbfd/0x195d [megaraid_sas]
[    4.738692]  ? finish_task_switch.isra.0+0x7e/0x2b0
[    4.738699]  local_pci_probe+0x4b/0x90
[    4.738706]  work_for_cpu_fn+0x1a/0x30
[    4.738710]  process_one_work+0x22b/0x3d0
[    4.738714]  worker_thread+0x223/0x420
[    4.738717]  ? process_one_work+0x3d0/0x3d0
[    4.738720]  kthread+0x12a/0x150
[    4.738724]  ? set_kthread_struct+0x50/0x50
[    4.738728]  ret_from_fork+0x22/0x30
[    4.738736]  </TASK>
[    4.738737] ================================================================================
[    4.738745] ================================================================================
[    4.738748] UBSAN: array-index-out-of-bounds in drivers/scsi/megaraid/megaraid_sas_fp.c:103:32
[    4.738753] index 1 is out of range for type 'MR_LD_SPAN_MAP [1]'
[    4.738756] CPU: 0 PID: 402 Comm: kworker/0:2 Not tainted 5.15.74-1-pve #1
[    4.738759] Hardware name: Epsylon Super Server/H11DSi-NT, BIOS 2.4 12/28/2021
[    4.738761] Workqueue: events work_for_cpu_fn
[    4.738765] Call Trace:
[    4.738766]  <TASK>
[    4.738767]  dump_stack_lvl+0x4a/0x63
[    4.738771]  dump_stack+0x10/0x16
[    4.738774]  ubsan_epilogue+0x9/0x49
[    4.738777]  __ubsan_handle_out_of_bounds.cold+0x44/0x49
[    4.738781]  ? mr_update_load_balance_params+0xbe/0xd0 [megaraid_sas]
[    4.738792]  MR_ValidateMapInfo+0xd7c/0xe50 [megaraid_sas]
[    4.738801]  ? __bpf_trace_tick_stop+0x20/0x20
[    4.738806]  ? wait_and_poll+0x5c/0xc0 [megaraid_sas]
[    4.738815]  ? megasas_issue_polled+0x5d/0x70 [megaraid_sas]
[    4.738826]  megasas_init_adapter_fusion+0xb11/0xc90 [megaraid_sas]
[    4.738836]  megasas_probe_one.cold+0xbfd/0x195d [megaraid_sas]
[    4.738846]  ? finish_task_switch.isra.0+0x7e/0x2b0
[    4.738851]  local_pci_probe+0x4b/0x90
[    4.738854]  work_for_cpu_fn+0x1a/0x30
[    4.738858]  process_one_work+0x22b/0x3d0
[    4.738862]  worker_thread+0x223/0x420
[    4.738864]  ? process_one_work+0x3d0/0x3d0
[    4.738867]  kthread+0x12a/0x150
[    4.738871]  ? set_kthread_struct+0x50/0x50
[    4.738875]  ret_from_fork+0x22/0x30
[    4.738880]  </TASK>
[    4.738881] ================================================================================
[    4.738890] ================================================================================
[    4.738894] UBSAN: array-index-out-of-bounds in drivers/scsi/megaraid/megaraid_sas_fp.c:103:32
[    4.738898] index 1 is out of range for type 'MR_LD_SPAN_MAP [1]'
[    4.738901] CPU: 0 PID: 402 Comm: kworker/0:2 Not tainted 5.15.74-1-pve #1
[    4.738903] Hardware name: Epsylon Super Server/H11DSi-NT, BIOS 2.4 12/28/2021
[    4.738904] Workqueue: events work_for_cpu_fn
[    4.738908] Call Trace:
[    4.738909]  <TASK>
[    4.738910]  dump_stack_lvl+0x4a/0x63
[    4.738914]  dump_stack+0x10/0x16
[    4.738917]  ubsan_epilogue+0x9/0x49
[    4.738920]  __ubsan_handle_out_of_bounds.cold+0x44/0x49
[    4.738924]  MR_LdRaidGet+0x3d/0x40 [megaraid_sas]
[    4.738934]  megasas_sync_map_info+0xd7/0x1a0 [megaraid_sas]
[    4.738945]  megasas_init_adapter_fusion+0xb24/0xc90 [megaraid_sas]
[    4.738955]  megasas_probe_one.cold+0xbfd/0x195d [megaraid_sas]
[    4.738965]  ? finish_task_switch.isra.0+0x7e/0x2b0
[    4.738970]  local_pci_probe+0x4b/0x90
[    4.738973]  work_for_cpu_fn+0x1a/0x30
[    4.738977]  process_one_work+0x22b/0x3d0
[    4.738980]  worker_thread+0x223/0x420
[    4.738983]  ? process_one_work+0x3d0/0x3d0
[    4.738985]  kthread+0x12a/0x150
[    4.738989]  ? set_kthread_struct+0x50/0x50
[    4.738993]  ret_from_fork+0x22/0x30
[    4.738999]  </TASK>
[    4.738999] ================================================================================
[    4.739003] ================================================================================
[    4.739006] UBSAN: array-index-out-of-bounds in drivers/scsi/megaraid/megaraid_sas_fp.c:140:9
[    4.739011] index 1 is out of range for type 'MR_LD_SPAN_MAP [1]'
[    4.739014] CPU: 0 PID: 402 Comm: kworker/0:2 Not tainted 5.15.74-1-pve #1
[    4.739016] Hardware name: Epsylon Super Server/H11DSi-NT, BIOS 2.4 12/28/2021
[    4.739018] Workqueue: events work_for_cpu_fn
[    4.739021] Call Trace:
[    4.739022]  <TASK>
[    4.739023]  dump_stack_lvl+0x4a/0x63
[    4.739027]  dump_stack+0x10/0x16
[    4.739030]  ubsan_epilogue+0x9/0x49
[    4.739033]  __ubsan_handle_out_of_bounds.cold+0x44/0x49
[    4.739037]  MR_GetLDTgtId+0x3e/0x40 [megaraid_sas]
[    4.739047]  megasas_sync_map_info+0xe5/0x1a0 [megaraid_sas]
[    4.739057]  megasas_init_adapter_fusion+0xb24/0xc90 [megaraid_sas]
[    4.739066]  megasas_probe_one.cold+0xbfd/0x195d [megaraid_sas]
[    4.739073]  ? finish_task_switch.isra.0+0x7e/0x2b0
[    4.739077]  local_pci_probe+0x4b/0x90
[    4.739079]  work_for_cpu_fn+0x1a/0x30
[    4.739082]  process_one_work+0x22b/0x3d0
[    4.739083]  worker_thread+0x223/0x420
[    4.739085]  ? process_one_work+0x3d0/0x3d0
[    4.739086]  kthread+0x12a/0x150
[    4.739089]  ? set_kthread_struct+0x50/0x50
[    4.739091]  ret_from_fork+0x22/0x30
[    4.739094]  </TASK>
[    4.739094] ================================================================================
[    4.739099] megaraid_sas 0000:43:00.0: NVME page size    : (4096)
[    4.739982] megaraid_sas 0000:43:00.0: megasas_enable_intr_fusion is called outbound_intr_mask:0x40000000
[    4.739984] megaraid_sas 0000:43:00.0: INIT adapter done
[    4.763139] ixgbe 0000:61:00.0: Multiqueue Enabled: Rx Queue count = 63, Tx Queue count = 63 XDP Queue count = 0
[    4.794258] megaraid_sas 0000:43:00.0: Snap dump wait time    : 25
[    4.794260] megaraid_sas 0000:43:00.0: pci id        : (0x1000)/(0x0017)/(0x1000)/(0x9440)
[    4.794262] megaraid_sas 0000:43:00.0: unevenspan support    : no
[    4.794263] megaraid_sas 0000:43:00.0: firmware crash dump    : no
[    4.794264] megaraid_sas 0000:43:00.0: JBOD sequence map    : enabled
[    4.794347] megaraid_sas 0000:43:00.0: Max firmware commands: 1516 shared with default hw_queues = 64 poll_queues 0

Code:
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-8
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.2-12
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.7-1
proxmox-backup-file-restore: 2.2.7-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.5-6
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-1
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1
 

Attachments

  • journalctl.log (991.7 KB)
