Problems while copying files from RAID0 to RAID5

szafran
Aug 31, 2012
Hi,

I'm having problems copying files from RAID0 to RAID5 (both mdadm arrays, set up under Proxmox). I haven't tested yet whether the same happens in the other direction. Some info:
RAID0 - PVE + LVM VMs - 3x 500GB
RAID5 - LVMs for other DATA - 13x 2TB

Guest OS is Ubuntu 12.04.1.
The OS partition is an LVM volume on the RAID0 array; its ext4 filesystem was created by the installer.
The DATA LVM partitions (16TB and 6TB) are on the RAID5 array, with filesystems I created using:
Code:
mkfs -O 64bit,extent,has_journal,uninit_bg,sparse_super,dir_index,large_file,flex_bg -t ext4 -T huge -b 4096 -v -m 0 -E stride=128,stripe_width=1536 -L LABEL /dev/DEVICE
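For reference, the stride and stripe_width values above come from the usual formula; this sketch assumes the mdadm default chunk size of 512 KiB (worth double-checking with `mdadm --detail` on the real array):

```shell
# Sanity check of the -E stride/stripe_width values used in mkfs above.
# CHUNK_KB=512 is an assumption (the mdadm default chunk size).
CHUNK_KB=512        # mdadm chunk size in KiB (verify: mdadm --detail /dev/mdX)
BLOCK_KB=4          # ext4 block size in KiB (from -b 4096)
DATA_DISKS=12       # a 13-disk RAID5 has 12 data disks per stripe

STRIDE=$((CHUNK_KB / BLOCK_KB))        # filesystem blocks per chunk
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))  # blocks per full data stripe

echo "stride=$STRIDE stripe_width=$STRIPE_WIDTH"
```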

The thing is that a single copy from RAID0 to RAID5 can run all day long (e.g. using MC). But if I open a second terminal and start copying something else (or any background process starts writing), then all writes to the RAID5 hang, and from that moment on I'm unable to write anything to it. Sometimes rebooting the guest helps, but most of the time I have to reboot the physical machine before I can write to the RAID5 again (reading still works).
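In case it helps to pin down the trigger, something like two concurrent writers is enough; a minimal sketch (the target directory here is a placeholder, on the real machine it would be the RAID5 mountpoint):

```shell
# Two writers racing on the same filesystem. TARGET is a placeholder;
# on the real box it would be the RAID5 mount, e.g. /home/szafran/Magazyn.
TARGET=$(mktemp -d)
dd if=/dev/zero of="$TARGET/test1.bin" bs=1M count=64 2>/dev/null &
dd if=/dev/zero of="$TARGET/test2.bin" bs=1M count=64 2>/dev/null &
wait    # on the RAID5, the second writer is what never completes
ls -l "$TARGET"
```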

I'm mounting it using:
Code:
UUID=<uuid here> /home/szafran/Magazyn ext4 user,noatime,errors=continue 0 0

Does anyone have any idea what is going on, and/or how to fix it?
 
more info:
Code:
Sep 17 22:05:42 NAS kernel: [11880.996109] INFO: task mc:2061 blocked for more than 120 seconds.
Sep 17 22:05:42 NAS kernel: [11880.996113] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 17 22:05:42 NAS kernel: [11880.996114] mc              D ffffffff81806200     0  2061   1994 0x00000000
Sep 17 22:05:42 NAS kernel: [11880.996118]  ffff88025ef8b918 0000000000000086 ffff88025ef8b8f8 ffffffff81241c0b
Sep 17 22:05:42 NAS kernel: [11880.996121]  ffff88025ef8bfd8 ffff88025ef8bfd8 ffff88025ef8bfd8 0000000000013780
Sep 17 22:05:42 NAS kernel: [11880.996123]  ffff88028e559700 ffff880265fb2e00 ffff88025ef8b928 ffff88028a1f9800
Sep 17 22:05:42 NAS kernel: [11880.996125] Call Trace:
Sep 17 22:05:42 NAS kernel: [11880.996132]  [<ffffffff81241c0b>] ? __ext4_handle_dirty_metadata+0x8b/0x130
Sep 17 22:05:42 NAS kernel: [11880.996136]  [<ffffffff816579cf>] schedule+0x3f/0x60
Sep 17 22:05:42 NAS kernel: [11880.996139]  [<ffffffff8125dd0a>] start_this_handle.isra.9+0x2aa/0x3e0
Sep 17 22:05:42 NAS kernel: [11880.996142]  [<ffffffff8108aa50>] ? add_wait_queue+0x60/0x60
Sep 17 22:05:42 NAS kernel: [11880.996144]  [<ffffffff8125df0a>] jbd2__journal_start+0xca/0x110
Sep 17 22:05:42 NAS kernel: [11880.996146]  [<ffffffff8125df63>] jbd2_journal_start+0x13/0x20
Sep 17 22:05:42 NAS kernel: [11880.996148]  [<ffffffff812358cf>] ext4_journal_start_sb+0x7f/0x1d0
Sep 17 22:05:42 NAS kernel: [11880.996150]  [<ffffffff8121782f>] ? ext4_da_write_begin+0x7f/0x210
Sep 17 22:05:42 NAS kernel: [11880.996152]  [<ffffffff8121782f>] ext4_da_write_begin+0x7f/0x210
Sep 17 22:05:42 NAS kernel: [11880.996155]  [<ffffffff8118a930>] ? poll_freewait+0xe0/0xe0
Sep 17 22:05:42 NAS kernel: [11880.996158]  [<ffffffff813172ad>] ? copy_user_generic_string+0x2d/0x40
Sep 17 22:05:42 NAS kernel: [11880.996161]  [<ffffffff8111709a>] generic_perform_write+0xca/0x210
Sep 17 22:05:42 NAS kernel: [11880.996164]  [<ffffffff8111723d>] generic_file_buffered_write+0x5d/0x90
Sep 17 22:05:42 NAS kernel: [11880.996166]  [<ffffffff81118ce9>] __generic_file_aio_write+0x229/0x440
Sep 17 22:05:42 NAS kernel: [11880.996168]  [<ffffffff81118f72>] generic_file_aio_write+0x72/0xe0
Sep 17 22:05:42 NAS kernel: [11880.996170]  [<ffffffff81210c2f>] ext4_file_write+0xbf/0x260
Sep 17 22:05:42 NAS kernel: [11880.996172]  [<ffffffff8118b702>] ? core_sys_select+0x232/0x370
Sep 17 22:05:42 NAS kernel: [11880.996174]  [<ffffffff8117723a>] do_sync_write+0xda/0x120
Sep 17 22:05:42 NAS kernel: [11880.996177]  [<ffffffff812d7588>] ? apparmor_file_permission+0x18/0x20
Sep 17 22:05:42 NAS kernel: [11880.996179]  [<ffffffff8129cd1c>] ? security_file_permission+0x2c/0xb0
Sep 17 22:05:42 NAS kernel: [11880.996181]  [<ffffffff811777e1>] ? rw_verify_area+0x61/0xf0
Sep 17 22:05:42 NAS kernel: [11880.996185]  [<ffffffff81177b43>] vfs_write+0xb3/0x180
Sep 17 22:05:42 NAS kernel: [11880.996187]  [<ffffffff81177e6a>] sys_write+0x4a/0x90
Sep 17 22:05:42 NAS kernel: [11880.996189]  [<ffffffff81661ec2>] system_call_fastpath+0x16/0x1b
Sep 17 22:05:42 NAS kernel: [11880.996191] INFO: task jbd2/vdc1-8:2305 blocked for more than 120 seconds.
Sep 17 22:05:42 NAS kernel: [11880.996192] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 17 22:05:42 NAS kernel: [11880.996194] jbd2/vdc1-8     D ffffffff81806200     0  2305      2 0x00000000
Sep 17 22:05:42 NAS kernel: [11880.996196]  ffff88028ba95ce0 0000000000000046 0000000000000000 0000000000000000
Sep 17 22:05:42 NAS kernel: [11880.996199]  ffff88028ba95fd8 ffff88028ba95fd8 ffff88028ba95fd8 0000000000013780
Sep 17 22:05:42 NAS kernel: [11880.996201]  ffff88028e5a9700 ffff880265df8000 ffff88028ba95cf0 ffff88028ba95df8
Sep 17 22:05:42 NAS kernel: [11880.996203] Call Trace:
Sep 17 22:05:42 NAS kernel: [11880.996205]  [<ffffffff816579cf>] schedule+0x3f/0x60
Sep 17 22:05:42 NAS kernel: [11880.996207]  [<ffffffff8126057a>] jbd2_journal_commit_transaction+0x18a/0x1240
Sep 17 22:05:42 NAS kernel: [11880.996210]  [<ffffffff81659b6e>] ? _raw_spin_lock_irqsave+0x2e/0x40
Sep 17 22:05:42 NAS kernel: [11880.996212]  [<ffffffff81076d18>] ? lock_timer_base.isra.29+0x38/0x70
Sep 17 22:05:42 NAS kernel: [11880.996214]  [<ffffffff8108aa50>] ? add_wait_queue+0x60/0x60
Sep 17 22:05:42 NAS kernel: [11880.996220]  [<ffffffff812652fb>] kjournald2+0xbb/0x220
Sep 17 22:05:42 NAS kernel: [11880.996222]  [<ffffffff8108aa50>] ? add_wait_queue+0x60/0x60
Sep 17 22:05:42 NAS kernel: [11880.996224]  [<ffffffff81265240>] ? commit_timeout+0x10/0x10
Sep 17 22:05:42 NAS kernel: [11880.996226]  [<ffffffff81089fbc>] kthread+0x8c/0xa0
Sep 17 22:05:42 NAS kernel: [11880.996228]  [<ffffffff81664034>] kernel_thread_helper+0x4/0x10
Sep 17 22:05:42 NAS kernel: [11880.996230]  [<ffffffff81089f30>] ? flush_kthread_worker+0xa0/0xa0
Sep 17 22:05:42 NAS kernel: [11880.996231]  [<ffffffff81664030>] ? gs_change+0x13/0x13
 
