Multipath and QEMU: /dev/sdc: read failed

Kaya

Member
Jun 20, 2012
Again about the IMS, dual controllers and multipath!
I'm not sure if it's Proxmox-related, but I'll ask anyway.

It seems that QEMU, or some script that calls it, tries to access the disks directly.
When I run a command like "qm stop 100", I see:
Code:
  /dev/sdc: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdc: read failed after 0 of 4096 at 268435390464: Input/output error
  /dev/sdc: read failed after 0 of 4096 at 268435447808: Input/output error
  /dev/sdc: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdd: read failed after 0 of 512 at 0: Input/output error
  /dev/sdd: read failed after 0 of 512 at 598997532672: Input/output error
  /dev/sdd: read failed after 0 of 512 at 598997602304: Input/output error
  /dev/sdd: read failed after 0 of 512 at 4096: Input/output error
TASK OK

With strace I see:
Code:
ioctl(6, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff49976b10) = -1 EINVAL (Invalid argument)
lseek(6, 0, SEEK_CUR)                   = -1 ESPIPE (Illegal seek)
fstat(6, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
fcntl(6, F_SETFD, FD_CLOEXEC)           = 0
select(8, [6], NULL, NULL, {1, 0})      = 0 (Timeout)
open("/proc/18827/stat", O_RDONLY)      = 10
ioctl(10, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff49976b10) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(10, 0, SEEK_CUR)                  = 0
fstat(10, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
fcntl(10, F_SETFD, FD_CLOEXEC)          = 0
read(10, "18827 (qm) S 18824 18827 18827 0"..., 4096) = 255
close(10)                               = 0
select(8, [6], NULL, NULL, {1, 0})      = 1 (in [6], left {0, 982618})
read(6, "  /dev/sdc: read failed after 0 "..., 4096) = 499
write(1, "  /dev/sdc: read failed after 0 "..., 65) = 65
write(7, "  /dev/sdc: read failed after 0 "..., 65) = 65
write(1, "  /dev/sdc: read failed after 0 "..., 76) = 76
write(7, "  /dev/sdc: read failed after 0 "..., 76) = 76
write(1, "  /dev/sdc: read failed after 0 "..., 76) = 76
write(7, "  /dev/sdc: read failed after 0 "..., 76) = 76
write(1, "  /dev/sdc: read failed after 0 "..., 68) = 68
write(7, "  /dev/sdc: read failed after 0 "..., 68) = 68
write(1, "  /dev/sdd: read failed after 0 "..., 64) = 64
write(7, "  /dev/sdd: read failed after 0 "..., 64) = 64
write(1, "  /dev/sdd: read failed after 0 "..., 75) = 75
write(7, "  /dev/sdd: read failed after 0 "..., 75) = 75
write(1, "  /dev/sdd: read failed after 0 "..., 75) = 75
write(7, "  /dev/sdd: read failed after 0 "..., 75) = 75
select(8, [6], NULL, NULL, {1, 0})      = 1 (in [6], left {0, 987509})
read(6, "  /dev/sdd: read failed after 0 "..., 4096) = 67
write(1, "  /dev/sdd: read failed after 0 "..., 67) = 67
write(7, "  /dev/sdd: read failed after 0 "..., 67) = 67
select(8, [6], NULL, NULL, {1, 0})      = 1 (in [6], left {0, 999998})
read(6, "TASK OK\n", 4096)              = 8
write(7, "TASK OK\n", 8)                = 8
select(8, [6], NULL, NULL, {1, 0})      = 1 (in [6], left {0, 999324})
--- SIGCHLD (Child exited) @ 0 (0) ---

Devices sdc and sdd are the same disks as sda and sdb, just seen through the second storage controller; according to the multipath setup, QEMU should not touch those devices at all.
The VMs work, but I'm not sure what is happening on the system, whether I need to worry about these errors, or whether I should report them on the QEMU forum/mailing list.

Any ideas/hints?
Thanks in advance
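For what it's worth, messages like these usually come not from QEMU itself but from LVM's device scan (vgs/pvs, which the qm tooling invokes), which probes every block device it can see, including the passive ALUA paths. A common mitigation — a sketch only, assuming all of the node's LVM storage sits on the multipath maps (e.g. /dev/mapper/system, /dev/mapper/vmdisks) and not on any raw local disk — is to filter the raw paths out of the scan in /etc/lvm/lvm.conf:

```
# /etc/lvm/lvm.conf -- sketch; verify against your own layout first.
# Assumes every PV lives on a device-mapper multipath map, so the raw
# SAN paths (sda..sdd) can be rejected from scanning outright.
devices {
    filter = [ "a|^/dev/mapper/|", "r|.*|" ]
}
```

After changing the filter, `pvscan` should list the PVs only via /dev/mapper/* and the "read failed" lines on the raw paths should stop appearing.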
 
Could it be a multipath configuration error?
Different SAN devices need different configurations...
Can I see your multipath config and the output of "multipath -ll"?

I had similar problems in the past because the BIOS of the SAN was configured incorrectly, especially when the SAN requires RDAC.
There is an old post of mine that explains the proper BIOS settings.
The symptom is constant hopping between channels A and B: A faults and fails over to B, then B faults and fails over to A.

Luca
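For arrays that do need RDAC, the relevant knobs live in a `device` stanza in /etc/multipath.conf. An illustrative sketch only — the vendor/product strings below are placeholders for an actual RDAC-based array, not taken from this thread; check `multipath -ll` and the array documentation for the real values:

```
# /etc/multipath.conf -- illustrative RDAC stanza; vendor/product
# strings are placeholders, adjust for the actual array model.
device {
    vendor               "IBM"
    product              "1815"       # example DS-series product id
    hardware_handler     "1 rdac"     # use the kernel's RDAC handler
    path_checker         rdac         # probe paths via RDAC commands
    prio                 rdac         # prioritize the owning controller
    path_grouping_policy group_by_prio
    failback             immediate
}
```

With a wrong or missing stanza, multipath sends I/O down the non-owning controller, which is what produces the constant A/B failover ping-pong described above.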
 

Wow that was quick ;-)

I have followed the wiki article about the Intel Modular Server with multipath (http://pve.proxmox.com/wiki/Intel_Modular_Server#Optional:_Configure_Multi-Path).

Code:
root@timo:~# multipath -ll
vmdisks (2225d0001557dd1ea) dm-1 Intel,Multi-Flex
size=1000G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=130 status=active
| `- 0:0:0:10 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 0:0:1:10 sdd 8:48 active ready running
system (22275000155ebddcd) dm-0 Intel,Multi-Flex
size=300G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=130 status=active
| `- 0:0:0:0  sda 8:0  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 0:0:1:0  sdc 8:32 active ready running
 
OK, I did not realize you were using Intel storage.
I use various IBM SANs, and they require a different configuration.

Is there a relevant setting in the SAN BIOS?
Is the BIOS on the Fibre Channel cards up to date?

Luca
 
