Hi Guys,
Can you post your vmid.conf, the guest OS / guest kernel version, and also the Proxmox host kernel version?
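For reference, a quick sketch of the commands usually used to collect that information (uname runs inside the guest, the rest on the Proxmox host; <vmid> is a placeholder):

# inside the guest: guest OS and kernel version
cat /etc/issue
uname -a

# on the Proxmox host: host kernel and package versions
pveversion -v

# on the Proxmox host: the VM configuration
cat /etc/pve/qemu-server/<vmid>.conf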
Hi Guys,
Can you post your vmid.conf, the guest OS / guest kernel version, and also the Proxmox host kernel version?
Guest OS kernel:
Linux proxcpt2 2.6.18-406.el5PAE #1 SMP Tue Jun 2 18:06:34 EDT 2015 i686 i686 i386 GNU/Linux
pveversion -v
proxmox-ve-2.6.32: 3.4-156 (running kernel: 2.6.32-39-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-39-pve: 2.6.32-156
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
cat /etc/pve/qemu-server/*.conf
cat 101.conf
boot: cdn
bootdisk: virtio0
cores: 2
ide2: local:iso/smeserver-8.1-i386.iso,media=cdrom
memory: 3072
name: sme8-.182
net0: virtio=12:53:17:38:98:B6,bridge=vmbr0
net1: rtl8139=7E:8D:25:BE:84:1F,bridge=vmbr1
onboot: 1
ostype: l26
smbios1: uuid=252969b4-92d3-40b2-9d39-e12b2d94d190
sockets: 1
virtio0: local:101/vm-101-disk-1.qcow2,format=qcow2,size=50G
cat 102.conf
boot: cdn
bootdisk: virtio0
cores: 2
ide2: local:iso/smeserver-8.1-i386.iso,media=cdrom
memory: 5120
name: sme8.184
net0: virtio=4E:28:58:73:3F:1A,bridge=vmbr0
net1: rtl8139=E6:07:87:F2:E1:79,bridge=vmbr1
onboot: 1
ostype: l26
smbios1: uuid=fda617d2-6fae-4bb2-9b38-4093ebf0eed2
sockets: 1
virtio0: local:102/vm-102-disk-1.qcow2,format=qcow2,size=50G
2.6.18-406.el5PAE
Hi Guys,
Can you post your vmid.conf, the guest OS / guest kernel version, and also the Proxmox host kernel version?
BTW, I switched all VMs to IDE yesterday.
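In case anyone wants to reproduce that change, it amounts to renaming the disk key in the VM config while the VM is stopped; a minimal sketch based on 101.conf above (the bus change is guest-visible, so check the guest's bootloader/fstab afterwards):

# /etc/pve/qemu-server/101.conf, edited while the VM is stopped
# before:
bootdisk: virtio0
virtio0: local:101/vm-101-disk-1.qcow2,format=qcow2,size=50G
# after:
bootdisk: ide0
ide0: local:101/vm-101-disk-1.qcow2,format=qcow2,size=50G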
Changing all our VMs to raw format would be a hard task. Furthermore, I don't know whether it's even possible in our architecture.
Hi,
I just had a VM lock up for the same reason. The VM is Debian 7 and the disk is in qcow2 format.
Can anybody confirm: does changing the disk format to raw fix this problem?
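If you want to test it, the conversion itself can be done with qemu-img; a sketch assuming VM 101 on the default 'local' storage as in the configs above (stop the VM first, and make sure the storage has room for the full virtual size):

# on the Proxmox host, with VM 101 stopped
cd /var/lib/vz/images/101
qemu-img convert -f qcow2 -O raw vm-101-disk-1.qcow2 vm-101-disk-1.raw
# then point the disk line in /etc/pve/qemu-server/101.conf at the new file:
#   virtio0: local:101/vm-101-disk-1.raw,format=raw,size=50G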
Are you still on PVE 3.x, or 4.x?

I'm well aware that this is a fairly old thread, but as I ran into the same problems while evaluating Proxmox, I think this is the best place to publish my solution, in case it's not common knowledge and best practice yet. Anyway:
Using the Phoronix / OpenBenchmarking test suite pts/fio, I saw exactly the same error messages and wait states.
After applying a disk throttle of 200 MB/s for both read and write, the problem disappeared.
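For reference, such a throttle can be expressed directly in the VM config via the mbps_rd/mbps_wr options on the disk line; a sketch based on 101.conf above (values in MB/s):

# /etc/pve/qemu-server/101.conf -- cap reads and writes at 200 MB/s each
virtio0: local:101/vm-101-disk-1.qcow2,format=qcow2,size=50G,mbps_rd=200,mbps_wr=200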
There are still occasional error messages:
Nov 19 14:26:42 centos-a1 kernel: ata2: lost interrupt (Status 0x58)
Nov 19 14:26:42 centos-a1 kernel: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Nov 19 14:26:42 centos-a1 kernel: ata2.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 0 pio 16392 in
                                           Get event status notification 4a 01 00 00 10 00 00 00 08 00
                                           res 40/00:02:00:08:00/00:00:00:00:00/a0 Emask 0x4 (timeout)
Nov 19 14:26:42 centos-a1 kernel: ata2.00: status: { DRDY }
Nov 19 14:26:42 centos-a1 kernel: ata2: soft resetting link
Nov 19 14:26:43 centos-a1 kernel: ata2.00: configured for MWDMA2
Nov 19 14:26:43 centos-a1 kernel: ata2: EH complete
In the statistics I still see peaks above 1 GB/s that are related to this error message, but I'm optimistic that I can get rid of them by playing with the disk throttle.
Please find attached a screenshot of the I/O diagram; the first half shows the behaviour before throttling the throughput.
Cheers,
Thomas