As I mentioned in my previous post, the IDE bus for the hard disk works for me, although I had VirtIO SCSI as the controller.
I'm currently running with the same workaround: IDE bus for the hard disk and VirtIO SCSI as the controller.
My git bisect revealed a commit related to virtio. The SCSI controller types VirtIO SCSI and VirtIO SCSI Single make it possible to crash my VMs.
Which commit?
I didn't want to hijack this topic for my own issue. Please see this post for the answer to your question and additional details: https://forum.proxmox.com/threads/vm-crash-with-memory-hotplug.35904/#post-181622
# cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
# cat /sys/kernel/mm/transparent_hugepage/defrag
always defer [madvise] never
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
echo madvise > /sys/kernel/mm/transparent_hugepage/defrag
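(Side note: the echo commands above only last until the next reboot. A minimal sketch of making the madvise setting for 'enabled' persistent, assuming the host boots via GRUB; the defrag knob would still need the sysfs echo:)
Code:
# /etc/default/grub: append the parameter to the existing kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet transparent_hugepage=madvise"
# regenerate the GRUB config, then reboot the host
update-grub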
Hi aderumier,
both options were already set to "madvise" on my system as well (whatever that means):
Code:
# cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
# cat /sys/kernel/mm/transparent_hugepage/defrag
always defer [madvise] never
I stopped my machine, set both options to never and changed my disks to scsi again (they were IDE before). But the Windows guest fails with "no boot device" after showing the spinning wheel on the blue background and booting for a while.
I even restored the VM to an earlier state, but I can't get it to boot anymore. It fails with a blue screen while collecting a memory dump and then reboots again.
Luckily I did this on my test system, so the production system is still on IDE and has been running fine since the switch.
I can't tell whether switching my disks from SCSI to IDE and back again did any harm, but for me it is not working right now.
Do you mean the transparent_hugepage setting is unrelated to the issue in general, or to the problem with my VM that doesn't boot anymore?
It's 100% unrelated. Note that if you change a disk from IDE to SCSI or from SCSI to IDE, you need to change the boot drive in the VM options each time.
I mean, transparent hugepages can't impact boot (maybe Windows doesn't like switching between IDE and SCSI, I really don't know).
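(For reference, a minimal sketch of changing the boot drive from the CLI after moving a disk between buses; the VM ID 100 and the disk name scsi0 are placeholders:)
Code:
qm set 100 --bootdisk scsi0
qm config 100 | grep -i boot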
I can't rule out the second assumption; I haven't had time yet to investigate any further.
I have a question: is anyone seeing I/O issues when using LXC?
I have no I/O issues with LXC. I have a mail server running on the same host and it performs perfectly fine and does not drop or delay packets even under load.
Thank you, we will change from KVM to LXC for key systems.
Transparent hugepages could only impact performance.
BTW, I have built the latest pve-qemu-kvm with the patch for @hansm's bug (which is virtio related, so maybe it could improve performance too):
http://odisoweb1.odiso.net/pve-qemu-kvm_2.9.1-1_amd64.deb
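(A minimal sketch of trying that test build on a node, using standard Debian tooling; note that running VMs keep the old QEMU binary until they are stopped and started again:)
Code:
wget http://odisoweb1.odiso.net/pve-qemu-kvm_2.9.1-1_amd64.deb
dpkg -i pve-qemu-kvm_2.9.1-1_amd64.deb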
I just upgraded the PVE node and this was installed, then I restarted all KVM guests.
Code:
pve-qemu-kvm (2.9.1-1)
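(For reference, one way to confirm which pve-qemu-kvm build a node actually has installed:)
Code:
pveversion -v | grep pve-qemu-kvm
dpkg -l pve-qemu-kvm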
Have you tested pve-qemu-kvm (2.9.1-1)?
What makes my setup any different from the people that are having issues?
Interesting, in fact. One difference I see now is that you are using "VirtIO SCSI single" and I haven't even tried that; I was simply using "VirtIO SCSI".
All my VMs (~25) are configured with virtio disks using the 'virtio scsi single' controller type.
The virtio-scsi and virtio-scsi-single controllers are for SCSI disks.
Can you tell the difference? Is it one single port then?
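(To illustrate the controller point, a hedged sketch of how the relevant lines can look in a VM config such as /etc/pve/qemu-server/<vmid>.conf; the storage and disk names are placeholders. "VirtIO SCSI single" differs from plain "VirtIO SCSI" mainly in that each SCSI disk gets its own controller, which is useful when combining it with IO threads, while a 'virtio' disk is virtio-blk and does not go through the SCSI controller at all:)
Code:
# SCSI disk behind the "VirtIO SCSI single" controller type (one controller per disk)
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-1,size=32G

# a plain virtio-blk disk, by contrast, bypasses the SCSI controller
virtio0: local-lvm:vm-100-disk-1,size=32G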