We've also had problems with the VirtIO drivers (downloaded from the Fedora site): the network stops responding at least once every 24 hours.
A "Disable/Enable" cycle would fix the problem temporarily.
The final solution was to disable "Offload Tx LSO" in Device Manager for the VirtIO NICs.
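For what it's worth, on a Linux guest the rough analogue of that Device Manager setting can be toggled with ethtool. A sketch only; eth0 is a placeholder for the actual interface name:

```
# Disable TCP Segmentation Offload (the Linux analogue of "Offload Tx LSO")
# on the virtio NIC; eth0 is a placeholder.
ethtool -K eth0 tso off

# Verify the current offload settings:
ethtool -k eth0 | grep tcp-segmentation-offload
```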
Great :-)
In our system (Debian 5.0/Proxmox 1.9) the read_ahead_kb parameter is set to 128 on both the LVM devices and the iSCSI disks, and the LVM stripe size is 64k.
(I figured there is no point in "optimizing", since workloads will vary greatly.)
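In case anyone wants to check their own values, this is roughly how readahead can be inspected and set (device names dm-0/sdb are placeholders):

```
# Current readahead in KB for an LVM device and an iSCSI disk:
cat /sys/block/dm-0/queue/read_ahead_kb
cat /sys/block/sdb/queue/read_ahead_kb

# Equivalent via blockdev; note its values are in 512-byte sectors,
# so 256 sectors = 128 KB:
blockdev --getra /dev/dm-0
# blockdev --setra 256 /dev/dm-0    # would set 128 KB readahead
```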
Our version of multipath-tools is 0.4.8.
The...
Hmm, I don't know how well CentOS 5 works with this hardware.
We are using a cheap 3Com (now HP) switch here (V1910), the cfq I/O scheduler for the /dev/sd? and /dev/dm-? devices, and cache=writethrough plus the noop I/O scheduler for the guests.
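For reference, the scheduler settings above boil down to something like this (device names sda/dm-0/vda are placeholders):

```
# On the hypervisor: cfq for the SCSI and device-mapper devices:
echo cfq > /sys/block/sda/queue/scheduler
echo cfq > /sys/block/dm-0/queue/scheduler

# Inside the guest: noop (or boot the guest with elevator=noop):
echo noop > /sys/block/vda/queue/scheduler
```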
What's your multipath configuration like?
devices {
device {...
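For anyone unfamiliar with the format, a multipath.conf devices section generally looks something like the following. All values here are illustrative only, not our actual configuration:

```
devices {
    device {
        # Illustrative values only -- not our actual configuration.
        vendor                "DELL"
        product               "MD36xxi"
        path_grouping_policy  group_by_prio
        path_checker          rdac
        failback              immediate
        no_path_retry         15
    }
}
```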
Just a quick remark for LVM over iSCSI setup, as suggested by wiki article:
http://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing
The problem is that Proxmox creates virtual disks with cache=none, but iSCSI performs much better with cache=writethrough.
This can easily be...
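For illustration, the cache mode ends up as an option on the disk line in the VM's config file. A sketch only; the VMID (101), storage name, and volume name are assumptions:

```
# /etc/qemu-server/101.conf -- illustrative disk line only:
virtio0: mystorage:vm-101-disk-1,cache=writethrough
```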
1) Could you check your network settings? MTU, flow control on the switches, etc.
2) Also, consider changing the I/O scheduler on the guests (noop instead of cfq) and on the hypervisor.
3) Check your cache settings for virtual disks; by default Proxmox adds cache=none to disks, but the default...
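The first two checks above can be sketched roughly as follows (eth1 and sda are placeholders for the iSCSI interface and disk):

```
# 1) MTU and flow control on the iSCSI-facing interface:
ip link show eth1 | grep mtu
ethtool -a eth1                  # show pause (flow control) settings
# ethtool -A eth1 rx on tx on    # would enable flow control

# 2) Current and new I/O scheduler:
cat /sys/block/sda/queue/scheduler
echo noop > /sys/block/sda/queue/scheduler
```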
We are testing Dell storage (an MD3620i) with dual active iSCSI controllers, using a 1Gbit switch
and dual-port 1Gbit NICs on the Proxmox hosts (no 10Gbit stuff yet).
Unfortunately, only one controller at a time can access a given virtual disk on the storage, so we did the following:
- Configured and tested...
Great guys!
One question though, how can we specify 'vhost=on' (i.e. VhostNet) for our virtual machines?
Edit: Answering my own question: module 'vhost_net' must be loaded :-)
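For anyone else hitting this, roughly what's needed (sketch):

```
# Load the module now:
modprobe vhost_net

# Make it persistent across reboots:
echo vhost_net >> /etc/modules

# Confirm it is loaded:
lsmod | grep vhost_net
```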
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)
I can also confirm that everything works now.
pve-manager: 1.5-1 (pve-manager/1.5/4561)
running kernel: 2.6.32-1-pve
proxmox-ve-2.6.32: 1.5-2
pve-kernel-2.6.32-1-pve: 2.6.32-2
pve-kernel-2.6.24-10-pve: 2.6.24-21
qemu-server: 1.1-10...
I've read some docs about the iptables "physdev match" module and managed to get a simple firewall working, so we can do some firewalling without knowing anything about the IP addressing inside the KVM guests.
Hope someone will find this info useful :-)
Traffic flow:
Incoming traffic:
--> [eth0]...
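The kind of rules involved look roughly like this; the tap interface name (tap101i0) is a placeholder for illustration:

```
# Bridged traffic only traverses the FORWARD chain if bridge netfilter
# is enabled:
sysctl -w net.bridge.bridge-nf-call-iptables=1

# Match traffic entering/leaving the bridge via a guest's tap interface,
# no guest IP addresses needed (tap101i0 is a placeholder name):
iptables -A FORWARD -m physdev --physdev-in  tap101i0 -p tcp --dport 25 -j DROP
iptables -A FORWARD -m physdev --physdev-out tap101i0 -p tcp --dport 22 -j ACCEPT
```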
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)
To shorten the output, I've filtered the kernel log with 'grep eth'.
This is an IBM x3650 M2 with an integrated 'Broadcom NetXtreme II BCM5709' and an additional dual-port Intel(R) PRO/1000 NIC.
2.6.24 kernel
Dec 29 09:45:16 kvm5 kernel: eth0...
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)
Dietmar, maybe this can help: the initramfs output after installing the 2.6.32 kernel.
update-initramfs: Generating /boot/initrd.img-2.6.32-1-pve
W: Possible missing firmware /lib/firmware/bnx2/bnx2-rv2p-09ax-5.0.0.j3.fw for module bnx2
W...
Hi guys,
I was reading the discussion about the Proxmox VE kernel with/without support for OpenVZ. As I understand it, a large number of people really need lightweight virtualization (which is fine). My main concern is that OpenVZ will never be included in the standard kernel, so Proxmox will always...
For the initial installation you need a virtual floppy image, probably contained in Red Hat's RPM: virtio-win-1.0.0-2.31383.el5.noarch.rpm
(currently it's not available on ftp.redhat.com).
Another option would be to clone the source...
Maybe it's a bug in KVM-86?
http://www.mail-archive.com/kvm@vger.kernel.org/msg16449.html
It seems there are issues with 64-bit guests. Can someone from the Proxmox team confirm that?