test repository updates (kvm 0.14.0)

dietmar
Proxmox Staff Member
Hi All,

I just uploaded two new packages to the 'pvetest' repository. Everybody is invited to test ;-)

pve-qemu-kvm (0.14.0-1)

* update to 0.14.0

* add fix for eventfd support (msix-eventfd-fix.patch)

* removed kvmtrace (removed from upstream?)

* add vnc keyboard fixes for fr-ca (reported by Pierre-Yves)

pve-kernel-2.6.35 (2.6.35-10)

* update to Ubuntu-2.6.35-27.48
 
In my sources.list I have:
deb http://mi.mirror.garr.it/mirrors/debian/ lenny main contrib
deb http://download.proxmox.com/debian lenny pve
deb http://download.proxmox.com/debian lenny pvetest
deb http://security.debian.org/ lenny/updates main contrib

I ran aptitude update, but nevertheless I get:
# apt-cache policy pve-qemu-kvm
pve-qemu-kvm:
Installed: 0.13.0-3
Candidate: 0.13.0-3
Version table:
*** 0.13.0-3 0
500 http://download.proxmox.com lenny/pve Packages
100 /var/lib/dpkg/status
0.13.0-2 0
500 http://download.proxmox.com lenny/pve Packages
500 http://download.proxmox.com lenny/pvetest Packages
0.12.5-2 0
500 http://download.proxmox.com lenny/pvetest Packages
proxmox:~#

What am I doing wrong? Or are the packages intended to be fetched with wget and installed with dpkg -i? (That is what I am doing right now to test kvm 0.14.)
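For reference, one way to rule out a stale index is to refresh and then inspect the downloaded pvetest package list directly (a sketch; the exact filename under /var/lib/apt/lists/ depends on your mirror and architecture):

```shell
# Refresh all package indexes, then check which version apt would install
apt-get update
apt-cache policy pve-qemu-kvm

# If 0.14.0-1 still does not show up, look into the fetched pvetest index
# itself (adjust the filename to whatever exists on your system):
grep -A 2 '^Package: pve-qemu-kvm' \
    /var/lib/apt/lists/download.proxmox.com_debian_dists_lenny_pvetest_*_Packages
```

If the version is missing from the index file too, the mirror has not picked up the new packages yet and another update later should help.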
 
I can't test again right now, but yesterday I also tried it (removed pve, aptitude update, apt-cache policy) and that did not help.
In any case, I have installed the .deb and rebooted Proxmox. My KVM guests are working fine so far, but (fortunately) I only have GNU/Linux guests at home.
 
Hi,

updated our test server and it worked without any problems. Only the USB Anywhere USB-to-IP server didn't work :(

Regards, Valle
 
Hi,
the update with pvetest works well, but I had one issue.
First (not really an issue): a new kernel with the same name was installed (this was discussed in a thread before). My two test machines have a Dolphin NIC, which needs some kernel modules. Because of the identical kernel name, the existing modules (/opt/DIS/lib/modules/2.6.35-1-pve) get overwritten, so I had to save them beforehand.
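A minimal sketch of working around that, assuming the module path mentioned above; the backup name is just an example:

```shell
# Save the third-party Dolphin modules before the new kernel package
# (same version string, so dpkg overwrites the directory) is installed
cp -a /opt/DIS/lib/modules/2.6.35-1-pve \
      /opt/DIS/lib/modules/2.6.35-1-pve.bak

# ...install the new pve-kernel package, then restore or, better,
# rebuild/reinstall the Dolphin driver against the new kernel
```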

Now the issue: after starting both nodes, syncing the DRBD resources via Dolphin, and starting one VM, I got an error (while the sync was in progress):
Code:
block drbd0: error receiving RSDataReply, l: 32792!
BUG: soft lockup - CPU#3 stuck for 61s! [mt_dis_dx/3:4987]
Modules linked in: sha1_generic drbd lru_cache dis_sisci dis_ssocks dis_msq dis_mbox vhost_net kvm_amd kvm dis_irm dis_dx msr ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi dummy0 bridge 8021q garp stp snd_hda_codec_atihdmi tpm_tis psmouse serio_raw snd_hda_codec_via tpm tpm_bios edac_core edac_mce_amd k10temp pcspkr asus_atk0110 i2c_piix4 snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_timer snd soundcore snd_page_alloc shpchp ohci1394 ieee1394 usbhid hid atl1e pata_atiixp firewire_ohci firewire_core crc_itu_t arcmsr e1000e ahci libahci floppy [last unloaded: scsi_wait_scan]
CPU 3 
Modules linked in: sha1_generic drbd lru_cache dis_sisci dis_ssocks dis_msq dis_mbox vhost_net kvm_amd kvm dis_irm dis_dx msr ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi dummy0 bridge 8021q garp stp snd_hda_codec_atihdmi tpm_tis psmouse serio_raw snd_hda_codec_via tpm tpm_bios edac_core edac_mce_amd k10temp pcspkr asus_atk0110 i2c_piix4 snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_timer snd soundcore snd_page_alloc shpchp ohci1394 ieee1394 usbhid hid atl1e pata_atiixp firewire_ohci firewire_core crc_itu_t arcmsr e1000e ahci libahci floppy [last unloaded: scsi_wait_scan]

Pid: 4987, comm: mt_dis_dx/3 Not tainted 2.6.35-1-pve #1 M4A78T-E/System Product Name
RIP: 0010:[<ffffffffa028747a>]  [<ffffffffa028747a>] dx_pkt_handler+0x11a/0x3c0 [dis_dx]
RSP: 0018:ffff880230cd3d30  EFLAGS: 00000287
RAX: 0000000000002802 RBX: ffff880230cd3dd0 RCX: 0000000000000000
RDX: 00000000fffffffe RSI: 0000000000000000 RDI: 0000000083c00204
RBP: ffffffff8100a6ce R08: ffff880005b40000 R09: 0000000000000000
R10: 0000000100000003 R11: 0000000000000001 R12: ffffffffa0305ce0
R13: ffffffffa0305d40 R14: 0000000000000000 R15: 0000000200000002
FS:  00007f3d34c2f6e0(0000) GS:ffff880001ec0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007fadeb836000 CR3: 0000000001a2a000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process mt_dis_dx/3 (pid: 4987, threadinfo ffff880230cd2000, task ffff88023403dbc0)
Stack:
 ffff880230cd3d70 000000008104b07e 0000000000000000 ffff88022fc32078
<0> ffff880230cd3de0 ffff880230cd3d68 ffffffff8102e0e7 ffff880230cd3d90
<0> ffff880202000000 0000000130cd3d88 83c0020400000000 ffff880200000000
Call Trace:
 [<ffffffff8102e0e7>] ? default_spin_lock_flags+0x9/0xe
 [<ffffffff814b40c3>] ? _raw_spin_lock_irqsave+0x27/0x31
 [<ffffffff8106ef0f>] ? down+0x38/0x3d
 [<ffffffffa02856d7>] ? dx_deferred_isr+0x357/0x470 [dis_dx]
 [<ffffffffa02915af>] ? workqueue_dispatch+0x13/0x15 [dis_dx]
 [<ffffffff8106706b>] ? worker_thread+0x1a9/0x24d
 [<ffffffff814b294b>] ? schedule+0x59d/0x602
 [<ffffffffa029159c>] ? workqueue_dispatch+0x0/0x15 [dis_dx]
 [<ffffffff8106ada0>] ? autoremove_wake_function+0x0/0x3d
 [<ffffffff81066ec2>] ? worker_thread+0x0/0x24d
 [<ffffffff8106a8b8>] ? kthread+0x82/0x8a
 [<ffffffff8100ab24>] ? kernel_thread_helper+0x4/0x10
 [<ffffffff8106a836>] ? kthread+0x0/0x8a
 [<ffffffff8100ab20>] ? kernel_thread_helper+0x0/0x10
Code: ff ff ff eb 16 0f 1f 40 00 8d 1c 1a 81 e3 ff 3f 00 00 41 39 de 0f 84 ae 01 00 00 4d 8b 85 78 20 00 00 89 d8 41 8b 0c 80 8d 43 01 <25> ff 3f 00 00 89 c2 8d 58 01 89 4d b0 41 8b 3c 90 81 e3 ff 3f 
Call Trace:
 [<ffffffff8102e0e7>] ? default_spin_lock_flags+0x9/0xe
 [<ffffffff814b40c3>] ? _raw_spin_lock_irqsave+0x27/0x31
 [<ffffffff8106ef0f>] ? down+0x38/0x3d
 [<ffffffffa02856d7>] ? dx_deferred_isr+0x357/0x470 [dis_dx]
 [<ffffffffa02915af>] ? workqueue_dispatch+0x13/0x15 [dis_dx]
 [<ffffffff8106706b>] ? worker_thread+0x1a9/0x24d
 [<ffffffff814b294b>] ? schedule+0x59d/0x602
 [<ffffffffa029159c>] ? workqueue_dispatch+0x0/0x15 [dis_dx]
 [<ffffffff8106ada0>] ? autoremove_wake_function+0x0/0x3d
 [<ffffffff81066ec2>] ? worker_thread+0x0/0x24d
 [<ffffffff8106a8b8>] ? kthread+0x82/0x8a
 [<ffffffff8100ab24>] ? kernel_thread_helper+0x4/0x10
 [<ffffffff8106a836>] ? kthread+0x0/0x8a
 [<ffffffff8100ab20>] ? kernel_thread_helper+0x0/0x10
proxpm-b:~# 
Message from syslogd@proxpm-b at Mar  2 15:46:41 ...
 kernel:Stack:

Message from syslogd@proxpm-b at Mar  2 15:46:41 ...
 kernel:Call Trace:

Message from syslogd@proxpm-b at Mar  2 15:46:41 ...
 kernel:Code: 20 00 00 0f 84 0d 02 00 00 41 0f b6 cf 89 8d 6c ff ff ff eb 16 0f 1f 40 00 8d 1c 1a 81 e3 ff 3f 00 00 41 39 de 0f 84 ae 01 00 00 <4d> 8b 85 78 20 00 00 89 d8 41 8b 0c 80 8d 43 01 25 ff 3f 00 00 

proxpm-b:~# 
Message from syslogd@proxpm-b at Mar  2 15:47:46 ...
 kernel:Stack:

Message from syslogd@proxpm-b at Mar  2 15:47:46 ...
 kernel:Call Trace:

Message from syslogd@proxpm-b at Mar  2 15:47:46 ...
 kernel:Code: ff ff ff eb 16 0f 1f 40 00 8d 1c 1a 81 e3 ff 3f 00 00 41 39 de 0f 84 ae 01 00 00 4d 8b 85 78 20 00 00 89 d8 41 8b 0c 80 8d 43 01 <25> ff 3f 00 00 89 c2 8d 58 01 89 4d b0 41 8b 3c 90 81 e3 ff 3f
This machine was only syncing; the running VM was on the other node.

The error came from the Dolphin module, so perhaps I should ask them... but with the 2.6.35 kernel from the pve repository everything works.

Udo
 
...
The error came from the dolphin module, so perhaps i should ask them...
Hi,
there is a new driver for the Dolphin NIC available; after some trouble getting it installed, everything runs now.
But it is a little too early to say anything conclusive.

I will do some stress tests during the weekend.

Udo
 
I performed an upgrade on two Proxmox servers that host both Linux and Windows 2008 R2 guests. All guests use virtio for both drives and network cards. What I noticed is that after some time, the Windows guests' virtio network becomes unresponsive, so the Windows guests become unreachable from the network.

As a workaround, I replaced the virtio NIC with an e1000, and it's been working so far. Does this require new Windows virtio drivers? Or some other settings?
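As an illustrative sketch of that workaround (the VM ID 101 is a placeholder, and the exact option name in the config differs between PVE versions, so check the file first):

```shell
# Proxmox 1.x keeps guest configs under /etc/qemu-server/; with the VM
# stopped, swap the NIC model from virtio to e1000 while keeping the
# same MAC address, so Windows does not treat it as a brand-new adapter
sed -i.bak 's/virtio=/e1000=/' /etc/qemu-server/101.conf
```

The `.bak` copy makes it easy to switch back to virtio later for testing.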

Any help would be appreciated.

Thanks
 
Just for the record, are you using the latest Windows virtio drivers from Fedora (1.1.16)?
By the way, the problem of a "frozen" Windows network interface happens to me too from time to time, at least in XP and with an older kvm. The interface is up from the guest's point of view, but does not work until I "repair" it (right-click on the interface; I have the Italian version of XP, so I don't know the exact translation). Did this never happen to you before the upgrade?
 
Hi!

Correct, it never happened before the upgrade, and I am using the 1.1.16 virtio drivers.

However, it seems that new packages were put up in the pvetest repository (since the initial announcement above), and I updated the hosts. I have switched some Windows guests' NICs back from e1000 to virtio, and so far there has been no disconnection. I will post the status in a week ... I am hoping that this has been resolved.

Thanks all!
 
Unfortunately, after a few hours, they got disconnected again ... no pertinent errors inside the Windows guests either ... I am stumped ...

At least a workaround exists (use the e1000 driver), but personally I prefer the virtio drivers ...
 
The problem with 2008 R2 networking and the virtio driver has been present for a while now, I think since v1.7, at least for me.
No problems with Linux VMs.
 
Hi,

updated our test server and it worked without any problems. Only the USB Anywhere USB-to-IP server didn't work :(

Regards, Valle
Hi Valle,
I just did a test with a WinXP VM and a connected USB Anywhere device on a machine with the current pvetest:
pve-manager: 1.7-12 (pve-manager/1.7/5490)
running kernel: 2.6.35-1-pve
proxmox-ve-2.6.35: 1.7-10
pve-kernel-2.6.35-1-pve: 2.6.35-10
qemu-server: 1.1-29
pve-firmware: 1.0-10
libpve-storage-perl: 1.0-16
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-11
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.14.0-2
ksm-control-daemon: 1.0-5

The USB dongle access works without problems. Anywhere version: 3.10.30; root hub: 1.5.0. The box was something with "/5".

Udo
 
May I ask if you guys are planning to promote this to a stable release on the 1.x baseline (maybe a 1.7.1 release or so)? Thx!
 
Hopefully in the coming week; all packages are already in pvetest.
 
