Proxmox VE 5.0 beta1 released!

hawk128

New Member
May 22, 2017
Hi everybody,
I have installed Proxmox 5.0 onto several servers by installing Debian 9 first. On all of them, the Network Traffic graphs for the nodes themselves (not the guests) do not work. Could you check and confirm this, or is it just my setup?
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
Hi everybody,
I have installed Proxmox 5.0 onto several servers by installing Debian 9 first. On all of them, the Network Traffic graphs for the nodes themselves (not the guests) do not work. Could you check and confirm this, or is it just my setup?
Thanks for the pointer - it seems pvestatd does not handle enoX devices.
 

camouflageX

New Member
May 22, 2017
Hello, we are currently testing Proxmox VE 5.0 Beta 1, and I am getting these messages almost every second when I run "ceph -w":
2017-05-22 15:21:14.184174 mon.0 [INF] mgrmap e224786: active: 0 standbys: 1, 2
2017-05-22 15:21:15.186787 mon.0 [INF] mgrmap e224787: active: 0 standbys: 1, 2
2017-05-22 15:21:16.195813 mon.0 [INF] mgrmap e224788: active: 0 standbys: 1, 2
2017-05-22 15:21:19.184151 mon.0 [INF] mgrmap e224789: active: 0 standbys: 1, 2
2017-05-22 15:21:20.186720 mon.0 [INF] mgrmap e224790: active: 0 standbys: 1, 2
2017-05-22 15:21:21.195066 mon.0 [INF] mgrmap e224791: active: 0 standbys: 1, 2


Unfortunately, I could not find any information about it on the Internet. Health is OK. What does it mean?
 

hawk128

New Member
May 22, 2017
It is not only enoX; the interfaces can have different names now.

One more bug: with a Russian locale, there is a Perl "wide character" (or similar) error during the move-disk or migration process in the GUI. I think it is because of the month name (the date) at the beginning of each log line.
 

hawk128

New Member
May 22, 2017
One more possible issue:

I use one VM with Ubuntu and nginx as a unicast IPTV cache server. It works really well on Proxmox 4.4-5/c43015a5, but when I moved it to Proxmox 5.0-10/0d270679 the network freezes unpredictably. If I do link down / link up on the VM's network interface in the Proxmox GUI, it continues to work. The traffic is about 1 Gbit/s out and 350 Mbit/s in. Not a single error in any log (VM or host). Network and HDD are virtio.
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
Hello, we are currently testing Proxmox VE 5.0 Beta 1, and I am getting these messages almost every second when I run "ceph -w":
2017-05-22 15:21:14.184174 mon.0 [INF] mgrmap e224786: active: 0 standbys: 1, 2
2017-05-22 15:21:15.186787 mon.0 [INF] mgrmap e224787: active: 0 standbys: 1, 2
2017-05-22 15:21:16.195813 mon.0 [INF] mgrmap e224788: active: 0 standbys: 1, 2
2017-05-22 15:21:19.184151 mon.0 [INF] mgrmap e224789: active: 0 standbys: 1, 2
2017-05-22 15:21:20.186720 mon.0 [INF] mgrmap e224790: active: 0 standbys: 1, 2
2017-05-22 15:21:21.195066 mon.0 [INF] mgrmap e224791: active: 0 standbys: 1, 2


Unfortunately, I could not find any information about it on the Internet. Health is OK. What does it mean?
Ceph >= Kraken has a new service daemon called mgr (short for "manager"). It is intended to replace the old Calamari dashboard, and allows loading arbitrary Python "modules" to provide additional custom functionality. The current state in Luminous is rather sad - it gets started out of the box, but does not really work yet. The next iteration of our Ceph packages will probably no longer enable it out of the box until further improvements have happened.

The log messages just indicate that you have three mgr instances running (one per monitor), with mgr.0 being the currently active one.
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
It is not only enoX; the interfaces can have different names now.
Yes - PVE 5 supports ethX, enXYZ, and ibX (as physical interfaces). The statistics collection is currently missing the latter two.
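The difference between the three naming schemes can be sketched with a quick shell check. This classification is illustrative only - it is not the actual pattern pvestatd uses, just a way to see which interface names fall into which family:

```shell
# Classify interface names by the three physical naming schemes above
# (legacy eth*, predictable en*, InfiniBand ib*). Illustrative only -
# not the real pvestatd code.
classify() {
  case "$1" in
    eth[0-9]*) echo "$1: legacy kernel name (collected)" ;;
    en*)       echo "$1: predictable name, e.g. eno1/enp3s0 (missing from stats)" ;;
    ib[0-9]*)  echo "$1: InfiniBand (missing from stats)" ;;
    *)         echo "$1: not a physical NIC (bridge, bond, loopback, ...)" ;;
  esac
}

for ifc in eth0 eno1 enp3s0 ens18 ib0 vmbr0 lo; do
  classify "$ifc"
done
```

So a node whose NIC was renamed from eth0 to eno1 (or any other en*/ib* name) by systemd's predictable naming would silently drop out of the node traffic graphs.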

One more bug: with a Russian locale, there is a Perl "wide character" (or similar) error during the move-disk or migration process in the GUI. I think it is because of the month name (the date) at the beginning of each log line.
Could you file a bug for this? Please include the error messages, logs, and the output of "pveversion -v". Thanks.
 

BloodyIron

Member
Jan 14, 2013
Is what I'm asking too much? :(


Could we get the roadmap page to better spell out the advantages of the new elements in 5.0? When "Debian Stretch" or Linux kernel 4.10 are mentioned, I don't see how that impacts me, since I don't really have the time to do a side-by-side comparison between my Proxmox 4.4 setup and where 5.0 is going.

Also, please try to avoid "this, that, the other, etc." (especially the "etc." part). I would like plenty of info to sink my teeth into, but I also know it's not productive to go extremely granular either.

I've found past major upgrades to be worthwhile, but I don't really see why I should eventually move to 5.0 based on the current amount of info.
 

hawk128

New Member
May 22, 2017
Yes - PVE 5 supports ethX, enXYZ, and ibX (as physical interfaces). The statistics collection is currently missing the latter two.



Could you file a bug for this? Please include the error messages, logs, and the output of "pveversion -v". Thanks.
When I migrate a VM:


Wide character in print at /usr/share/perl5/PVE/AbstractMigrate.pm line 37.
май 24 01:12:57 starting migration of VM 103 to node 'pve3' (91.219.164.4)
Wide character in print at /usr/share/perl5/PVE/AbstractMigrate.pm line 37.
май 24 01:12:57 copying disk images
Wide character in print at /usr/share/perl5/PVE/AbstractMigrate.pm line 37.
май 24 01:12:57 starting VM 103 on remote node 'pve3'
Wide character in print at /usr/share/perl5/PVE/AbstractMigrate.pm line 37.
май 24 01:12:58 start failed: command '/usr/bin/kvm -id 103 -chardev 'socket,id=qmp,path=/var/run/qemu-server/103.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/103.pid -daemonize -smbios 'type=1,uuid=f02ea4a1-64aa-4f82-9d0e-480a8eb3d114' -name ititv -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga cirrus -vnc unix:/var/run/qemu-server/103.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 8192 -k en-us -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:92a0f07a8b93' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/mnt/pve/border-1T/images/103/vm-103-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap103i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=5A:48:C2:1D:07:F1,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -machine 'type=pc-i440fx-2.9' -incoming unix:/run/qemu-server/103.migrate -S' failed: exit code 1
Wide character in print at /usr/share/perl5/PVE/AbstractMigrate.pm line 37.
май 24 01:12:58 ERROR: online migrate failure - command '/usr/bin/ssh -o 'BatchMode=yes' root@91.219.164.4 qm start 103 --skiplock --migratedfrom pve2 --migration_type secure --stateuri unix --machine pc-i440fx-2.9' failed: exit code 255
Wide character in print at /usr/share/perl5/PVE/AbstractMigrate.pm line 37.
май 24 01:12:58 aborting phase 2 - cleanup resources
Wide character in print at /usr/share/perl5/PVE/AbstractMigrate.pm line 37.
май 24 01:12:58 migrate_cancel
Wide character in print at /usr/share/perl5/PVE/AbstractMigrate.pm line 37.
май 24 01:12:59 ERROR: migration finished with problems (duration 00:00:03)
TASK ERROR: migration problems





pveversion -v
proxmox-ve: 5.0-7 (running kernel: 4.10.8-1-pve)
pve-manager: 5.0-10 (running version: 5.0-10/0d270679)
pve-kernel-4.10.8-1-pve: 4.10.8-7
libpve-http-server-perl: 2.0-4
lvm2: 2.02.168-pve2
corosync: 2.4.2-pve2
libqb0: 1.0.1-1
pve-cluster: 5.0-7
qemu-server: 5.0-4
pve-firmware: 2.0-2
libpve-common-perl: 5.0-12
libpve-guest-common-perl: 2.0-1
libpve-access-control: 5.0-4
libpve-storage-perl: 5.0-3
pve-libspice-server1: 0.12.8-3
vncterm: 1.4-1
pve-docs: 5.0-1
pve-qemu-kvm: 2.9.0-1
pve-container: 2.0-6
pve-firewall: 3.0-1
pve-ha-manager: 2.0-1
ksm-control-daemon: not correctly installed
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-1
lxcfs: 2.0.7-pve1
criu: 2.11.1-1~bpo90
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1
openvswitch-switch: 2.7.0-2




Migration works even with this error.


However, I am much more worried about the last bug. Because of this freeze I cannot migrate completely to Proxmox 5.0. What can it be? And how can I catch it when there are no errors in the logs?
 

dcsapak

Proxmox Staff Member
Staff member
Feb 1, 2016
май 24 01:12:57 starting VM 103 on remote node 'pve3'
Wide character in print at /usr/share/perl5/PVE/AbstractMigrate.pm line 37.
май 24 01:12:58 start failed: command '/usr/bin/kvm -id 103 -chardev 'socket,id=qmp,path=/var/run/qemu-server/103.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/103.pid -daemonize -smbios 'type=1,uuid=f02ea4a1-64aa-4f82-9d0e-480a8eb3d114' -name ititv -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga cirrus -vnc unix:/var/run/qemu-server/103.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 8192 -k en-us -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:92a0f07a8b93' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/mnt/pve/border-1T/images/103/vm-103-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap103i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=5A:48:C2:1D:07:F1,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -machine 'type=pc-i440fx-2.9' -incoming unix:/run/qemu-server/103.migrate -S' failed: exit code 1
There should be a corresponding start task on the target node with a more specific error - can you post it?
 

hawk128

New Member
May 22, 2017
There should be a corresponding start task on the target node with a more specific error - can you post it?
I am not sure what you are talking about. This is the output of the migration task on the node.
Could you be more specific?
 

dcsapak

Proxmox Staff Member
Staff member
Feb 1, 2016
On the target node (the one you wanted to migrate to), there should also be a "start" task for this migration; it should contain more details on why it did not work.
 

hawk128

New Member
May 22, 2017
Oh, I see - I did not explain that well.
The migration itself works fine. There is only the error message: Wide character in print at /usr/share/perl5/PVE/AbstractMigrate.pm line 37.
I believe it appears because the dates in this log come from a system whose locale is not English.

So it is just a small issue - some rubbish in the logs.

About the second problem: it seems something is wrong with virtio net, as with e1000 it works fine so far, but it is still under test.
 

hawk128

New Member
May 22, 2017
Hi,

I still have real problems with virtio net in Proxmox 5.0 under load.

I tried E1000 - it works without this problem, but it takes much more CPU, as the traffic is about 1-1.2 Gbit/s.

With virtio, it occasionally stops transferring data, usually under almost full load.
I use a script which disconnects and reconnects the network on this VM, and that helps...

This VM worked well with virtio on Proxmox 4.
I also tried it on 3 different servers (all of them in one cluster now).

I also tried a Linux bond and an openvswitch one. The same result...

Do you have any ideas?

Yury.

proxmox-ve: 5.0-9 (running kernel: 4.10.11-1-pve)
pve-manager: 5.0-10 (running version: 5.0-10/0d270679)
pve-kernel-4.10.11-1-pve: 4.10.11-9
libpve-http-server-perl: 2.0-4
lvm2: 2.02.168-pve2
corosync: 2.4.2-pve2
libqb0: 1.0.1-1
pve-cluster: 5.0-7
qemu-server: 5.0-4
pve-firmware: 2.0-2
libpve-common-perl: 5.0-12
libpve-guest-common-perl: 2.0-1
libpve-access-control: 5.0-4
libpve-storage-perl: 5.0-3
pve-libspice-server1: 0.12.8-3
vncterm: 1.4-1
pve-docs: 5.0-1
pve-qemu-kvm: 2.9.0-1
pve-container: 2.0-6
pve-firewall: 3.0-1
pve-ha-manager: 2.0-1
ksm-control-daemon: not correctly installed
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-1
lxcfs: 2.0.7-pve1
criu: 2.11.1-1~bpo90
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1
openvswitch-switch: 2.7.0-2
 

wolfgang

Proxmox Staff Member
Staff member
Oct 1, 2014
The beta phase is over - please update to the stable release, and if the problem still exists, open a new thread.
 
