Proxmox VE 5.1 released!

So apt update && apt upgrade and I'm done?
No need to reboot, no need to stop/restart all VMs/CTs?

Please follow the upgrade docs, see first post of this thread.
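
In short, the documented sequence for a point release is roughly the following (a sketch only; the upgrade notes in the first post are authoritative, and a newly installed kernel only takes effect after a reboot):

Code:
apt update
apt dist-upgrade    # preferred over plain 'apt upgrade' so new dependencies get pulled in
reboot              # required if a new kernel (or other low-level packages) came in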
 
Good morning my friends,
my sources.list is:
Code:
deb http://ftp.it.debian.org/debian stretch main contrib

# security updates
deb http://security.debian.org stretch/updates main contrib

##Proxmox VE pvetest
deb http://download.proxmox.com/debian/pve stretch pvetest

#Proxmox Ceph test
deb http://download.proxmox.com/debian/ceph-luminous stretch test

For updating to 5.1 with the no-subscription repository, is this right?

Code:
deb http://ftp.debian.org/debian stretch main contrib

# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

# security updates
deb http://security.debian.org stretch/updates main contrib

Is it correct to remove the Ceph line? I read that Ceph is now integrated into the PVE repository, but I'm not sure how to upgrade from pvetest 5.0 + Ceph Luminous to PVE 5.1 no-subscription with Ceph.
Thanks!

We provide our own Ceph repository, but the packages are not contained in the regular PVE repositories. If you run "pveceph install", the repository will be configured in "/etc/apt/sources.list.d/ceph.list" and the packages will be installed.
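
For reference, on a PVE 5.x / Luminous setup the generated file usually ends up pointing at the main Ceph Luminous repository, roughly like this (the exact content is an example and may differ, e.g. if you keep a test repository configured):

Code:
# /etc/apt/sources.list.d/ceph.list (typical result of "pveceph install")
deb http://download.proxmox.com/debian/ceph-luminous stretch main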
 
After upgrading from 5.0 to 5.1 I cannot start any VM...
Error message:

Code:
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=e9993974-746e-47f6-bd69-201afa6752a7' -name vm05.i -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/100.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 2048 -k pt-br -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -chardev 'socket,path=/var/run/qemu-server/100.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:9a26b53c4e90' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/dev/pve/vm-100-disk-1,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=3A:65:31:34:37:35,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1

I tried to modprobe kvm-intel without success:

Code:
root@vmdes02:~# modprobe kvm-intel
modprobe: ERROR: could not insert 'kvm_intel': Input/output error
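
Generic commands to gather more detail on why the module refuses to load (a sketch, not specific to this box):

Code:
dmesg | grep -iE 'kvm|vmx'    # kernel messages explaining the modprobe failure
grep -c vmx /proc/cpuinfo     # should be > 0 if VT-x is exposed to the OS
lsmod | grep kvm              # check whether kvm / kvm_intel are loaded at all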




I also have the exact same problem as this, and I'm also having the ZFS problem after the latest update. I ran lscpu; here is my hardware output:

Code:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 15
Model name:            Intel(R) Xeon(R) CPU 5130 @ 2.00GHz
Stepping:              6
CPU MHz:               1995.061
BogoMIPS:              3990.12
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              4096K
NUMA node0 CPU(s):     0,1
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow dtherm
 
Does anybody here experience high CPU load and CPU %wa time when performing "qm migrate VMID NodeHostname --online --with-local-disks" on Proxmox 5.1 ZFS 0.7.2?

I ran some online migrations with local disks after the upgrade and found the CPU IO wait time reaching 75-90% and the CPU load climbing to 70 on an 8-thread CPU with a Samsung SSD 850 Pro 512GB ZFS RAID1 array.

Using manual zfs snapshot, send, and receive doesn't cause the high CPU IO wait time and load issue, though.
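
A minimal sketch of the manual transfer I'm comparing against, with the dataset name and target host as placeholders:

Code:
zfs snapshot rpool/data/vm-100-disk-1@migrate
zfs send rpool/data/vm-100-disk-1@migrate | ssh target-node zfs receive rpool/data/vm-100-disk-1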
 
Hello,

I have done a fresh installation on an old server (Dell PowerEdge 2950) and I have the same KVM issue (VMs do not start).

Virtualization is activated in the BIOS but I can't load the module. This is lscpu:

Code:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             2
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 15
Model name:            Intel(R) Xeon(R) CPU           E5320  @ 1.86GHz
Stepping:              7
CPU MHz:               1861.875
CPU max MHz:           1867.0000
CPU min MHz:           1600.0000
BogoMIPS:              3723.75
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              4096K
NUMA node0 CPU(s):     0-7
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow dtherm
 
Please check the linked thread with a patched test kernel.
 
To help troubleshoot upgrade issues, we built a 4.10-based kernel including ZFS 0.7.3, available on pvetest. You need to manually download and install it, as no meta-package pulls it in automatically:
http://download.proxmox.com/debian/...pve-kernel-4.10.17-5-pve_4.10.17-25_amd64.deb

Code:
MD5:
1e511994999244e47b8e5a1fcce82cee  pve-kernel-4.10.17-5-pve_4.10.17-25_amd64.deb
SHA256:
5b903b467445bb9ae8fd941dfebf5ad37e8f979df08a9257dd087f4be718fb20  pve-kernel-4.10.17-5-pve_4.10.17-25_amd64.deb

This 4.10 kernel will likely be the last 4.10-based kernel built, and is intended for troubleshooting purposes only (i.e., to find out whether boot-related issues are caused by the switch from 4.10 to 4.13 or from ZFS 0.6.5 to 0.7!).
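
A minimal sketch of the manual verify-and-install step, assuming the .deb has already been downloaded from the link above into the current directory:

Code:
sha256sum pve-kernel-4.10.17-5-pve_4.10.17-25_amd64.deb   # compare with the SHA256 listed above
dpkg -i pve-kernel-4.10.17-5-pve_4.10.17-25_amd64.deb
# reboot and select the 4.10.17-5-pve entry in the boot menu to test with this kernel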

A 4.13.4 kernel with ZFS 0.7.3 is available in pvetest as well (pulled in automatically on upgrading if you have pvetest enabled). The ZFS userspace packages in pvetest are also updated to 0.7.3, so make sure to upgrade those too when testing either of the updated kernels.
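
A quick way to check after the upgrade that the kernel module and the userspace tools are both at 0.7.3 (generic commands, not specific to pvetest):

Code:
dpkg -l | grep -E 'zfs|spl'     # userspace / initramfs package versions
modinfo zfs | grep -iw version  # version of the installed kernel module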
 
