Just attempted to patch an older testlab PVE 3.4 install to the latest patch levels.
The upgrade pulled in a newer kernel, pve-kernel-2.6.32-48-pve. When booting on this kernel our Open vSwitch configuration looked fine, but we could not get any traffic in/out through the bonded NIC plugged into our single OVS bridge vmbr1, and thus had no access to the ceph cluster and no live migration.
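For anyone wanting to compare, this is roughly how we looked at the bond state under the broken kernel; just a sketch, the bond port name is whatever you configured under vmbr1:

# ovs-vsctl show          (bridge/bond/port layout as OVS sees it)
# ovs-appctl bond/show    (slave status for all OVS bonds)
# ip -s link              (kernel-side packet counters on the physical NICs)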
Reverting to booting pve-kernel-2.6.32-46-pve made everything work fine again though...
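In case someone else gets bitten by -48, this is how we pinned GRUB back to the -46 entry; only a sketch, the menu index below is an assumption so count the entries in your own /boot/grub/grub.cfg first:

# grep ^menuentry /boot/grub/grub.cfg    (find the position of the 2.6.32-46-pve entry)
# nano /etc/default/grub                 (set GRUB_DEFAULT to that index, e.g. GRUB_DEFAULT=2)
# update-grub
# reboot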
The currently working patch levels are:
# pveversion -v
proxmox-ve-2.6.32: not correctly installed (running kernel: 2.6.32-46-pve)
pve-manager: 3.4-16 (running version: 3.4-16/40ccc11c)
pve-kernel-2.6.32-46-pve: 2.6.32-177
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-20
qemu-server: 3.4-9
pve-firmware: 1.1-6
libpve-common-perl: 3.0-27
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-35
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-28
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
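After booting back into -46 we checked that the bond was passing traffic again before relying on live migration; again only a sketch, substitute your own ceph monitor address:

# uname -r                  (should report 2.6.32-46-pve)
# ovs-appctl bond/show      (all slaves enabled again)
# ping -c 3 <ceph-mon-ip>   (reachability over the vmbr1 bond)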