Proxmox VE 4.0 released!

Hi,
I currently have a 3.4 HA cluster: two "big" servers with DRBD and one "small" server (an old notebook, OS only, no VMs) for quorum.
Is it still possible to use this 2+1 node setup with the new PVE 4.0 HA DRBD cluster?
Thanks.
 
The new kernel does not include the firmware for bnx2 (see screenshot). After dist-upgrade + reboot, no network driver was available; we had to install it from a USB drive. Why was the firmware not included?

Firmware is contained in pve-firmware package.
 
Firmware is contained in pve-firmware package.

Hi, yes, but if I follow the Proxmox upgrade tutorial, I first have to remove the old version of this package, and only after the reboot can I install it again. But at that point I cannot install it: the driver was removed and I have no internet access. So I have to install the bnx2 driver from a USB stick before I can continue with the next step of the upgrade tutorial. Do you understand what I mean?

regards
 
In v4.0:
if node1 goes down suddenly (power unplugged, motherboard failure, or simply powered off), do the VMs move to node2 automatically?
 
Is the --exclude-path option available for LXC in PMOX 4?

Example of my usage:
Code:
time  vzdump 100 --dumpdir /backups/ --stdexcludes --exclude-path  "/var/www/.+"  --exclude-path "/opt/.+" --exclude-path "/root/.+"  --exclude-path "/var/vmail/.+" --compress 1

I make intensive use of this option: I take a small dump of each container ("system-only" snapshots) and back up the volatile data with rsync.

This saves me a lot of time and storage space.

This option matters a lot for migrations: when I installed PMOX 4 for a test, every file was now inside a single image file.

Any plan to keep this feature on?

Thanks.
 
In v4.0:
if node1 goes down suddenly (power unplugged, motherboard failure, or simply powered off), do the VMs move to node2 automatically?

This is not specific to 4.0, but if the VM is under (working) HA, it will be "restarted" on node2, not "moved" to it.
If node1 goes down suddenly, nothing can be "moved" from it. For real business continuity you need a cluster at the software level between two VMs on different nodes (or on real hardware).

Marco
 
This is not specific to 4.0, but if the VM is under (working) HA, it will be "restarted" on node2, not "moved" to it.
If node1 goes down suddenly, nothing can be "moved" from it. For real business continuity you need a cluster at the software level between two VMs on different nodes (or on real hardware).

Marco

What does "restarted on" mean?
If node1 loses power and all its VMs are in HA, will the VMs still run, from node2?
 
What does "restarted on" mean?
If node1 loses power and all its VMs are in HA, will the VMs still run, from node2?

The term you used, "move", implies that the VM as it is running will keep all of its running state and not miss a beat. When you do an online migrate, this is what happens. When a node goes down, however, there is no state (memory) to move so all you can do is start the VM on the other node using the same disks.
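To make the restart behaviour concrete, in PVE 4.x a VM is put under HA management with the ha-manager CLI. VMID 100 here is just an example:

```shell
# Declare VM 100 (example VMID) as an HA resource. On node failure
# the HA stack fences the dead node and cold-starts the VM on
# another node -- there is no running memory state to carry over.
ha-manager add vm:100

# Inspect the current state of HA resources and nodes.
ha-manager status
```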
 
Any answer to the question above?

I have a cluster of Dell PowerEdge R620 and R630 servers with Broadcom interfaces:
Code:
# lspci | egrep -i --color 'network|ethernet'
01:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe
01:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe
02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe
02:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe

And I think they use the bnx2 kernel module. It would be much more difficult to upgrade to PVE 4.0 without network; even if you can install the driver manually, it would be painful.

In PVE 3.4, the driver seems to be included in the kernel:
Code:
# modinfo bnx2
filename:       /lib/modules/2.6.32-40-pve/kernel/drivers/net/bnx2.ko
version:        2.2.5lr
license:        GPL
description:    QLogic BCM5706/5708/5709/5716 Driver
 
Works fine here on a fresh install

Code:
root@proxmox:~# lspci | egrep -i --color 'network|ethernet'
03:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe
03:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe

Code:
root@proxmox:~# modinfo bnx2
filename:       /lib/modules/4.2.2-1-pve/kernel/drivers/net/ethernet/broadcom/bnx2.ko
firmware:       bnx2/bnx2-rv2p-09ax-6.0.17.fw
firmware:       bnx2/bnx2-rv2p-09-6.0.17.fw
firmware:       bnx2/bnx2-mips-09-6.2.1b.fw
firmware:       bnx2/bnx2-rv2p-06-6.0.15.fw
firmware:       bnx2/bnx2-mips-06-6.2.3.fw
version:        2.2.6
license:        GPL
description:    QLogic BCM5706/5708/5709/5716 Driver
author:         Michael Chan <mchan@broadcom.com>
srcversion:     C74F867E8C9B508113772A6
alias:          pci:v000014E4d0000163Csv*sd*bc*sc*i*
alias:          pci:v000014E4d0000163Bsv*sd*bc*sc*i*
alias:          pci:v000014E4d0000163Asv*sd*bc*sc*i*
alias:          pci:v000014E4d00001639sv*sd*bc*sc*i*
alias:          pci:v000014E4d000016ACsv*sd*bc*sc*i*
alias:          pci:v000014E4d000016AAsv*sd*bc*sc*i*
alias:          pci:v000014E4d000016AAsv0000103Csd00003102bc*sc*i*
alias:          pci:v000014E4d0000164Csv*sd*bc*sc*i*
alias:          pci:v000014E4d0000164Asv*sd*bc*sc*i*
alias:          pci:v000014E4d0000164Asv0000103Csd00003106bc*sc*i*
alias:          pci:v000014E4d0000164Asv0000103Csd00003101bc*sc*i*
depends:       
intree:         Y
vermagic:       4.2.2-1-pve SMP mod_unload modversions
parm:           disable_msi:Disable Message Signaled Interrupt (MSI) (int)
Code:
root@proxmox:~# pveversion -v
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-48 (running version: 4.0-48/0d8559d0)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-22
qemu-server: 4.0-30
pve-firmware: 1.1-7
libpve-common-perl: 4.0-29
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-25
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-9
pve-container: 1.0-6
pve-firewall: 2.0-12
pve-ha-manager: 1.0-9
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie
 
Hi Mortph027,

Thanks for the detailed information. I feel reassured that this will not be a problem when migrating. I still have a small doubt, though, since you did a fresh install and I plan to migrate from 3.4. So the first step is to upgrade to Jessie, and then to install (apt-get install) the 4.2.2 kernel. I therefore also have to verify that bnx2 is present in the Jessie 3.16 kernel, as I will need the network to install the 4.2.2 kernel...

Edit: I just verified on a standard Jessie installation, and it does seem so:
Code:
# uname -a
Linux --- 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u4 (2015-09-19) x86_64 GNU/Linux
# modinfo bnx2
filename:       /lib/modules/3.16.0-4-amd64/kernel/drivers/net/ethernet/broadcom/bnx2.ko
firmware:       bnx2/bnx2-rv2p-09ax-6.0.17.fw
firmware:       bnx2/bnx2-rv2p-09-6.0.17.fw
firmware:       bnx2/bnx2-mips-09-6.2.1b.fw
firmware:       bnx2/bnx2-rv2p-06-6.0.15.fw
firmware:       bnx2/bnx2-mips-06-6.2.3.fw
version:        2.2.5
license:        GPL
 
If you install kernel 4.2.2 from our repo, also install the pve-firmware package to get the firmware files.
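On the box itself that could be done in a single transaction (package names taken from the pveversion output earlier in the thread), so the firmware is on disk before the reboot:

```shell
# Install the 4.2.2 kernel and its firmware package together, so the
# machine never reboots into a kernel without the bnx2 firmware files.
apt-get update
apt-get install pve-kernel-4.2.2-1-pve pve-firmware
```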
 
Hi, yes, but if I follow the Proxmox upgrade tutorial, I first have to remove the old version of this package, and only after the reboot can I install it again. But at that point I cannot install it: the driver was removed and I have no internet access. So I have to install the bnx2 driver from a USB stick before I can continue with the next step of the upgrade tutorial. Do you understand what I mean?

regards

I also ran into the same issue on a Dell R900 server. After dist-upgrade and a reboot, I didn't have network access to complete the upgrade. Thank goodness for backups. I just reinstalled 3.4 and restored.

 
Encountering the next little problem. dmesg is full of these

Code:
[234643.178758] e1000e 0000:00:19.0 eth1: Detected Hardware Unit Hang:
  TDH                  <cf>
  TDT                  <3e>
  next_to_use          <3e>
  next_to_clean        <cf>
buffer_info[next_to_clean]:
  time_stamp           <1037ed086>
  next_to_watch        <d3>
  jiffies              <1037ed2f6>
  next_to_watch.status <0>
MAC Status             <40080083>
PHY Status             <796d>
PHY 1000BASE-T Status  <7c00>
PHY Extended Status    <3000>
PCI Status             <10>

followed by

Code:
[234648.173491] e1000e 0000:00:19.0 eth1: Reset adapter unexpectedly
[234648.174042] vmbr0: port 2(eth1) entered disabled state
[234651.640063] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx

There is a Windows terminal server running, and all sessions are being disrupted massively.

According to this piece, I'll try

Code:
ethtool -K eth1 tso off

I'll let you know if this helps...
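If the TSO workaround holds, one way to make it survive reboots (assuming classic ifupdown, with the interface name taken from the log above) is a post-up hook in /etc/network/interfaces:

```
# /etc/network/interfaces (fragment) -- re-apply the workaround at ifup
iface eth1 inet manual
        post-up ethtool -K eth1 tso off
```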
 
