> Is it still possible to use this 2+1 node setup with the new PVE 4.0 HA DRBD cluster?

Yes, but DRBD9 with drbdmanage has more features and is not 100% compatible, so you need to reinstall.
> When taking snapshots, the hover text pretends to allow valid names like this:

This will be fixed in the next upload.
> The new kernel does not include firmware for bnx2 (see screenshot). After dist-upgrade + reboot, no network driver was available; we had to install it from a USB drive. Why was the firmware not included?

The firmware is contained in the pve-firmware package.
> The firmware is contained in the pve-firmware package.

Hi, yes, but if I follow the Proxmox upgrade tutorial, I must first remove the old version of this package and reinstall it after the reboot. At that point I cannot install it, because the driver was removed and I have no internet access. I also had to install the bnx2 driver from a USB stick before I could continue with the next step of the upgrade tutorial. Do you understand what I mean?
time vzdump 100 --dumpdir /backups/ --stdexcludes --exclude-path "/var/www/.+" --exclude-path "/opt/.+" --exclude-path "/root/.+" --exclude-path "/var/vmail/.+" --compress 1
> In V4.0, if node1 goes down suddenly (power unplugged, motherboard failure, or just powered off), do the VMs move to node2 automatically?

Not specific to 4.0, but if the VM is under (working) HA mode, it will be "restarted" on node2, not "moved" to it.
what mean "restarted on" ?not specific of 4.0, but if the vm is under (working) HA mode, in that case it will be "restarted" on, not "moved" to, the node2
If node1 goes down suddenly, nothing can be "moved" from there. To have business continuity, you need a cluster at the software level between two VMs on different nodes (or on real hardware).
Marco
> Is the --exclude-path option available for LXC on PVE 4?

Yes, but the syntax changed from emacs-style regexes to shell glob patterns.
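For example, the regex-style excludes from the command above would have to be rewritten as globs. A sketch, assuming "*" is the glob counterpart of the old ".+" (check the vzdump man page on your version for the exact semantics):

time vzdump 100 --dumpdir /backups/ --stdexcludes \
  --exclude-path "/var/www/*" --exclude-path "/opt/*" \
  --exclude-path "/root/*" --exclude-path "/var/vmail/*" \
  --compress 1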
> What does "restarted on" mean?

The term "move" implies that the VM keeps all of its running state and does not miss a beat; that is what happens during an online migration. When a node goes down, however, there is no state (memory) left to move, so all you can do is start the VM on the other node using the same disks.
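For reference, a minimal sketch of putting a VM under HA so it gets restarted elsewhere (VMID 100 is just an example; see "man ha-manager" on your version for the exact options):

ha-manager add vm:100   # register VM 100 as an HA resource
ha-manager status       # verify the resource is now managed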
If node1 loses power and all of its VMs are under HA, will the VMs keep running from node2?
# lspci | egrep -i --color 'network|ethernet'
01:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe
01:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe
02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe
02:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe
# modinfo bnx2
filename: /lib/modules/2.6.32-40-pve/kernel/drivers/net/bnx2.ko
version: 2.2.5lr
license: GPL
description: QLogic BCM5706/5708/5709/5716 Driver
root@proxmox:~# lspci | egrep -i --color 'network|ethernet'
03:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe
03:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe
root@proxmox:~# modinfo bnx2
filename: /lib/modules/4.2.2-1-pve/kernel/drivers/net/ethernet/broadcom/bnx2.ko
firmware: bnx2/bnx2-rv2p-09ax-6.0.17.fw
firmware: bnx2/bnx2-rv2p-09-6.0.17.fw
firmware: bnx2/bnx2-mips-09-6.2.1b.fw
firmware: bnx2/bnx2-rv2p-06-6.0.15.fw
firmware: bnx2/bnx2-mips-06-6.2.3.fw
version: 2.2.6
license: GPL
description: QLogic BCM5706/5708/5709/5716 Driver
author: Michael Chan <mchan@broadcom.com>
srcversion: C74F867E8C9B508113772A6
alias: pci:v000014E4d0000163Csv*sd*bc*sc*i*
alias: pci:v000014E4d0000163Bsv*sd*bc*sc*i*
alias: pci:v000014E4d0000163Asv*sd*bc*sc*i*
alias: pci:v000014E4d00001639sv*sd*bc*sc*i*
alias: pci:v000014E4d000016ACsv*sd*bc*sc*i*
alias: pci:v000014E4d000016AAsv*sd*bc*sc*i*
alias: pci:v000014E4d000016AAsv0000103Csd00003102bc*sc*i*
alias: pci:v000014E4d0000164Csv*sd*bc*sc*i*
alias: pci:v000014E4d0000164Asv*sd*bc*sc*i*
alias: pci:v000014E4d0000164Asv0000103Csd00003106bc*sc*i*
alias: pci:v000014E4d0000164Asv0000103Csd00003101bc*sc*i*
depends:
intree: Y
vermagic: 4.2.2-1-pve SMP mod_unload modversions
parm: disable_msi:Disable Message Signaled Interrupt (MSI) (int)
root@proxmox:~# pveversion -v
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-48 (running version: 4.0-48/0d8559d0)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-22
qemu-server: 4.0-30
pve-firmware: 1.1-7
libpve-common-perl: 4.0-29
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-25
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-9
pve-container: 1.0-6
pve-firewall: 2.0-12
pve-ha-manager: 1.0-9
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie
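One way to check whether the firmware files the new bnx2 module wants (listed in the modinfo output above) are actually present, assuming pve-firmware is the package that ships them:

ls /lib/firmware/bnx2/
dpkg -S /lib/firmware/bnx2/bnx2-mips-09-6.2.1b.fw   # should name the providing package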
> Hi, yes, but if I follow the Proxmox upgrade tutorial, I must first remove the old version of this package and reinstall it after the reboot. [...]

I also ran into the same issue on a Dell R900 server. After dist-upgrade and a reboot, I didn't have network access to complete the upgrade. Thank goodness for backups; I just reinstalled 3.4 and restored.
Regards
If you install kernel 4.2.2 from our repo, also install the pve-firmware package to get the firmware files.
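If the host will lose networking during the upgrade, one workaround is to fetch the package beforehand and install it offline via the USB stick mentioned above. A sketch using standard Debian tooling:

apt-get download pve-firmware   # run while the host still has internet access
dpkg -i pve-firmware_*.deb      # run after the reboot, from the copied .deb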
[234643.178758] e1000e 0000:00:19.0 eth1: Detected Hardware Unit Hang:
  TDH                  <cf>
  TDT                  <3e>
  next_to_use          <3e>
  next_to_clean        <cf>
buffer_info[next_to_clean]:
  time_stamp           <1037ed086>
  next_to_watch        <d3>
  jiffies              <1037ed2f6>
  next_to_watch.status <0>
MAC Status             <40080083>
PHY Status             <796d>
PHY 1000BASE-T Status  <7c00>
PHY Extended Status    <3000>
PCI Status             <10>
[234648.173491] e1000e 0000:00:19.0 eth1: Reset adapter unexpectedly
[234648.174042] vmbr0: port 2(eth1) entered disabled state
[234651.640063] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
ethtool -K eth1 tso off
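To make this workaround survive reboots, one common approach is a post-up hook in /etc/network/interfaces. A sketch, assuming vmbr0 bridges eth1 as in the log above; keep your existing address and bridge settings:

iface vmbr0 inet static
        # ... existing address and bridge_ports settings ...
        post-up ethtool -K eth1 tso off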