Ran apt-get update && apt-get dist-upgrade this morning: bad luck!
Here is my pveversion -v:
[root@SV-VR-HB-KVM6-ENC1-ZS:~]# pveversion -v
proxmox-ve: 5.4-1 (running kernel: 4.15.18-11-pve)
pve-manager: 5.4-4 (running version: 5.4-4/97a96833)
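In case it helps someone else, a minimal sketch of falling back to the previous kernel after a bad dist-upgrade (the metapackage name below is an assumption, check what dpkg actually reports):

dpkg -l | grep pve-kernel        # list the PVE kernels still installed
# reboot and pick the older kernel under "Advanced options" in the GRUB menu, then optionally:
apt-mark hold pve-kernel-4.15    # assumed metapackage name; keeps apt from pulling a newer kernel for now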
We're evaluating the Dell VRTX as a platform for a Proxmox cluster.
Problems with the shared PERC were reported some years ago (2013).
Do you use a Dell VRTX in a Proxmox environment with current 5.2-x versions?
Can you use the shared PERC correctly (shared disks)?
Are there any negative points...
The problem could be related to the igb driver, as in https://forum.proxmox.com/threads/igb-driver-on-latest-kernel-4-15-17-3-pve-net-connections-over-jumbo-frames-anomalies.44555/.
We have MTU 9000 on the dedicated iSCSI NICs.
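A quick way to check that jumbo frames really pass end to end on those NICs (interface name and SAN address below are examples only):

ip link show eth2 | grep mtu        # assumed iSCSI NIC, should report mtu 9000
ping -M do -s 8972 192.168.60.10    # don't-fragment ping; 8972 = 9000 minus 28 bytes of IP/ICMP headers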
The latest kernel does NOT boot cleanly on PVE 5.2 with iSCSI storages.
It boots without any problem on pve-kernel-4.15.17-2-pve.
pve-kernel-4.15.17-3-pve takes ages to boot and does NOT reconnect the iSCSI targets.
root@px3-c:~# pveversion -v
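Until this is sorted out, a possible workaround (just a sketch, assuming pve-kernel-4.15.17-2-pve is still installed) is to boot the known-good kernel and then check the sessions:

grep -n 'menuentry ' /boot/grub/grub.cfg    # find the 4.15.17-2-pve entry to select under "Advanced options"
iscsiadm -m session                         # after boot, all configured targets should be listed again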
(Tell me if Linbit is a better place to talk about this kind of problem.)
A new 2-node cluster with Linbit drbd9 is running now.
PVE 5.2, up to date.
We started VMs and moved many disks to the DRBD storage. No problem.
Some disks do NOT want to move. Same format (qcow2), on the same external USB...
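For reference, the "Move disk" we use in the GUI corresponds to roughly this on the CLI (a sketch only; VMID, disk and storage ID are examples):

qm move_disk 101 scsi0 drbdpool    # example VMID / disk / target storage; add --delete to drop the source image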
Same problem here, with:
- 3 nodes cluster (pve-manager/5.1-46/ae8241d4 (running kernel: 4.13.13-3-pve))
- 4 x 1GbE NICs per node,
- 2 x Synology RS3617RPxs (HA setup).
- iSCSI (4 paths)
Tons of logs like yours; everything seems to be working fine, but I recently noticed a 6 times slower...
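When it slows down like that, it is worth checking whether all four paths are really active (a sketch, device names will differ):

multipath -ll                      # each LUN should show all 4 paths as active/ready
iscsiadm -m session -P 3 | less    # per-session state, bound iface and negotiated parameters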
pveversion below, 3-node cluster.
We need to stop an iSCSI SAN (OpenMediaVault) for maintenance.
Before that:
- we moved all VM disks to another storage (OK),
- we unchecked "enable" in the window of the LVM storage based on the iSCSI (OK),
- we unchecked "enable" in the iSCSI storage window (OK).
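For what it's worth, the same two unchecks can be done from the CLI with pvesm (the storage IDs below are made up):

pvesm set lvm-on-omv --disable 1    # assumed ID of the LVM storage on top of the iSCSI LUN
pvesm set omv-iscsi --disable 1     # assumed ID of the iSCSI storage itself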
Things are getting better nowadays: the latest PVE 5.1 can run Win2012R2 with the Hyper-V role, and an L2 VM can start in 2012R2. Didn't try W2016 yet.
As an L2 VM, a minimal Debian 8 installs and boots without problems (didn't try networking yet).
Are kernel 4.10.x and qemu-kvm 2.9 required?
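Not sure about the exact minimum versions, but the usual nested-virtualization prerequisites on an Intel host look like this (a sketch; the VMID is an example):

echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf    # allow nested VMX
modprobe -r kvm_intel && modprobe kvm_intel                           # reload the module with all VMs stopped
cat /sys/module/kvm_intel/parameters/nested                           # should now print Y
qm set 100 --cpu host                                                 # example VMID; the L1 guest needs the host CPU flags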
Trying to free a node for maintenance, I need to move all VMs to other nodes.
One VM has its HDD on a local storage, and a snapshot.
Trying to migrate the VM gave an error: one disk is local.
So I moved its HDD to a shared storage: no problem.
Then migrated the VM: error! There is a...
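Roughly the equivalent of what I'm doing on the CLI (a sketch; VMID, disk, storage, snapshot and node names are examples):

qm move_disk 205 virtio0 shared-nfs    # example: move the local disk to a shared storage first
qm delsnapshot 205 pre-update          # example snapshot name; an old snapshot may still reference the local storage
qm migrate 205 px2 --online            # example target node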
Same thing here, but on one (and only one) node in a cluster of 3!
Failed when connecting: Failed to connect to server ( (code: 1006)) app.js:8037:21
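Not a confirmed fix, but on the affected node it may be worth checking pveproxy and the node certificate, since the noVNC websocket goes through it:

systemctl status pveproxy      # the console traffic is proxied by this service
pvecm updatecerts --force      # regenerate/redistribute the node certificates
systemctl restart pveproxy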