>> Oops: 0010 [#4] SMP PTI
maybe related to meltdown protection?
can you try to add the nopti kernel option in /etc/default/grub,
then run update-grub (as root)
and reboot?
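A minimal sketch of the grub change, assuming a stock GRUB_CMDLINE_LINUX_DEFAULT (keep whatever other options you already have on that line):

```shell
# /etc/default/grub -- append nopti to the kernel command line
# ("quiet" is an assumption; preserve your existing options)
GRUB_CMDLINE_LINUX_DEFAULT="quiet nopti"
```

After editing, run update-grub and reboot. Note that nopti disables the meltdown page-table-isolation mitigation, so this is for debugging the oops only, not a permanent setting.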
ocfs2 is not dead (there are a lot of commits on the dev mailing list), but it's full of bugs. (I'm using it in production in some vms, and since kernel > 3.16 I get a lot of random crashes.)
for vm hosting, it's better to use shared lvm.
Sorry, I'm quite busy with spectre and meltdown currently; I'll work on them next month (on the latest proxmox 5.1).
Edit: my previous patches for proxmox4 were posted last year here:
https://pve.proxmox.com/pipermail/pve-devel/2017-February/025441.html
proxmox version ? guest os ?
there are known bugs with old qemu and old guest kernels too, where the virtio nic doesn't send a gratuitous arp after live migration.
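One hedged workaround for the missing gratuitous arp, run from inside the guest right after migration (interface name and address are assumptions; this uses iputils arping):

```shell
# force a few unsolicited (gratuitous) arp replies so switches/neighbors
# refresh their arp tables; adjust eth0 and the ip to your guest
arping -U -c 3 -I eth0 192.168.0.50
```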
cpumodel=kvm64 protects you against spectre in your vm, but not meltdown (you need to patch your guest kernel).
the host kernel needs to be updated to prevent one vm from accessing the memory of another vm.
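To check what a kernel actually mitigates, recent kernels (4.15+) expose the status in sysfs; on older kernels the directory simply doesn't exist:

```shell
# prints one status line per vulnerability (meltdown, spectre_v1, spectre_v2);
# prints nothing on kernels that predate these sysfs entries
grep -r . /sys/devices/system/cpu/vulnerabilities/ 2>/dev/null || true
```

Run it on both the host and inside the guest to see which side is still unpatched.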
It allows format conversion too (--format qcow2).
qm importdisk <vmid> <source> <storage> [OPTIONS]
Import an external disk image as an unused disk in a VM. The image format has to be supported by qemu-img(1).
<vmid>: <integer> (1 - N)
The (unique) ID of...
4/5/6 : proxmox 5 has "qm importdisk <vmid> <source> <storage>"
(import any disk format to any supported proxmox storage)
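A hypothetical invocation (the vmid, source path, and storage name are assumptions; the target storage must support qcow2, e.g. a directory storage, for the --format option to apply):

```shell
# import a vmdk as a disk of VM 100 on the "local" directory storage,
# converting it to qcow2 on the fly
qm importdisk 100 /root/win2008.vmdk local --format qcow2
```

The imported disk should then show up as an unused disk in the VM config, from where you can attach it.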
Have you tested enabling ovmf to get uefi support? (instead of converting everything to mbr)
strange, mds failover is automatic for me without any tuning
my ceph -w output:
"mds: cephfs-1/1/1 up {0=myhost1.lan=up:active}, 2 up:standby"
Note the 2 standby nodes.
are you sure that all mds daemons are running on your cluster?
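A few quick checks (run on a node with a ceph admin keyring; the systemd unit name is an assumption based on stock ceph packaging):

```shell
ceph mds stat                              # summary: active + standby mds count
ceph fs status                             # per-filesystem view (recent ceph releases)
systemctl status ceph-mds@$(hostname -s)   # on each node that should run an mds
```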