Search results

  1. ceph locked rbd after proxmox node crash

    Everything is working fine except this lock problem when a node crashes. So yes, live migration works perfectly. The Ceph permissions are: client.pxmx1 key: AQDPEGxcA+fJGxAAljqFfiQMrFthiFqic0JWEw== caps: [mon] allow r caps: [osd] allow class-read object_prefix rbd_children...
  2. ceph locked rbd after proxmox node crash

    ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable) Yes root@ceph-am7-1:~# rbd info --pool c7000-pxmx1-am7 vm-201-disk-0 rbd image 'vm-201-disk-0': size 15GiB in 3840 objects order 22 (4MiB objects) block_name_prefix: rbd_data.1ec76b8b4567...
  3. ceph locked rbd after proxmox node crash

    If you want to take a look at the ceph.conf file of the Ceph nodes: root@ceph-am7-1:~# cat /etc/ceph/ceph.conf [global] fsid = fe4cccf5-89cb-4922-88a3-7525bf676581 mon_initial_members = ceph-am7-1, ceph-am7-2, ceph-am7-3 mon_host = 172.18.7.51,172.18.7.52,172.18.7.53 auth_cluster_required =...
  4. ceph locked rbd after proxmox node crash

    Hi Alwin, the Ceph storage is not on the Proxmox nodes, so there is no ceph.conf on them. The storage is defined in /etc/pve/storage.cfg, which looks like this: rbd: ceph_pxmx1 content images krbd 0 monhost 172.18.7.51 172.18.7.52 172.18.7.53 pool...
  5. ceph locked rbd after proxmox node crash

    Hello, I'm using a 3-node Proxmox cluster (5.3.9), connected to a remote Ceph cluster via a dedicated 10G network. Everything works fine and it's very reliable, but a problem occurs when a Proxmox node crashes. Proxmox's HA moves the VMs from the node that crashed to other nodes and starts the VMs...
  6. ocfs2 kernel bug

    Hello, running 4.4.98-6-pve does not seem to fix the problem. Regards
  7. ocfs2 kernel bug

    Hello, any news on this topic? Fabian? Regards
  8. ocfs2 kernel bug

    No it's not; the mailing list and developers are still very active
  9. ocfs2 kernel bug

    Hi Fabian, I cannot reproduce it; I can just wait for the next time it happens. I'm running Proxmox on a Debian host (https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch)
  10. ocfs2 kernel bug

    Sorry, my kernel was not completely up to date. Updated it to: Linux virtm7 4.13.13-4-pve #1 SMP PVE 4.13.13-35 (Mon, 8 Jan 2018 10:26:58 +0100) x86_64 GNU/Linux
  11. ocfs2 kernel bug

    Hello, I'm running the latest Proxmox version (5.1-41) with an up-to-date kernel (Linux virtm7 4.13.13-2-pve #1 SMP PVE 4.13.13-32 (Thu, 21 Dec 2017 09:02:14 +0100) x86_64 GNU/Linux). I'm using an ocfs2 partition as shared storage between two nodes. The ocfs2 version is 1.8.4-4. Today one of my two nodes...
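The "ceph locked rbd" thread above revolves around a stale RBD exclusive lock left behind when a Proxmox (krbd) client node crashes. A rough sketch of how such a lock is usually inspected and cleared, using the pool and image names that appear in the snippets (the lock ID, client name, and address below are placeholders, not values from the thread):

```shell
# Inspect who holds a lock on the image (pool/image names from the thread)
rbd lock ls --pool c7000-pxmx1-am7 vm-201-disk-0

# Remove the stale lock; the lock ID and locker (e.g. "auto ..." and
# "client.NNNN") are placeholders -- copy the real values from the listing
rbd lock rm --pool c7000-pxmx1-am7 vm-201-disk-0 "auto 140233444" client.4123

# Optionally blacklist the dead client's address (placeholder IP) so it
# cannot write to the image if it ever comes back half-alive
ceph osd blacklist add 172.18.7.61:0/1234567890
```

These commands need admin-level caps on the cluster; with the restricted `client.pxmx1` caps quoted in result 1, the lock removal would have to be run from a Ceph node instead, which is consistent with the prompts (`root@ceph-am7-1:~#`) shown in the snippets.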