drbd9 replication is not thin on replicated node

Discussion in 'Proxmox VE: Installation and configuration' started by mmenaz, Feb 22, 2016.

  1. mmenaz

    mmenaz Member

    Joined: Jun 25, 2009
    Hi, Proxmox 4.1 (enterprise), 3-node cluster with two of the nodes running DRBD9 thin-provisioned storage (the thin-provisioning-tools package is installed on both nodes).
    I've created a 400 GB second disk on node1, and since it is empty at the moment, lvs shows "Data% 0.05" for it.
    But if I check on the second node, I see "Data% 100.00"!
    Am I missing something?
    root@prox01:~# pveversion -v
    proxmox-ve: 4.1-37 (running kernel: 4.2.8-1-pve)
    pve-manager: 4.1-13 (running version: 4.1-13/cfb599fb)
    pve-kernel-4.2.6-1-pve: 4.2.6-36
    pve-kernel-4.2.8-1-pve: 4.2.8-37
    lvm2: 2.02.116-pve2
    corosync-pve: 2.3.5-2
    libqb0: 1.0-1
    pve-cluster: 4.0-32
    qemu-server: 4.0-55
    pve-firmware: 1.1-7
    libpve-common-perl: 4.0-48
    libpve-access-control: 4.0-11
    libpve-storage-perl: 4.0-40
    pve-libspice-server1: 0.12.5-2
    vncterm: 1.2-1
    pve-qemu-kvm: 2.5-5
    pve-container: 1.0-44
    pve-firewall: 2.0-17
    pve-ha-manager: 1.0-21
    ksm-control-daemon: 1.2-1
    glusterfs-client: 3.5.2-2+deb8u1
    lxc-pve: 1.1.5-7
    lxcfs: 0.13-pve3
    cgmanager: 0.39-pve1
    criu: 1.6.0-1
    zfsutils: 0.6.5-pve7~jessie
    drbdmanage: 0.91-1
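    A quick way to spot this kind of mismatch is to compare the Data% column that `lvs --noheadings -o lv_name,data_percent` prints on each node. The sketch below is illustrative only — the LV name and the two sample outputs are hypothetical, not taken from the poster's cluster; in practice you would feed it the real `lvs` output from both nodes (e.g. over ssh).

    ```python
    def parse_lvs(output):
        """Parse `lvs --noheadings -o lv_name,data_percent` output into {lv_name: percent}."""
        usage = {}
        for line in output.strip().splitlines():
            name, pct = line.split()
            usage[name] = float(pct)
        return usage

    # Hypothetical sample outputs standing in for the two nodes' lvs output.
    node1 = parse_lvs("  vm-100-disk-2  0.05")
    node2 = parse_lvs("  vm-100-disk-2  100.00")

    # Flag any LV whose reported thin usage diverges noticeably between nodes.
    for lv in node1:
        if abs(node1[lv] - node2.get(lv, 0.0)) > 1.0:
            print(f"{lv}: node1 {node1[lv]}% vs node2 {node2[lv]}%")
    ```

    With the sample data above this prints the 0.05% vs 100.00% divergence the post describes, which is the symptom of the replica having been fully written (or fully synced) rather than thinly allocated.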