Recent content by escoreal

  1. ACME certificates & AutoDNS - configuration question

    Quick Update on the AutoDNS Login Issue: After some debugging, it was identified that the issue stemmed from the use of quotation marks (" and ') in the environment variables within the dns_autodns.sh script. Original Variables: AUTODNS_USER="AutoDNS username" AUTODNS_PASSWORD="AutoDNS...
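
    A minimal sketch of the corrected variables, assuming the fix was simply dropping the surrounding quotation marks so they are not taken as part of the credentials (the values below are placeholders, not real ones):

        # corrected: no quotation marks around the values
        AUTODNS_USER=autodns-username
        AUTODNS_PASSWORD=autodns-password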
  2. ACME certificates & AutoDNS - configuration question

    We have the same problem with PVE. Have you already found a solution?
  3. repair boot (zfs) - proxmox-boot-tool didn't work.

    For the next person finding this topic, maybe a solution. It worked for me. mount --bind /run /mnt/run
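
    For context, a sketch of the chroot preparation this bind mount usually belongs to; everything except the quoted mount --bind /run line is an assumption (including the /mnt mount point and the final proxmox-boot-tool call):

        # assuming the ZFS root is already imported and mounted at /mnt from a live system
        mount --bind /dev  /mnt/dev
        mount --bind /proc /mnt/proc
        mount --bind /sys  /mnt/sys
        mount --bind /run  /mnt/run   # the step quoted above
        chroot /mnt /bin/bash
        proxmox-boot-tool refresh     # then retry the boot repair inside the chroot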
  4. rbd map: "RBD image feature set mismatch"

    OK, so I added "rbd default features = 5" to the ceph.conf. The default was 61. So I only have "layering" (1) and "exclusive-lock" (4); "object-map" (8), "fast-diff" (16) and "deep-flatten" (32) are now disabled by default.
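
    A hedged sketch of how that setting could look in /etc/ceph/ceph.conf; the post only names the option, the section placement is an assumption:

        [global]
        # 5 = layering (1) + exclusive-lock (4)
        rbd default features = 5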
  5. rbd map: "RBD image feature set mismatch"

    I don't "want" VMs to use the kernel rbd driver. The question was if this is a bug? So, if I understand this right the rbd kernel module is not up to date? Will this stay this way? I just want easy direct access from the host to the volumes. If this will stay this way I could script some...
  6. rbd map: "RBD image feature set mismatch"

    Just a plain installation of PVE 5 with Ceph 12.1 (pveceph install..) and VMs (with volumes) created from the PVE web interface. Nothing special. So you can't reproduce this?
  7. rbd map: "RBD image feature set mismatch"

    Hello, I am testing PVE 5 with Ceph (12.1) and wanted to "map" a Ceph volume, but I get an error. Is this a bug? Did that work with other versions of PVE or Ceph? Thanks, esco
    # rbd map <ceph-pool>/foo
    rbd: sysfs write failed
    RBD image feature set mismatch. Try disabling features unsupported...
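
    A hedged example of the workaround the error message hints at, using the pool/image names from the command above: disable the features the kernel client does not support on that image, then map it again.

        rbd feature disable <ceph-pool>/foo object-map fast-diff deep-flatten
        rbd map <ceph-pool>/foo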
  8. Slow live migration performance since 5.0

    Hello, I can reproduce this, too. But I don't think this is network related. It looks like the VM gets completely "frozen" for some seconds (here 17 s). Some details: empty PVE 5 test cluster with 3 nodes and a 10 GBit/s network, 10 GB "clean" Ubuntu VM on ZFS. Ping from outside: PING 172.20.60.128...
  9. [SOLVED] Proxmox 4.0: Failed to boot after zfs install : cannot import rpool: no such pool

    I completely zeroed the disks and reinstalled. Now it works. What might be interesting is that the disks didn't have a partition table in the previous installation with software RAID. It was a software RAID 1 using the whole disks (sda and sdb). But using ZFS shouldn't depend on the previous layout...
  10. [SOLVED] Proxmox 4.0: Failed to boot after zfs install : cannot import rpool: no such pool

    # blkid -l -t TYPE=zfs_member
    (nothing)
    # blkid
    The partitions with UUID (sd{a,b}{1,2,9})
  11. [SOLVED] Proxmox 4.0: Failed to boot after zfs install : cannot import rpool: no such pool

    zpool import: no pools available to import. But the disks are there. With "ls /dev/.." and "cat /proc/partitions" I can see them.
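
    A hedged sketch of what can help when the default scan finds nothing although the disks are visible: point zpool import explicitly at the device directory. The pool name rpool is taken from the thread title; the rest is an assumption.

        # list pools found on the devices in /dev/disk/by-id
        zpool import -d /dev/disk/by-id
        # import rpool without mounting it, forcing if it was not exported cleanly
        zpool import -d /dev/disk/by-id -f -N rpool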
  12. [SOLVED] Proxmox 4.0: Failed to boot after zfs install : cannot import rpool: no such pool

    Hello wolfgang, I have the same problem with ZFS RAID 1. Board: http://www.supermicro.com/products/motherboard/Xeon/D/X10SDV-8C-TLN4F.cfm esco
  13. KVM offline migration

    Hello, same question here. Why no offline migration with LVM? Was this dropped? Thanks, esco
  14. Backup Solutions

    Hello Michele, I have been using LVM snapshots with storebackup for a few years for incremental backups without problems. If you are looking for something newer to back up LVM snapshots incrementally, I would take a look at zbackup or bup. But bup was a bit slow in my first test...
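
    A minimal sketch of the snapshot-then-backup pattern described here, assuming made-up volume group and logical volume names and an arbitrary snapshot size:

        # create a read-only view of the data at this point in time
        lvcreate --snapshot --size 5G --name backup-snap /dev/vg0/data
        mount -o ro /dev/vg0/backup-snap /mnt/snap
        # run the incremental backup tool (storebackup, bup, zbackup, ...) against /mnt/snap
        umount /mnt/snap
        lvremove -y /dev/vg0/backup-snap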