Search results

  1. B

    LizardFS anyone?

    What I didn't mention on the list, because it didn't seem politic :), is that the same cluster was also running a gluster 3.8.7 volume, sharded, replica 3. It endured all the same stress tests and never missed a beat. Its hosted VMs all autostarted and it healed itself within a few minutes after...
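
    For reference, a sharded replica-3 gluster volume along those lines can be set up roughly like this (hostnames, brick paths and shard size are placeholders, not taken from the post):
    $ gluster volume create datastore replica 3 node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1
    $ gluster volume set datastore features.shard on
    $ gluster volume set datastore features.shard-block-size 64MB
    $ gluster volume start datastore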
  2. B

    LizardFS anyone?

    I've done a lot more testing since, and it hasn't worked out so well. Everything is hunky-dory until you actually introduce some issues, and it turns out the metadata servers are really flaky. I can reliably corrupt every running VM by power-cycling the master metadata server. Copy paste from my...
  3. B

    LizardFS anyone?

    Cool, look fwd to seeing what you think of it.
  4. B

    LizardFS anyone?

    I finally got round to setting up a test bed. 3 Debian containers (1 per Proxmox node):
    - 256GB of storage, backed by ZFS RAID10
    - 2 GB RAM
    - 2 cores
    - 1 master, 2 shadow servers
    - chunkserver
    - webserver
    - 1GB Ethernet
    Ran the fuse client on the Proxmox server (3 nodes). Took me several hours to...
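
    As a rough sketch (package name, master hostname and mountpoint are assumptions, not from the post), mounting a LizardFS setup like that from a Proxmox node with the fuse client would look something like:
    $ apt-get install lizardfs-client
    $ mkdir -p /mnt/lizardfs
    $ mfsmount /mnt/lizardfs -H lizardfs-master.example.com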
  5. B

    BUG?! 4.3: Snapshot -> Rollback -> Problem: VM reboots

    I lodged a bug for it: https://bugzilla.proxmox.com/show_bug.cgi?id=1193 I checked and the VM state .raw file is only 512 bytes (should be 1+ GB), so that is probably the problem. An issue with saving the snapshot maybe?
  6. B

    [SOLVED] All Online Migrations fail since latest updates

    Problem is resolved in latest pve-qemu-kvm (2.7.0.4)
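
    To check which build a node actually has installed, standard Proxmox/Debian tooling is enough, e.g.:
    $ pveversion -v | grep pve-qemu-kvm
    $ dpkg -l pve-qemu-kvm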
  7. B

    [SOLVED] All Online Migrations fail since latest updates

    glusterfs: gluster4
        volume datastore4
        path /mnt/pve/gluster4
        server vnb.proxmox.softlog
        maxfiles 1
        content images
    There's also a bug for it: https://bugzilla.proxmox.com/show_bug.cgi?id=1178
    It's being discussed on the dev list now, so maybe not necessary to continue here. Thanks.
  8. B

    [SOLVED] All Online Migrations fail since latest updates

    Looks to be a gluster gfapi problem - if I switch to using the gluster fuse mount then migrations are ok.
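
    For anyone trying the same workaround, a fuse mount of the volume (reusing the server/volume names from the storage config above) would look roughly like:
    $ mkdir -p /mnt/gluster-fuse
    $ mount -t glusterfs vnb.proxmox.softlog:/datastore4 /mnt/gluster-fuse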
  9. B

    [SOLVED] All Online Migrations fail since latest updates

    Oops, sorry, misunderstood the request. All three nodes should be the same:
    VNB:
    proxmox-ve: 4.3-70 (running kernel: 4.4.21-1-pve)
    pve-manager: 4.3-7 (running version: 4.3-7/db02a4de)
    pve-kernel-4.4.6-1-pve: 4.4.6-48
    pve-kernel-4.4.21-1-pve: 4.4.21-70
    pve-kernel-4.4.15-1-pve: 4.4.15-60...
  10. B

    [SOLVED] All Online Migrations fail since latest updates

    agent: 1
    boot: c
    bootdisk: scsi0
    cores: 2
    ide0: none,media=cdrom
    machine: pc-i440fx-1.4
    memory: 2048
    name: Lindsay-Test
    net0: virtio=A0:7C:D5:1C:7B:3D,bridge=vmbr0
    numa: 0
    ostype: win7
    scsi0: gluster4:301/vm-301-disk-1.qcow2,cache=writeback,size=64G
    scsihw: virtio-scsi-pci
    sockets: 1
    usb1: spice...
  11. B

    [SOLVED] All Online Migrations fail since latest updates

    Online migration appears to be broken:
    Oct 23 12:10:27 starting migration of VM 301 to node 'vna' (192.168.5.243)
    Oct 23 12:10:27 copying disk images
    Oct 23 12:10:27 starting VM 301 on remote node 'vna'
    And it just hangs on the last line. Happens with all VMs. Started happening since today's...
  12. B

    cannot restore to glusterfs

    Same problem here (and thanks for the bug report). There's an easier way to work around it - add the gluster fuse mount as shared directory storage, restore the backup to that, then edit the vm.conf file and change the storage entries to the main gluster mount.
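
    A rough sketch of that workaround (storage name and path are made up): add the fuse mountpoint to /etc/pve/storage.cfg as a shared directory storage,
    dir: gluster-fuse
        path /mnt/pve/gluster-fuse
        content images
        shared 1
    then restore the backup to gluster-fuse and, in /etc/pve/qemu-server/<vmid>.conf, change the disk lines from gluster-fuse:... back to the gfapi-backed gluster storage.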
  13. B

    CEPH storage corrupting disks when a CEPH node goes down..

    Have you got the virt group settings set? Can you post your gluster volume info and gluster version?
    $ gluster --version
    $ gluster volume info
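
    For context, the "virt group" is gluster's predefined virt option group; applying and checking it looks roughly like this (volume name is a placeholder):
    $ gluster volume set datastore group virt
    $ gluster volume info datastore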
  14. B

    ScaleIO on Proxmox

    My 2 cents: I'm getting good results with gluster (3.7.11), significantly faster than ceph for me and integrated with Proxmox.
  15. B

    Slow vzdump backups to NFS

    /etc/vzdump.conf has some settings for bandwidth limiting - maybe they are set?
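
    The relevant knob in /etc/vzdump.conf is bwlimit (in KB/s); a line like the following (the value is only an example) would throttle backups:
    bwlimit: 10240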
  16. B

    Intel S2600CP boot problems

    Yes it did, no problems since.
  17. B

    Intel 530 SSD - problem with reclaiming space

    Thanks LnxBill. I removed it from the ZFS pool, erased all the partitions and ran blkdiscard on it, so that should have released any free blocks. 45 TB is a lot, though pretty normal for a ceph journal, and it's my understanding these devices get into the petabyte range before failing in reality...
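
    The cleanup described would look something like this (pool and device names are placeholders, and wipefs is just one way to clear the partitions):
    $ zpool remove tank /dev/sdX    # detach the SSD slog/cache device from the pool
    $ wipefs -a /dev/sdX            # clear partition table and signatures
    $ blkdiscard /dev/sdX           # TRIM the whole device so the controller can free the blocks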
  18. B

    Intel 530 SSD - problem with reclaiming space

    I have two Intel 530's in two separate nodes that are used as journals for three ceph OSDs each (three journal portions on the SSD). The SSDs have been in use for 18 months; for the past two months I've also been using them as a slog/cache device for a ZFS pool. Partition layout is as...
  19. B

    Cannot start KVM

    Thanks madmanidze, I had this exact same problem this morning and that solved it.
  20. B

    Do NOT use SUSE virtio drivers from Windows Update

    Re: SUSE Block/SCSI Driver for Windows renders guest system unbootable
    Oh yeah, this one has caused me a lot of grief this morning. If SUSE released these to MS, they have a lot to answer for. It doesn't show in 2008R2 unfortunately - any idea how to remove it there? Thanks.