Search results

  1.

    Backing up vm's with RAW disks, size?

    Exactly that, it ensures the empty space on the file system is filled with contiguous zeros, which makes for efficient compression. I did this today with a VM hosted on a ZFS file system with compression enabled - you could see the VM disk file's usage shrink as it progressed.
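    A minimal sketch of that zero-filling step on a Linux guest, assuming an arbitrary example path (/zerofile); dd exits with "No space left on device" once the free space is consumed, which is expected here:

        # inside the guest: fill unused space with zeros, then remove the file
        dd if=/dev/zero of=/zerofile bs=1M   # stops when the filesystem is full
        rm -f /zerofile
        sync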
  2.

    [SOLVED] What do you use in Guests? Virtio-blk or the new Virtio-scsi?

    Dumb question - How do you use the virtio-scsi drivers with Guests?
  3.

    LizardFS anyone?

    HA, I think. With MooseFS you don't get failover or backup of the primary metadata server unless you buy a subscription, so if it goes down, you're stuffed. I'm a bit concerned as to whether I can run chunk and metadata servers on the Proxmox nodes themselves; mem/cpu requirements are probably too...
  4.

    LizardFS anyone?

    Just looking into it myself now, so can't answer your question :) Did you give it a try?
  5.

    Multiple IP's on Bond?

    Thanks everyone, I should have mentioned that I have dual IPs and bond working using openvswitch, but have a problem with VM network devices locking up under load; once that happens only rebooting the node fixes it. Trying to test if openvswitch is the problem.
  6.

    Multiple IP's on Bond?

    Thanks Wolfgang, but if possible I'd prefer to stick with the Proxmox UI, which doesn't allow multiple bridges with the same name.
  7.

    Multiple IP's on Bond?

    Running Proxmox 3.4 with 3 network cards per node and I'd like to create 2 IP addresses on a bond of all ports using Linux networking. Would the following be the way to do it?

        # network interface settings
        auto lo
        iface lo inet loopback

        iface eth0 inet manual
        iface eth2 inet manual
        iface...
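    For reference, a minimal sketch of what such a setup can look like with classic Linux bonding, one bridge for the VMs and a second address as an alias on that bridge; the interface names, bond mode and all addresses below are assumptions for illustration, not the poster's actual configuration:

        # /etc/network/interfaces (sketch; adjust names, mode and addresses)
        auto lo
        iface lo inet loopback

        iface eth0 inet manual
        iface eth1 inet manual
        iface eth2 inet manual

        auto bond0
        iface bond0 inet manual
                bond-slaves eth0 eth1 eth2
                bond-mode balance-rr
                bond-miimon 100

        auto vmbr0
        iface vmbr0 inet static
                address 192.168.1.10
                netmask 255.255.255.0
                gateway 192.168.1.1
                bridge_ports bond0
                bridge_stp off
                bridge_fd 0

        # second IP address on the same bridge
        auto vmbr0:1
        iface vmbr0:1 inet static
                address 192.168.2.10
                netmask 255.255.255.0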
  8.

    ZFS NAS/Distro for use with ProxMox?

    I'm considering setting up a custom dedicated shared-storage NAS using iSCSI/ZFS so as to integrate with the ZFS over iSCSI storage solution in Proxmox. Is there a recommended distro for this? FreeNAS or some such? Does the choice of iSCSI target matter? (Comstar/istgt/IET). One of the things I'd...
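    For orientation, a rough sketch of what a ZFS over iSCSI entry in /etc/pve/storage.cfg can look like; the storage name, portal, target, pool and iscsiprovider values are placeholders, and the exact option set should be checked against the Proxmox wiki for whichever target implementation is chosen:

        # /etc/pve/storage.cfg (sketch; all values are placeholders)
        zfs: zfs-iscsi
                portal 192.168.1.50
                target iqn.2003-01.org.example:storage
                pool tank
                iscsiprovider istgt
                blocksize 4k
                content images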
  9.

    Recommend me a configuration

    Bummer, a shame because ZFS is very flexible and nice to play with. However he is right about the lack of JBOD being an issue - I had the same problem with my original config, I set up a logical volume for each drive (6!) on the LSI drive controller and it was a real PITA, plus I lost all the...
  10.

    Recommend me a configuration

    I don't use containers so not sure on the answer to that - but could you just allocate a drive off pool 2 and give it to the CT?
  11.

    Recommend me a configuration

    Do you mean using linked clones off a Proxmox Template VM? Yah I've avoided that, doubt my environment is stable enough for that, I'd always be wanting to update the parent. I do do full clones off templates though.
  12.

    Recommend me a configuration

    Not sure I understand the levels there - are you planning on creating 2 ZFS pools?

        Pool1: SSD480 | SSD480
        Pool2: SSD1TB | SSD1TB | SSD1TB

    Regardless - you might want to consider enabling lz4 compression, it gives impressive results - my ZFS pool has 630GB of VMs only using 373GB of space, a...
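    A minimal sketch of turning lz4 on and checking the result, assuming a pool named tank; note that compression only applies to data written after the property is set:

        zfs set compression=lz4 tank            # new writes are compressed from here on
        zfs get compression,compressratio tank
        zfs list -o name,used,logicalused tank  # on-disk vs logical usage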
  13.

    Proxmox Storage question

    iSCSI LUN direct and LVM are both block devices, so you can only use raw format with them - there is no filesystem for a qcow2 file to be placed on.
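    If an existing qcow2 image has to move onto such block storage, it needs converting to raw first; a minimal sketch with qemu-img, using hypothetical file and volume names (Proxmox's own move-disk and restore paths normally take care of this, the command just shows the underlying conversion):

        # convert qcow2 to raw, writing straight onto the target block device
        qemu-img convert -p -f qcow2 -O raw vm-100-disk-1.qcow2 /dev/vg-proxmox/vm-100-disk-1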
  14.

    Proxmox Storage question

    Unfortunately qcow2 cannot be used with LVM or iSCSI. We use NFS here with our three nodes; it's quite functional. However random write performance is pretty dreadful with NFS. Samba actually performs better, but you have to set up the storage mounts manually.
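    A hedged sketch of that manual Samba/CIFS setup; the server, share and credentials file are placeholders, and the mount point is then added as a plain Directory storage in Proxmox:

        # /etc/fstab entry (placeholder server/share/credentials)
        //nas.example.com/proxmox  /mnt/pve/nas-smb  cifs  credentials=/etc/samba/nas.cred,_netdev  0  0

        mount /mnt/pve/nas-smb
        # then add /mnt/pve/nas-smb as a Directory storage in the GUI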
  15.

    Proxmox VE ZFS replication manager released (pve-zsync)

    Ok, thanks wolfgang. I'll have a play around with it.
  16.

    Proxmox VE ZFS replication manager released (pve-zsync)

    This looks very useful Martin and I'm just in the middle of setting up a couple of ZFS storage servers, so thank you :) One thing, on the wiki you mention: What is the actual procedure for doing that? Are the replicated VMs visible on the second node somehow?
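    For context, a minimal pve-zsync sketch with a placeholder VM ID, target address and dataset, roughly in the form shown on the wiki page referenced above; treat the exact flags as something to verify against the current documentation:

        # create a recurring sync job for VM 100 to a second node (placeholders)
        pve-zsync create --source 100 --dest 192.168.1.2:tank/backup --name nightly --maxsnap 2 --verbose
        # run a job manually and list configured jobs
        pve-zsync sync --source 100 --dest 192.168.1.2:tank/backup --verbose
        pve-zsync list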
  17.

    Intel S2600CP boot problems

    Thanks Dietmar, interesting reading. I'll try it out Monday.
  18.

    Intel S2600CP boot problems

    Am testing a converted vSphere server with Proxmox. Prior to this it never had problems with vSphere so I believe the hardware is ok.
    - Intel S2600CP Motherboard
    - 64GB RAM
    - New Intel 535 120GB SSD boot drive
    - 6 WD Velociraptor 600GB drives connected to the MB SAS ports (experimenting with...