Search results

  1. vkhera

    Gluster storage

I think this is a special case for a 2-node cluster. You can still have quorum in a 4-node cluster when one node goes away... but you do risk split brain when there is a partition of two and two. Not sure how gluster works in that situation. But basically, gluster with two nodes is a disaster...
  2. vkhera

    Gluster storage

    Gluster has its very own quorum computation. It is unrelated to proxmox quorum. The latest versions of gluster allow you to use a non-data storage node to add to the quorum decision. If you have a third small node for proxmox quorum votes, then you can use that as well for gluster.
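The arbiter setup described above can be sketched as follows. This is a minimal, hedged example of gluster's `replica 3 arbiter 1` volume layout; the volume name, hostnames, and brick paths are hypothetical:

```shell
# A replica-2 volume risks split brain; an arbiter brick on a third, small
# node stores only metadata but gives gluster a tie-breaking quorum vote.
# node3 is the hypothetical small node also used for the proxmox quorum vote.
gluster volume create vmstore replica 3 arbiter 1 \
    node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/vmstore-arbiter
gluster volume start vmstore
```

The arbiter brick needs very little disk, since it holds file metadata rather than data.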
  3. vkhera

    Gluster storage

    Personally I have had just the most horrible slow performance with Gluster. So much so that I decided to stick with local storage and lose the benefits of the shared storage. Also, with two nodes, you really have no safety if one of your nodes dies, and that node happens to be the first node you...
  4. vkhera

    qcow2 or raw in top of zfs

Such a live migration is actually easier with ZFS send/receive. I hope that if/when this feature is built, ZFS storage is supported.
  5. vkhera

    Insane load avg, disk timeouts w/ZFS

    Has setting zfs_arc_lotsfree_percent=0 caused you any problems?
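For context, `zfs_arc_lotsfree_percent` is a ZFS-on-Linux kernel module parameter; a minimal sketch of setting it to the value asked about (a config fragment, not a recommendation):

```shell
# Set at runtime (takes effect immediately, lost on reboot):
echo 0 > /sys/module/zfs/parameters/zfs_arc_lotsfree_percent

# Persist across reboots via modprobe configuration:
echo "options zfs zfs_arc_lotsfree_percent=0" >> /etc/modprobe.d/zfs.conf
```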
  6. vkhera

    Proxmox VE 4.4 released!

    The noVNC console works for me with Sierra's Safari. My PVE is still running 4.4-1.
  7. vkhera

    Storage "local" and "local-zfs"

    As your screenshots show, the "local" is a directory. It just happens to live on a ZFS file system. Your "local-zfs" is a ZFS storage, which means that when you create a disk, it will create a ZFS volume to store the raw data. This is the most efficient storage you can have for this purpose...
  8. vkhera

    qcow2 or raw in top of zfs

    Live migration requires a shared storage. ZFS is not shared.
  9. vkhera

    KVM disk corruption on glusterfs

    So the definitive answer I have gotten is that with 2 data nodes, if the "first" node goes down, the cluster stops working, and you get this corruption because the clients don't seem to stop trying to write. You'd think that the client would get a write error and panic or something, but it does not.
  10. vkhera

    When will KVM live / suspend migration on ZFS work?

When you use bonded interfaces, I do not believe a single connection will use more than a single NIC. Everything I've read about bonded interfaces says they only spread the load across multiple clients.
  11. vkhera

    upgrade host

I would recommend using the built-in backup/restore process rather than trying to copy the disk and config files manually.
  12. vkhera

    Poor performance with ZFS

If you are measuring fsyncs, then having a cache on your RAID controller (assuming it is battery-backed and actually used safely) is going to make it faster. Having an SSD SLOG device on your ZFS pool is the way to speed up that operation in ZFS.
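As a sketch of the SLOG suggestion above, attaching a dedicated log device to an existing pool looks like this; the pool name `tank` and device paths are hypothetical:

```shell
# A separate log (SLOG) device absorbs synchronous writes (fsyncs),
# taking the slow data vdevs out of the fsync path.
zpool add tank log /dev/disk/by-id/nvme-FAST_SSD-part1

# Mirror the log if you want to survive a SLOG failure at crash time:
# zpool add tank log mirror /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2
```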
  13. vkhera

    Restoring vmdk into Promox 4 running ZFS

Is the VM you want to import stored as a backup file? PVE should just do the right thing when restoring it.
  14. vkhera

    horribly slow gluster performance

I'm running 3.5.2 as shipped with PVE. Where did you get those gluster settings? I'm using the recommended "virt" settings from the gluster wiki. There the cache is disabled. I don't understand why, but that seems to be the recommended configuration.
  15. vkhera

    Switching from VMware to Proxmox. Need help designing new environment.

    The ZIL is *never* read unless it is replaying after a crash. You cannot lose data until your system crashes, and the only window of opportunity to lose the data is during the interval where the flush happens. After the SLOG device fails, further ZIL data is written to the pool as if that device...
  16. vkhera

    Switching from VMware to Proxmox. Need help designing new environment.

    Your understanding of the SLOG/ZIL is out of date. https://forums.freenas.org/index.php?threads/time-to-reevaluate-the-mirror-your-slog-zil-drive-recommendation.23445/
  17. vkhera

    Switching from VMware to Proxmox. Need help designing new environment.

    Only if the system crashes. ZFS is supposed to survive a ZIL failure -- all that data is still in RAM.
  18. vkhera

    Two Node high availability cluster on Proxmox 4.x

    Not exactly sure what you're trying to show here. If you unplug the cable between these two boxes, do they both think they're in charge? That is nonsense. There is no way to do HA without an odd number of nodes voting.
  19. vkhera

    PVE 4.1, systemd-timesyncd and CEPH (clock skew)

I don't think the systemd timesync thing is very accurate either. When I live migrate a running FreeBSD VM, it complains that the clock went backwards. Sometimes this causes processes to crash.
  20. vkhera

    [SOLVED] Help needed in recovering node with ZFS mirror

    This procedure should work. I did it a while ago, but I did not replace the small drive in the mirror and just left it as a single drive. You will need to issue this command to tell ZFS to use the new full size of your drives after the small drive is removed: zpool online -e rpool /dev/sda2...
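Filling out the general shape of that last step (the post's own command is truncated, and the device names here are hypothetical):

```shell
# Once the smaller disk has been removed from the mirror, tell ZFS to
# expand the remaining device to the full capacity of the larger drive:
zpool detach rpool /dev/sdb2        # drop the small drive from the mirror
zpool online -e rpool /dev/sda2    # expand to the new full size
zpool list rpool                    # SIZE should now reflect the larger disk
```

Setting `autoexpand=on` on the pool makes this expansion happen automatically on future replacements.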
