Search results

  1. High available ZFS storage

    No, he was using dual-head SAS JBOD enclosures.
  2. High available ZFS storage

    Are you serious? You post links with this tag "If somebody wants to try making a high available ZFS storage for Proxmox..." The blog articles refer specifically to the OpenSolaris architecture. They will be pretty close to 100% useless to someone who wants to do HA ZFS on a Debian platform...
  3. High available ZFS storage

    Mir, did you read the blog articles at all? The whole thing is for an OmniOS (or OpenIndiana) based solution, not Linux. If you want something Linux based (for Proxmox), you need to go with ZFS on Linux. Dietmar, the blog mir posted to uses pacemaker+heartbeat. I tried to use it but it did...
  4. [HELP] Not work HA with Proxmox 3.2 with 3 nodes in CLUSTER

    There are a couple of different VMware guest fence agents (dunno if they are available in Debian or not...)
  5. ZFS local

    I don't remember if Proxmox will allow subdirectories in such a local datastore. If they do, just create (via the CLI) a dataset inside the top-level 'storage' one, then create each KVM in the sub-dataset. If this is not allowed, I think you're stuck with creating N different storage directories... (a zfs create sketch follows this list)
  6. Convert running Win8 sata to virtio

    Add a small dummy disk to the VM as virtio. Boot Windows. Hopefully it installs the virtio block driver. Shut Windows down. Delete the fake disk. Change the real disk from SATA to virtio. Reboot. (a qm sketch follows this list)
  7. ext4 failure, can it be caused by cache=writeback?

    I have no idea, like I said. This is why I was getting annoyed (sorry). It started off as a spurious assertion that mirrored writes eat half your IOPS and now we're off to a completely unrelated topic :(
  8. ext4 failure, can it be caused by cache=writeback?

    Cesar, let me try it this way: if you are really doing random writes, chances are both drives' heads will be out of position and need to be moved before the requested block(s) can be written. If so, it doesn't really matter whether the two drives' heads are in different locations or not - as if...
  9. ext4 failure, can it be caused by cache=writeback?

    Either there is a language issue here, or he is completely clueless. If you can cite a *single* source that backs your point of view, feel free. One last time, IOPS is from the point of view of the client, not individual spindles (or whatever.) Or, if you try to claim otherwise, since the...
  10. ext4 failure, can it be caused by cache=writeback?

    I'm trying to be polite here, but this is nonsense. Any useful metric involving IOPS is related to what an application, remote host, etc, can do. No one cares how many low-level writes are done by the storage subsystem. In a very literal sense, there are two writes being done, but this should...
  11. ext4 failure, can it be caused by cache=writeback?

    That is a distinction without a difference. A write comes in to the storage layer. It issues a write to block N on disk0, then issues the same write to block N on disk1. Both writes need to complete for the logical write to complete, but as they would go in parallel, the IOPS should be the same.
  12. ext4 failure, can it be caused by cache=writeback?

    Only for a broken software raid implementation. Writes *should* be able to go in parallel.
  13. Join two nodes Proxmox

    Crossover cables should not be necessary when using 1GbE or higher - the PHYs are smart enough to autoconfigure.
  14. Proxmox VE Ceph Server released (beta)

    Without more info, it's hard to say, but all other things being equal SAS should certainly not be slower than SATA!
  15. Please Help: I cant install Proxmox, because ther are partitons on the HD.

    You should be able to boot with any Linux live CD and blow away the partition table. (a disk-wipe sketch follows this list)
  16. iscsi LVM fails after boot of the host

    Dunno, sorry. All I can say is that I've seen other distros where iSCSI LUNs are found fairly late in the game. What you are describing sounds like that, but it's hard to say for sure. (an iscsiadm sketch follows this list)
  17. iscsi LVM fails after boot of the host

    Sounds like the iSCSI initiator is not getting the LUN activated in time?
  18. install pacemaker on a proxmox node

    ZFS was not in the discussion. He stated he didn't think HA NFS could work. Maybe I was harsh, but I know a number of products which do precisely this. It is not my job to spend time and effort doing his/your homework. If you believe this is not possible, the onus is on you to explain why...
  19. install pacemaker on a proxmox node

    To the sweeping generalization about how HA and NFS don't and can't work: I'm sorry, but you obviously have no clue what you are talking about. There are quite a few vendors that provide enterprise HA/NFS solutions. Your explanation as to why you believe this (kernel locks and such) is...
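
A short sketch of the sub-dataset idea from result 5, assuming a hypothetical pool named rpool with a top-level dataset mounted at /rpool/storage and used as a Proxmox directory storage (all names are illustrative, not taken from the original thread):

    # Top-level dataset backing the Proxmox directory storage (pool and dataset names are assumptions)
    zfs create rpool/storage
    # One child dataset per guest; that guest's disk images then live under its mountpoint
    zfs create rpool/storage/vm-100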
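
The dummy-disk trick from result 6 could look roughly like this with the qm CLI (VM ID 100 and storage name local-lvm are assumptions; the same steps can be done in the web GUI):

    # 1. Attach a small 1 GB dummy disk on the virtio bus so Windows installs the driver
    qm set 100 --virtio1 local-lvm:1
    # 2. Boot Windows, let it detect the disk and install the virtio block driver, then shut down
    # 3. Remove the dummy disk again
    qm set 100 --delete virtio1
    # 4. Switch the real disk's bus, e.g. by changing the "sata0:" line to "virtio0:"
    #    in /etc/pve/qemu-server/100.conf, then boot the guest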
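
For result 15, wiping the partition table from a live CD might look like this, assuming /dev/sda is the disk Proxmox should be installed on (double-check the device name before running anything):

    # Zero the MBR and its partition table
    dd if=/dev/zero of=/dev/sda bs=512 count=1
    # For GPT disks, wipe both the primary and backup GPT structures instead
    sgdisk --zap-all /dev/sda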
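
For the boot-order issue in results 16 and 17, one thing worth checking (a sketch, not taken from the original thread; the target IQN and portal address are placeholders) is whether the initiator is set to log in automatically at boot, before LVM scans for volumes:

    # List recorded targets and their settings
    iscsiadm -m node
    # Switch a target to automatic login at boot
    iscsiadm -m node -T iqn.2014-01.example:target0 -p 192.168.1.10 \
        -o update -n node.startup -v automatic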
