Are you serious? You post links with the tagline "If somebody wants to try making a highly available ZFS storage for Proxmox..." The blog articles refer specifically to the OpenSolaris architecture. They will be pretty close to 100% useless to someone who wants to do HA ZFS on a Debian platform...
Mir, did you read the blog articles at all? The whole thing is for an OmniOS (or OpenIndiana) based solution, not Linux. If you want something Linux-based (for Proxmox), you need to go with ZFS on Linux. Dietmar, the blog mir posted uses Pacemaker + Heartbeat. I tried to use it but it did...
I don't remember whether Proxmox allows subdirectories in such a local datastore. If it does, just create (via the CLI) a dataset inside the top-level 'storage' one, then create each KVM guest in its own sub-dataset. If that isn't allowed, I think you're stuck with creating N different storage directories...
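If subdirectories do work, a minimal sketch of the dataset layout (the pool name 'tank', the dataset names, and the VM ids are all made up here):

```sh
# one sub-dataset per KVM guest, nested under the top-level 'storage' dataset
zfs create tank/storage/vm-101
zfs create tank/storage/vm-102
zfs list -r tank/storage      # verify the hierarchy and mountpoints
```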
Add a small dummy disk to the VM as virtio. Boot Windows. Hopefully it installs the virtio block driver. Shut down Windows. Delete the fake disk. Change the real disk from SATA to virtio. Reboot.
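On the Proxmox CLI that could look roughly like this (VM id 100 and the storage name 'local' are assumptions, adjust to your setup):

```sh
qm set 100 --virtio1 local:1      # attach a 1 GB dummy disk on the virtio bus
qm start 100                      # boot Windows; it should pick up the virtio block driver
qm shutdown 100
qm set 100 --delete virtio1       # drop the dummy disk again
# then move the real disk from SATA to virtio, e.g. by renaming the line in
# /etc/pve/qemu-server/100.conf from "sata0:" to "virtio0:", and reboot:
qm start 100
```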
I have no idea, like I said. This is why I was getting annoyed (sorry). It started off as a spurious assertion that mirrored writes eat half your IOPS, and now we've drifted to a completely unrelated topic :(
Cesar, let me try it this way: if you are really doing random writes, chances are both drives' heads will be out of position and will need to move before the requested block(s) can be written. If so, it doesn't really matter whether the two drives' heads are in different locations or not - as if...
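Back-of-envelope numbers make the point (the 8 ms seek and 7,200 rpm figures are assumed typical values, not from this thread):

```sh
# per random write: ~8 ms average seek + ~4.2 ms average rotational latency
awk 'BEGIN { printf "%.0f IOPS\n", 1000 / (8 + 4.2) }'    # ~82 IOPS
# a two-way mirror still delivers ~82 logical writes/s: both heads seek concurrently
```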
Either there is a language issue here, or he is completely clueless. If you can cite a *single* source that backs your point of view, feel free. One last time: IOPS is measured from the point of view of the client, not of the individual spindles (or whatever). Or, if you try to claim otherwise, since the...
I'm trying to be polite here, but this is nonsense. Any useful metric involving IOPS relates to what an application, remote host, etc., can do. No one cares how many low-level writes the storage subsystem performs. In a very literal sense, there are two writes being done, but this should...
That is a distinction without a difference. A write comes into the storage layer. It issues a write to block N on disk0, then issues the same write to block N on disk1. Both writes must complete before the logical write completes, but since they proceed in parallel, the IOPS should be the same.
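A quick shell demo of why the parallel writes don't halve throughput (the two sleeps are stand-ins for assumed 12 ms and 10 ms disk writes):

```sh
# two mirrored writes issued in parallel: wall time is ~max(12, 10) ms, not 22 ms
time ( sleep 0.012 & sleep 0.010 & wait )
```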
Dunno, sorry. All I can say is that I've seen other distros where iSCSI LUNs are discovered fairly late in the boot process. What you are describing sounds like that, but it's hard to say for sure.
ZFS was not in the discussion. He stated he didn't think HA NFS could work. Maybe I was harsh, but I know a number of products which do precisely this. It is not my job to spend time and effort doing his/your homework. If you believe this is not possible, the onus is on you to explain why...
Regarding the sweeping generalization about how HA and NFS don't and can't work: I'm sorry, but you obviously have no clue what you are talking about. There are quite a few vendors that provide enterprise HA NFS solutions. Your explanation of why you believe this (kernel locks and such) is...