...a few minutes later at the installer GUI: http://i.imgur.com/dTD4ntf.png
The wiki should be updated accordingly to reflect this new GUI installer option instead of referring to the command prompt (example: https://pve.proxmox.com/wiki/Debugging_Installation).
I'm just installing a new node with Proxmox 3.4 and I noticed that the boot prompt seems to have disappeared. Since I want to use ext4 for the local LVM and not ext3, I kinda need it. I tried debug install mode, but didn't manage to get to the correct prompt from there either. Where has it gone and...
So to summarize, nobarrier and nodelalloc are the way to go then, as long as BBUs are present one way or another, and it's also fine to use ext4 with Proxmox VE to make use of its benefits. Thank you very much for the help, guys, really appreciated.
Note to myself and other readers: To get the...
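For reference, the conclusion above would translate into an /etc/fstab entry along these lines. This is only a sketch: the device path and mount point are placeholders for whatever your ext4 volume actually is, and as discussed, nobarrier/nodelalloc should only be used when the controller's write cache is battery-backed.

```
# /etc/fstab — ext4 with write barriers and delayed allocation disabled.
# Only safe with a BBU-backed RAID controller write cache.
# /dev/pve/data and /var/lib/vz are placeholders for your actual setup.
/dev/pve/data  /var/lib/vz  ext4  defaults,noatime,nobarrier,nodelalloc  0  2
```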
That's exactly what I needed - thanks a lot! I also assumed that it has to be stable enough if that many distros use it as default filesystem, but I was just wondering why exactly Proxmox VE doesn't use it by default and I found some (older) reports in the forums that it would cause issues with...
Ok, I was somehow under the impression that BBUs would "fix" the dangers of data=writeback, but apparently I was wrong. But with BBUs it should still be safe to disable barriers, right? I mean at least the official documentation says so.
Thanks for your response!
As stated in my OP, the RAID controllers I use have battery backup units, so the write cache should still be written to disk even in case of power loss/PSU failure. Am I right in assuming that this makes it fine to use writeback and nobarrier, or would either one...
I know this question has been asked a few times already and I was reading through every topic in this forum I could find about it, but some are outdated or only contain inconclusive answers. The question is: Is it safe or even recommended to use EXT4 as the storage file system (in my case local...
Which of those two NIC emulators (or paravirtualized network drivers) performs better with high PPS throughput to KVM guests? Google lacks results on this one and it would be interesting to know if anyone benchmarked both with Proxmox and to what kind of conclusion they came. Thanks in advance...
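For anyone who wants to benchmark this themselves, switching the NIC model of an existing KVM guest between runs is a one-liner with `qm` (the VM ID 100 and bridge vmbr0 here are just example values):

```
# Emulated Intel e1000 NIC
qm set 100 -net0 e1000,bridge=vmbr0
# Paravirtualized virtio NIC (guest needs virtio-net drivers installed)
qm set 100 -net0 virtio,bridge=vmbr0
```

In general virtio is expected to do better on packets per second, since it skips emulating real hardware, but actual numbers will depend on the guest drivers and the workload.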
Thanks, now I understood that too! So the only question left would be how to split the SSD in two partitions, one for Proxmox and one for Ceph journal. Would "maxroot=128" during Proxmox install leave the rest of the space unassigned and theoretically free for a Ceph journal, assuming I have a...
Again, thanks for your replies! If I want to have two partitions on one SSD, one for Proxmox and one for Ceph journal, can I achieve that with "maxroot", i.e. will it leave the remaining space empty? For instance, if I have a 256GB SSD and set "maxroot=128", would it leave the remaining space...
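If the boot prompt is available (older ISOs), the installer options are passed like this; the sizes are in GB and the values below are just examples. Note that, as far as I can tell, "maxroot" only limits the root LV inside the space the installer claims, while "hdsize" caps the total space taken from the disk, which is what would actually leave the remainder unpartitioned for a Ceph journal:

```
boot: linux ext4 hdsize=128 maxroot=32 swapsize=8
```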
Thank you for your detailed reply, symmcom, it was really helpful! Every time I think about it, I run into new questions unfortunately.
1.) With the 1 SSD + 1 HDD per server scenario, instead of using the HDD for journal, wouldn't it result in better performance to run both the Ceph journal and...
Sorry I didn't point that out, but yes, I want to use the Ceph that comes with Proxmox, i.e. run both on the same node. Initially I was looking for the best solution on how to start with just a single node and add 2 more nodes later and turn them into a Proxmox & Ceph cluster with HA fencing. It...
So it would be better to wait until the Firefly Ceph release is integrated into Proxmox, so I don't end up with an SSD that I don't need anymore? Of course, if all my questions are answered, I could extend the Ceph Server wiki page to better elaborate on different scenarios / cluster setups, but my...
Thank you for your reply, Udo. So you're suggesting that I should use local storage until I hit 3 nodes. I could leave one SSD and one HDD idle per server, use one SSD for Proxmox and one HDD as local storage and later, when I have 3 active nodes, use the idle SSD for Ceph journal and the idle...
Thanks for your reply.
Apart from non-existent HA or redundancy, why exactly would it be so bad to start with Ceph on just 1 server? I would add a second one as soon as the first one reaches ~30-50%. Unfortunately my question about how to adjust the pool settings if more servers are being added...
Tom, thanks for your reply. I know it doesn't make much sense, but the reason I want to do this is because 3 nodes would stay idle for too long and just consume power and money, so I only want to start adding nodes when the first one isn't idle anymore, until we finally reach 3 nodes and can...
I'd like to know if it's possible to start a Ceph cluster with just 1 node, then add a second one later and finally a third? I know it wouldn't have any redundancy, but running 3 nodes from the start isn't suitable in my case. I think the problem would be to edit the pool config and...
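In case it helps anyone trying the same, the pool side of this would look roughly like the following, assuming the default "rbd" pool (adjust the pool name to yours). The chooseleaf setting is needed so CRUSH will place replicas on different OSDs of the same host instead of requiring multiple hosts:

```
# ceph.conf — allow replica placement within a single host
[global]
osd crush chooseleaf type = 0

# While running on one node: keep a single replica
ceph osd pool set rbd size 1
ceph osd pool set rbd min_size 1

# After the second and third node are added, raise replication again
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
```

Ceph will then rebalance existing data onto the new OSDs on its own; the size/min_size changes just tell it how many copies to keep.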