As long as your nodes are balanced (as in, have the same total capacity), mixed sizes should work. Only you can answer the TBW question, since only you know how much you're going to write, but bear in mind that rebalances can be pretty write-intensive.
This doesn't matter much, so long as you're...
Honestly, unless there is some reason to try to shove all your interfaces to the same /24, it would probably be a lot easier to troubleshoot if you gave each network its own /24. Your config should work; I'm guessing you have masking issues.
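To illustrate the one-/24-per-network layout (interface names and addresses below are made up for the example, not taken from your config), something like this in /etc/network/interfaces:

```
# sketch only; substitute your own NIC names and ranges
auto eno1
iface eno1 inet static
    address 192.168.10.11/24   # management
auto eno2
iface eno2 inet static
    address 192.168.20.11/24   # cluster/corosync
auto eno3
iface eno3 inet static
    address 192.168.30.11/24   # storage
```

With each network in its own /24 there's no ambiguity about which interface a given route uses, which makes mask mistakes much easier to spot.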
Technically yes, but consider the consequences: if communication is disrupted between the nodes for ANY reason, EACH ONE would consider itself to be the survivor.
In the SMB space, quite a few. I am seeing a ton of interest; one of my customers is actively migrating their whole production environment, and I am in discussion with at least two more. In the larger space... your points are well taken.
This is not a sane approach. When you have multiple failure domains, the design should account for that; e.g., two separate DCs with the potential for disrupted connectivity between them should be redundant with each other (and have an outside witness node), not members of the same failure domain.
And again, even if you insisted...
True in theory. In practice, the chances of the cluster splitting down the middle (so half the nodes only see themselves and not the other half) are so astronomically low they may as well be zero. If this is really a concern for you, you can always set your quorum minimum at n/2 + 1 so you'd get...
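To make the majority arithmetic concrete (my numbers, not from the thread), a strict majority of n total votes is n // 2 + 1:

```python
def quorum_min(n: int) -> int:
    """Minimum votes needed for quorum: a strict majority of n total votes."""
    return n // 2 + 1

for n in (3, 4, 6):
    print(f"{n} votes -> need {quorum_min(n)} for quorum")
```

Note what this does to an even split: with 6 votes, each half holds 3, which is below the required 4, so neither side can claim quorum and you trade split-brain risk for a full stop.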
PVE clustering requires 3 nodes. The 3rd node can be a simple quorum vote, as @leesteken linked, but don't confuse that for "replication."
ZFS replication is a separate issue; see https://pve.proxmox.com/wiki/PVE-zsync
100Gb is great, but bandwidth is only one consideration; contention is the real enemy, especially when there is a Ceph rebalance storm. A 4x25 setup will be more resilient and more dependable than 1x100. The general gist of what you want here (edit- AT MINIMUM; other networks are probably desirable as...
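As a sketch of the 4x25 idea (NIC names, VLAN tag, and addresses are placeholders, not a recommendation for your exact topology), an LACP bond in the ifupdown2 syntax PVE uses would look roughly like:

```
# sketch only; adjust slaves/VLANs/addresses to your environment
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1 enp2s0f0 enp2s0f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100

auto bond0.100
iface bond0.100 inet static
    address 10.0.100.11/24   # e.g. ceph public network
```

Keep in mind LACP hashes per flow, so any single stream still tops out at 25Gb; the win is that losing one link degrades the bond instead of killing the path.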
Step 1: remove all non-boot drives from your R840s and retain those nodes for compute. Examine your existing network topology, as you will likely want/need to upgrade it.
Step 2: buy 3 smaller and cheaper nodes. Populate each with at least 4 NICs; the fatter the better.
Step 3: repopulate the new nodes with...
This would be really concerning to me. What hardware were these connected to? Those power supplies should have protected the devices on the low-voltage rails from any spike or adverse condition; I'd effectively rule out their use in any meaningful application.
Lots of advice, but no one asked the obvious.
What do you see in dmesg to explain the fault? Obviously only available before rebooting the node, since your logs aren't being written to your read-only file system.
Source benchmarks would be good here. I don't think they will bear out that statement.
There are a number of API calls that only work when called by root, and the API mechanism requires a password for that account.
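For reference, this is roughly what authenticating against the standard PVE ticket endpoint (/api2/json/access/ticket) looks like; the host name and password are placeholders, and this sketch only builds the request rather than sending it:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_ticket_request(host: str, user: str, password: str) -> Request:
    # POST form-encoded credentials to the access/ticket endpoint;
    # the response (not fetched here) carries the auth ticket and CSRF token.
    url = f"https://{host}:8006/api2/json/access/ticket"
    data = urlencode({"username": user, "password": password}).encode()
    return Request(url, data=data, method="POST")

req = build_ticket_request("pve.example.com", "root@pam", "secret")
print(req.full_url)
```

API tokens exist as an alternative for most endpoints, but the root-only calls are exactly where you end up needing the password-based ticket.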
Apologies in advance if I'm being obtuse and not seeing it in the docs.
When using vzdump, I can always restore backups directly from their resulting tarballs. How do I go about restoring backups from a failed PBS instance? Is there a db/config backup mechanism I'm not seeing?