In my (admittedly brief) experience with HA, that's not possible.
Before another node will bring up the VM, it has to be 100% sure the original node is truly down, to prevent data corruption. Because the node must be confirmed down before a secondary node will bring up the VM, there is no way...
One thing you can try before you go to the lengths of blowing away your proxmox install: when you boot proxmox, the grub loader quite helpfully (thank you, devs!) includes a memtest image. Try booting into memtest and see what it finds.
Also, are you 100% sure the BIOS sees all 8GB? What...
Have you had other operating systems installed on this hardware? How much RAM did they report being able to use?
This sounds an awful lot like a hardware problem.
What you want to do is adjust the next hop with policy routing.
I believe Linux even has support for discrete routing tables.
This page seems to describe what to do pretty well: http://kewl.lu/articles/policy-routing/
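For what it's worth, here's a minimal sketch of what that page describes, using iproute2's separate routing tables. The table name, interface, and addresses are just examples for a hypothetical second uplink — adjust to your setup:

```shell
# Create a named routing table for the second uplink
echo "100 isp2" >> /etc/iproute2/rt_tables

# Default route for that table goes via the second provider's gateway
ip route add default via 192.168.2.1 dev eth1 table isp2

# Policy rule: traffic sourced from eth1's address uses the isp2 table
ip rule add from 192.168.2.10/32 table isp2

# Flush the route cache so the new rules take effect
ip route flush cache
```

The key idea is that the kernel consults the rule list first, so replies leaving the second interface get their own next hop instead of the main table's default.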
Is there a "best solution" for drbd volumes paired with containers, to enable online migration? Long term I would be converting to shared storage, but for now drbd makes more sense.
I've tried GFS, but ran into issues with live migration.
I'm starting to suspect that in a drbd environment...
The attitude isn't appreciated. I assumed the updates to the other node would happen semi-asynchronously, and thus that write speeds on the local node would not be constrained by the bandwidth of the connecting link. For example, I've learned people will go to the lengths of having to do stupid things like...
I have this same problem. Would love to get an answer, this is kind of frustrating. I'm tempted to just make everything a KVM, but it would be nice to have this working.
I see, that's unfortunate... I would think the captchas would be able to stop that, but I guess even that is problematic now. Thanks for fixing my post.
What is the deal with this forum?
I made a somewhat lengthy post updating my thread yesterday, detailing some useful information about tuning the network and drbd. And it's not showing up. This is the thread: New cluster hardware details, suggestions for edits to the wiki
I know new users are...
Ok, I finally have an opportunity to update with my progress.
I've learned quite a bit about drbd this week, and I have some knowledge to share.
First and most important thing to understand, and I'm not sure why this isn't presented in BIG BOLD LETTERS somewhere in the wiki, drbd in 'c' mode...
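To illustrate, here's a minimal drbd resource config sketch showing where protocol C is set. The resource name, hostnames, devices, and addresses are examples, not anyone's actual config:

```shell
# /etc/drbd.d/r0.res -- illustrative sketch only
resource r0 {
    protocol C;            # fully synchronous: a local write completes only
                           # after the peer confirms it, so the replication
                           # link caps local write throughput
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```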
Remember that bridging to vmbr0 isn't the same as binding an address to it. So the proxmox host can't listen on the IPs of your VMs; bridged VM traffic just passes through.
If I were you, I would just restrict all access to the IPs bound to the proxmox host (as opposed to specific ports), allowing access only from specific...
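A rough sketch of that approach with iptables, matching on the host-bound address so bridged VM traffic is unaffected. The host IP and admin subnet below are made-up examples:

```shell
# Allow the admin subnet to reach the host's management IP
iptables -A INPUT -d 192.168.1.5 -s 192.168.1.0/24 -j ACCEPT

# Keep replies to connections the host itself initiated working
iptables -A INPUT -d 192.168.1.5 -m state --state ESTABLISHED,RELATED -j ACCEPT

# Drop everything else aimed at the host's own address
iptables -A INPUT -d 192.168.1.5 -j DROP
```

Because these rules only match the destination address bound to the host, traffic bridged to the VMs (which traverses FORWARD, not INPUT) is left alone.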
I'm trying to determine the best way to benchmark the I/O of my proxmox servers, because let's face it... I/O is the bottleneck until you can justify the move to a real iscsi implementation with racks of spindles at your disposal.
I have two servers that are doing drbd, RAID 10 with 3.6 TB...
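In case it helps anyone, one common way to benchmark this is fio with direct I/O so the page cache doesn't flatter the numbers. The file path and sizes here are just examples:

```shell
# Random 4k writes, direct I/O, a few parallel jobs with queue depth
fio --name=randwrite --filename=/var/lib/vz/fio.test \
    --rw=randwrite --bs=4k --size=2G --numjobs=4 \
    --iodepth=32 --ioengine=libaio --direct=1 --group_reporting

# Sequential read throughput with large blocks
fio --name=seqread --filename=/var/lib/vz/fio.test \
    --rw=read --bs=1M --size=2G --direct=1
```

The random 4k write numbers are usually the ones that hurt on drbd in protocol C, since every write waits on the replication link.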
We're in the process of setting up a Proxmox cluster, and I wanted to share my thoughts about the hardware we chose, and suggest a few improvements to the wiki.
For our cluster, in the short term, we've decided to do three nodes, with one of those being a simple FreeNAS quorum and backup...