Thanks, but that's exactly where I started before posting the question to the forum and, to be honest, it didn't help me much. I eventually got the issue resolved using Shorewall.
Hello,
First, I am aware that there are numerous posts about network bridges, but they all seem to have different ideas and I wanted to be sure that I get the correct procedure for my installation.
I have three (3) Proxmox 3.1 KVM Hypervisor nodes, each with 2 NICs, and there are a number of VMs...
OK... I followed http://pve.proxmox.com/wiki/DRBD, got DRBD running, and was able to create a VM and do an offline migration in about 1 sec. However, when I attempted an online migration I got some errors.
Jun 28 12:26:35 starting migration of VM 100 to node 'node1'...
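For reference, this is roughly how I ran the test (the /proc/drbd check is just my own sanity check; as far as I understand, the resource needs to be Primary/Primary and UpToDate on both nodes for live migration to work):

# check the DRBD resource state on both nodes before migrating
cat /proc/drbd                  # expect Primary/Primary and UpToDate/UpToDate
# then attempt the live migration from the source node
qm migrate 100 node1 --online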
Hi Spirit, I was aware that Ceph needed 3 hosts minimum but wasn't aware it was the same for Sheepdog. So would you agree that it looks like the most feasible option would be NFS Storage with DRBD?
Hello,
I have a simple non-production 2-Node Cluster setup with offline migration working fine at the moment. Each Node has two separate 500GB hard disks (NO RAID); the first currently has Proxmox installed and the 2nd is empty at the moment. Each node will only have about 3 - 4 VMs running at...
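For context, if I end up putting DRBD on the second disk, the resource config I would start from is something like the sketch below (the node names, /dev/sdb and the 10.0.0.x addresses are just placeholders):

# /etc/drbd.d/r0.res (sketch only)
resource r0 {
        protocol C;
        net {
                allow-two-primaries;    # per the DRBD wiki, needed for live migration
        }
        on nodea {
                device    /dev/drbd0;
                disk      /dev/sdb;
                address   10.0.0.1:7788;
                meta-disk internal;
        }
        on nodeb {
                device    /dev/drbd0;
                disk      /dev/sdb;
                address   10.0.0.2:7788;
                meta-disk internal;
        }
}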
OK, I got it working via the Private LAN.
Just in case anyone wants to know:
1) I deleted the existing Cluster as per http://forum.proxmox.com/threads/11398-Full-remove-Cluster-feature and rebooted the two Nodes.
2) Then swapped the Public IP for the Private IP in /etc/hosts on both Nodes (roughly as in the sketch below). So...
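A rough sketch of what the /etc/hosts entries look like afterwards (the 10.0.0.x addresses and hostnames here are placeholders, not my real ones):

# /etc/hosts on both Nodes -- the node hostnames now resolve to the private LAN IPs
127.0.0.1    localhost
10.0.0.1     node1.example.local node1
10.0.0.2     node2.example.local node2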
Hello,
I have two Proxmox Nodes:
eth0 (Public): 100Mbps
eth1 (Private): 1Gbps
When I create the cluster it defaults to the Public IP, so I would like to know how to create the cluster so that online / offline migration and general communication between the two Nodes occur on the Private LAN...
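My guess (please correct me) is that it comes down to making each node's hostname resolve to its eth1 address before creating the cluster, since pvecm seems to use whatever IP the hostname resolves to. Something along these lines, with 10.0.0.1 as a placeholder for node1's private IP and "mycluster" just an example name:

# after pointing the hostnames at the private IPs in /etc/hosts
pvecm create mycluster        # on node1
pvecm add 10.0.0.1            # on node2, joining via node1's private IP
pvecm status                  # verify membership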
OK, I have removed the RAID controller and am using the two disks as standalone; now the fsync/sec is much better:
root@nodea:~# pveperf
CPU BOGOMIPS: 54400.72
REGEX/SECOND: 1639976
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 127.68 MB/sec
AVERAGE SEEK TIME...
mmenaz,
You are absolutely correct! Somehow in my mind I was thinking I should be getting ~100MB/s on a 1Gbps NIC, but I keep forgetting the uplink is only 100Mbps, hence the 10-11MB/s transfer limitation.
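For anyone else hitting this, the numbers line up roughly like this:

100Mbps uplink ≈ 12.5 MB/s theoretical, ~10-11 MB/s in practice
20GB image / ~11 MB/s ≈ 1,860s ≈ 31 min -- which matches the ~30 min migrations I was seeing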
Is there anything that can be done to improve the fsync/sec?
Regards,
Samwayne
Hello,
I have a two-Node Cluster setup and am trying to test offline migration of a 20G KVM VM, but it seems the migration is taking about 30 mins to complete whether I create the VM with a RAW or a QCOW2 image. I tried googling and checking through the mailing list and the forum but I haven't come up...
Ok I was following these instructions:
http://wiki.hetzner.de/index.php/Proxmox_VE/en
http://dobrev.co/proxmox-on-a-dedicated-server-static-next-hop-additional-ips-from-different-networks-2/
I guess I will delete the forwarding and run some tests to confirm the IPs are routed...
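The rough test plan I have in mind (the 203.0.113.x address below is just a placeholder for one of my additional IPs):

# on the node, after removing the forwarding rules from the guides
sysctl net.ipv4.ip_forward      # see what it is currently set to
ip route show                   # check which routes the guides added
# from an outside machine, trace one of the additional IPs to see where it stops
traceroute 203.0.113.10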
Hi Marco,
Thanks, this is what I was actually trying to confirm. All the VMs will have public IPs and basically I figured I needed to add the routable IP range to both Nodes (correct me if I am wrong; a rough sketch of what I mean follows the list), for example:
1) First I need to add net.ipv4.ip_forward=1 to...
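Roughly what I have in mind, assuming the additional IPs are a routed /29 (the 203.0.113.0/29 range and vmbr0 are placeholders for my actual allocation and bridge):

# on both Nodes: enable forwarding persistently
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
# route the additional public range to the bridge the VMs are attached to
ip route add 203.0.113.0/29 dev vmbr0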
Hi Tom,
Thanks for your prompt response.
Yes, I am aware that a minimum of 3 Nodes is required for true HA; however, this setup is only for testing at the moment and to "get my hands dirty". I really want to test live migration especially, so please, if you could also answer question two that...
Hello All,
I am a new member who has been following this project, as well as a couple of others including OpenStack and OpenNebula, for more than a year, but decided that Proxmox was more suitable for our needs. I have played with the older 1.X and 2.X versions briefly via VMware Fusion on my...