Hi,
I have two /29 subnets in the datacenter. The host interface vmbr0 (bridged to eth0) is configured on the first subnet. Can I configure an LXC on the second subnet?
Should I create a second bridge on the host, with an eth0:1 alias in the second subnet, for this to work?
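To be concrete, this is roughly what I mean (all addresses are placeholders, and I'm not sure an alias can be bridged like this, which is part of my question):

```
# sketch of the idea -- not a tested config, addresses are placeholders
auto vmbr0
iface vmbr0 inet static
    address 198.51.100.2/29     # first /29
    gateway 198.51.100.1
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 198.51.100.10/29    # second /29
    bridge-ports eth0:1         # can an alias be a bridge port at all?
    bridge-stp off
    bridge-fd 0
```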
thanks
@ScottR Did you resolve this issue? I am having the same problem: I cannot get the noVNC console working for LXCs running on any node other than the one serving the browser GUI.
Thanks LnxBil for the answers. The requirement is to set up a two-node Proxmox cluster with live/quick migration of VMs for DR. Each node has dual E5-2630v3 CPUs, 128GB RAM, 4x 800GB Intel S3510 SSDs and a dual-port Intel X540 10GbE NIC. Storage traffic goes over a dedicated direct connection between the nodes.
Thanks LnxBil. I understand the ZFS and DRBD overhead. I am testing it to make sure that it meets my requirements. Any other implications you can think of other than I/O overhead?
ZFS is for protection against data corruption, plus compression and block-level dedup. DRBD or GlusterFS is for primary-primary replication to the second node so that VMs and containers can be migrated (live or after shutdown). DRBD is supposed to handle large files better than GlusterFS.
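For the DRBD layer, the kind of resource definition I have in mind is something like this (hostnames, addresses and the zvol path are placeholders; the dual-primary options follow the DRBD 8.4 documentation):

```
# /etc/drbd.d/r0.res -- sketch only; names and addresses are placeholders
resource r0 {
    protocol C;                  # synchronous replication, required for dual-primary
    net {
        allow-two-primaries yes;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    on pve1 {
        device    /dev/drbd0;
        disk      /dev/zvol/rpool/drbd0;   # ZFS zvol as the backing device
        address   10.10.10.1:7788;         # dedicated 10GbE link
        meta-disk internal;
    }
    on pve2 {
        device    /dev/drbd0;
        disk      /dev/zvol/rpool/drbd0;
        address   10.10.10.2:7788;
        meta-disk internal;
    }
}
```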
Thanks for answering my persistent queries. I certainly understand that you are recommending against using ZFS underneath a clustering/replication engine. Can you please indicate the issues I might run into with this setup? I have not seen this warning anywhere else in my research, and people...
Thanks LnxBil. I am setting up primary-primary sync on a two-node cluster so that VMs can be live-migrated, or at least shut down on one node and started on the other. Storage sync will run over a dedicated 10GbE network.
Is ZFS async replication reliable for immediate DR? Most replication strategies...
Thanks LnxBil. I was thinking of using ZFS beneath the DRBD block device. What implications does this configuration have? Can you please give more details?
I am trying to set up a two-node Proxmox cluster with ZFS and DRBD. I read a couple of comments on the forum about configuring one zvol per VM. Is that a must? If so, how does it work for LXC?
Also, is there an official guide/wiki for setting up ZFS + DRBD on Proxmox?
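If one zvol per VM is indeed the way to go, I assume the per-VM setup on each node would look roughly like this (pool name, size and resource name are made up for illustration):

```
# sketch, assuming one zvol + one DRBD resource per guest -- names are placeholders
zfs create -V 32G rpool/vm-101-drbd   # backing zvol for VM 101
drbdadm create-md vm101               # after adding a matching vm101.res file
drbdadm up vm101
# once both nodes are connected and the initial sync is done:
drbdadm primary vm101
```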
thanks
Thank you Yanick!!! It worked. Appreciate the response and the detailed explanation. It is a great relief after toiling for several days to get this working.
I used the udev script from the link you provided above to set the size to 1MB, and it is working.
Thanks Chris. I tried 4.2, 4.1 and 3.4, and all of them give the same error. Trying out 4.0 now. I also tried swapping the Intel card for a Broadcom one, and it gives the same error. I will post the results with 4.0 later today.
Chris, I am running into the same errors you posted earlier (see below) on Dell FX2 servers connected to a Dell Compellent system. Proxmox is the latest 4.2. The servers have four 'Intel Corporation Ethernet Controller X710 for 10GbE backplane' NICs.
Your first post said you got everything working with...