Thanks. I added 172.16.88.20 on vmbr1 earlier, but left out the explicit binding to eth0. I want to add a default gateway, since this subnet will be served by a different router and upstream link than the existing 192.168.44.X network -- but I get an error when trying to specify that: Parameter...
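For anyone who hits the same complaint: as far as I can tell, Debian (and the Proxmox GUI on top of it) only accepts one default gateway per host, and the existing one on the 192.168.44.X side is already in place, so a second "gateway" line gets rejected. Below is a rough sketch of what I'm aiming for in /etc/network/interfaces -- the bridge port and the 172.16.88.1 router address are placeholders from my notes, not gospel:

auto vmbr1
iface vmbr1 inet static
        address 172.16.88.20
        netmask 255.255.255.0
        bridge_ports eth0        # whichever NIC (or VLAN sub-interface) carries the 172.16 segment; a NIC can only belong to one bridge
        bridge_stp off
        bridge_fd 0
        # no "gateway" line here -- the host keeps its single default gateway.
        # Anything that must leave via the 172.16 router gets an explicit route instead:
        # post-up ip route add <remote-net> via 172.16.88.1 dev vmbr1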
Added a 172.16 address to the host and rebooted. Now the host can ping itself and the CT, but not the local KVM machine or the router.
Ping myself:
root@pve1:~# ping -c 5 172.16.88.20
PING 172.16.88.20 (172.16.88.20) 56(84) bytes of data.
64 bytes from 172.16.88.20: icmp_req=1 ttl=64...
PVE host is 192.168.X.20
Multiple CTs running fine on that subnet with various addresses.
Adding a new CT with the address 172.16.X.Y
The new CT cannot ping the workstation, the router, or the KVM VMs on the 172.16.X subnet.
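For reference, I believe setting the CT's address boils down to something like this on the host (CT ID is just an example):

vzctl set 201 --ipadd 172.16.X.Y --save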
Do I need to assign something in that range to the PVE host?
thank you
Decided to simplify this down another level by eliminating VLANs from the equation.
Added venet addresses to two CTs (172.16.88.23 and 172.16.88.25) and they can ping each other.
venet0:1 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet...
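For anyone reproducing this, the quickest way I know to run the cross-CT ping test is from the host with vzctl exec (CT IDs here are examples):

vzctl exec 101 ping -c 3 172.16.88.25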
Trying to get VLANs working inside CTs on Proxmox 2.3 and having no luck. In order to remove as many variables as possible, I am not traversing an external switch, merely trying to connect two CTs on the same host to each other across a new VLAN.
First, I added a new veth to each CT (using one...
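To spell out the setup I'm describing (the VLAN tag, bridge name, and CT IDs below are from my test box, so treat them as placeholders, and double-check the --netif_add syntax against the vzctl man page): a VLAN sub-interface plus a dedicated bridge on the host, then a second veth per CT attached to that bridge.

# host side, /etc/network/interfaces (sketch)
auto vmbr88
iface vmbr88 inet manual
        bridge_ports eth0.88     # only needed once traffic has to leave the box; for a pure CT-to-CT test the bridge can have no physical port at all
        bridge_stp off
        bridge_fd 0

# add a second veth to each CT and plug it into the new bridge
vzctl set 101 --netif_add eth1,,,,vmbr88 --save
vzctl set 102 --netif_add eth1,,,,vmbr88 --save

# then, inside each CT, give eth1 an address on the test subnet and try pinging across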
Been bumping up against this on several Ubuntu 12.04 LTS containers:
Setting up procps (1:3.2.8-11ubuntu6.1) ...
start: Job failed to start
invoke-rc.d: initscript procps, action "start" failed.
dpkg: error processing procps (--configure):
subprocess installed post-installation script returned...
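From what I've read, the root cause is that the procps startup job re-applies /etc/sysctl.conf and /etc/sysctl.d/*, and an OpenVZ CT isn't allowed to set most of those kernel keys, so the job exits non-zero and dpkg bails out. The commonly suggested workaround (inside the CT -- I haven't fully verified it yet) is to find and comment out the keys the container can't set, then let dpkg finish:

sysctl -p /etc/sysctl.conf         # the errors show which keys the CT is refused
# comment those keys out in /etc/sysctl.conf and under /etc/sysctl.d/, then:
dpkg --configure -a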
Based on what I learned from the videos I linked to in this thread, I believe ceph has the architecture to pull this off. The FS turned out to be far more difficult to fully distribute than anticipated, but the other bits are working quite nicely. Keep an eye on this one.
This could be huge.
I've always wondered why flash isn't included on motherboards instead of being crammed into a hard drive package and subjected to all the associated I/O complexity and overhead.
SanDisk now owns these guys. First video has a good overview...
Now that we have both ceph and glusterfs options available to us, I wanted to know which was best for particular use cases. While I have yet to definitively answer that, I did come across a truly excellent series of talks from the 2013 linux.conf.au...
I'm working on a deployment for which redundancy and uptime are more important than processing power (the existing testbed runs just fine on a quad-core i5 with 8 GB of RAM). Is there anything out there which would allow a 3-4 machine cluster to be built in a single chassis with fencing hardware...