That's a fair point.
Simple. Plenty of people want a simple container service offering.
Proxmox serving LXC is fantastic for this, and it works well installed inside a KVM guest to deliver it.
Take for example a scenario where you 'sell' a Proxmox/LXC instance to a customer. Say.. 1TB storage and 32GB RAM...
Ok.. KVM within KVM is never a good idea. It is very slow!
I know you run LXC directly on the host. What I am suggesting is something slightly different.
Scenario as follows:
- install Proxmox on an Amazon KVM server.
- then use it to serve LXC containers.
What this does is to open up an...
Would it be possible to add a feature in to an upcoming release of proxmox?
Essentially, Proxmox makes a fantastic platform for serving LXC containers when running inside a KVM server.
The model is simple:
- install Proxmox into a virtual server.
- use Proxmox to run LXC containers.
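Once Proxmox is up inside the VM, provisioning a container is just the normal pct workflow. A minimal sketch of what that could look like (the template name, storage names and sizes below are placeholders, not a tested recipe):

# fetch a container template (template name is illustrative)
pveam update
pveam download local debian-12-standard_12.2-1_amd64.tar.zst

# create and start an unprivileged LXC container for the customer
pct create 101 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname customer1 --memory 2048 --rootfs local-lvm:32 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp --unprivileged 1
pct start 101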
I am wondering if there is any possibility of a fix for the kernel on version 3.4?
I have around 20-odd 3.4 machines which are rather difficult to move across due to the number of OpenVZ containers!
I have found a solution using this:
Essentially I need to force a rescan of the PCI bus from the VyOS node:
sudo chmod 0777 /sys/bus/pci/rescan
sudo echo 1 > /sys/bus/pci/rescan
(The chmod is only needed because the shell performs the redirection as the unprivileged user before sudo kicks in; echo 1 | sudo tee /sys/bus/pci/rescan does the same thing in one step without loosening the permissions.)
So on my default setup I have the following in the interfaces file:
iface vmbr0 inet static
So what we are...
So what you are saying is that if I take a virtual server and, inside the guest, assign an eth0.10 device,
the eth0.10 will magically pass through vmbr0 on the host and see the network?
I was under the impression that the bridge would drop VLAN traffic?
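To make the question concrete: the guest side would just be a tagged sub-interface, something along these lines (interface names and addresses are only illustrative):

# inside the guest: create a VLAN 10 sub-interface on eth0 and address it
ip link add link eth0 name eth0.10 type vlan id 10
ip link set eth0.10 up
ip addr add 10.0.10.2/24 dev eth0.10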
I am working on a solution where it would be advantageous to allow a guest machine to assign its own VLANs.
Essentially I will have a VyOS/Vyatta box running as a virtual router. Ideally the virtual router will be able to set up a new VLAN on the fly.
So I guess what...
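To illustrate the 'on the fly' part from the router's side, the VyOS commands would be roughly the following (the VLAN id and addressing are just examples):

configure
set interfaces ethernet eth0 vif 10 address 10.0.10.1/24
commit
save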
I have used Supermicro; no major issues on that front.
My only thoughts on your config at present:
Performance of any system is critical. It all comes down to disk IO.
iSCSI can look good, but be aware that you are limited by your Ethernet speed. You MUST go down the route of jumbo...
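In practice that means setting the MTU consistently on every hop of the storage network; a rough sketch for the node's /etc/network/interfaces (interface name and addressing are examples, and the switch ports and iSCSI target need the same MTU):

# dedicated storage NIC with jumbo frames
auto eth1
iface eth1 inet static
    address 10.10.10.11
    netmask 255.255.255.0
    mtu 9000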
Re: Updates for Proxmox VE 2.2 - including QEMU 1.3
Just been reading the QEMU 1.3 changelog.
This feature jumped out at me.
"A new block job is supported: live disk mirroring (also known as "storage migration") moves data from an image to another. A new command "block-job-complete" is...
IMHO these migrations are not always a good idea.
The reason you want to migrate is normally because a node in the cluster is overloaded.
Migrating adds load! So you actually make things worse by trying to fix it!
To be honest.. I have done this sort of thing before. I think that in raw performance vs money spent you are better off putting in a 2U server: 8-12 SAS drives, loads of RAM, and a dual quad-core.
You will be able to load on plenty of machines, and by and large outperform the Atom CPU...