Hello everyone,
I have tried oVirt and researched VMware, and I am settling on Proxmox. Here is what I have for hardware:
- 2x Intel Xeon Silver 4214
- Cores: 2x 12 @ 2.20 GHz (dual 12-core)
- RAM: 128 GB
- HDDs: 6x 1 TB SATA 7.2K RPM, hardware RAID
- IPMI/KVM
- 2x 1 Gbps NICs, 1 public, 1 private (could upgrade to 10 Gbps)
Here are my objectives:
- A single management plane for all hosts (1 -> x).
- No clustered file system. VM data storage will always stay on the same host; I will not need VM migration.
- Install Proxmox on the same physical RAID device as the VM storage (there is no separate boot drive, just the RAID array).
- A common network between all VMs over the private physical network.
- A way to route ports 80/443 from the physical NIC to a VM (an HAProxy load balancer that routes traffic to the VMs); rough sketch after this list.
- The ability to use internal DNS to look up the private DHCP IP address of each VM, so HAProxy can route based on hostnames (see the HAProxy sketch below).
- An HTTP(S) API to deploy and control VMs (see the API sketch below).
- (Optional) A way to route a unique port from the physical IP to a VM IP and port, to provide SSH access to a VM from the outside network. This would look like: ssh to public-ip:2000, routed to 10.10.10.2:22 on the VM's OpenSSH server. Not a deal breaker; covered in the port-forwarding sketch below.
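For the 80/443 routing and the optional SSH forward, here is roughly what I have in mind on the Proxmox host itself. This is just a sketch: the public NIC name (eno1), the private subnet (10.10.10.0/24), and the HAProxy VM address (10.10.10.2) are all placeholders for my setup.

    # allow the host to forward between the public NIC and the private bridge
    sysctl -w net.ipv4.ip_forward=1

    # send public 80/443 to the HAProxy VM on the private network
    iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 80  -j DNAT --to-destination 10.10.10.2:80
    iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 443 -j DNAT --to-destination 10.10.10.2:443

    # optional: public port 2000 -> OpenSSH on a VM (public-ip:2000 -> 10.10.10.2:22)
    iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2000 -j DNAT --to-destination 10.10.10.2:22

    # let the private VMs reply/reach out through the host's public address
    iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eno1 -j MASQUERADE

Is plain iptables DNAT on the host the usual way to do this, or is there a more Proxmox-native approach?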
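For the hostname-based routing, I am picturing an haproxy.cfg fragment like the one below inside the HAProxy VM. The DNS server address (10.10.10.1) and the hostnames (app1.example.com, app1.internal, etc.) are made-up examples; the resolvers section is what I would use so HAProxy keeps re-resolving the VMs' DHCP-assigned addresses.

    resolvers internal
        nameserver dns1 10.10.10.1:53
        hold valid 10s

    frontend http_in
        bind *:80
        acl is_app1 hdr(host) -i app1.example.com
        acl is_app2 hdr(host) -i app2.example.com
        use_backend app1_be if is_app1
        use_backend app2_be if is_app2

    backend app1_be
        server app1 app1.internal:80 check resolvers internal

    backend app2_be
        server app2 app2.internal:80 check resolvers internal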
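On the API side, my understanding is that Proxmox exposes an HTTPS API on port 8006. Below is a minimal Python sketch of how I would drive it; the hostname, node name, VM id, and credentials are placeholders, and I would use a real certificate instead of verify=False in production.

    import requests

    host = "https://pve.example.com:8006"  # placeholder host
    node = "pve1"                          # placeholder node name
    vmid = 100                             # placeholder VM id

    # get an auth ticket and CSRF token
    auth = requests.post(
        f"{host}/api2/json/access/ticket",
        data={"username": "root@pam", "password": "secret"},
        verify=False,  # self-signed cert on a fresh install
    ).json()["data"]

    session = requests.Session()
    session.cookies.set("PVEAuthCookie", auth["ticket"])
    session.headers["CSRFPreventionToken"] = auth["CSRFPreventionToken"]
    session.verify = False

    # example: start a VM (write operations need the CSRF header)
    r = session.post(f"{host}/api2/json/nodes/{node}/qemu/{vmid}/status/start")
    print(r.json())

Does that match how people normally script against the API, or is there a preferred client library?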
I think most of this is doable, but I want to be sure before I invest a ton of time like I did with oVirt (I never did get it working).
Any thoughts or follow-up questions are welcome.
Brad
P.S.
I am wondering how much resource usage would differ between Kubernetes containers and VMs. I am currently running on Kubernetes, and I know my CPU and RAM usage. By what factor would that resource requirement increase by moving to VMs?