do NOT use LXC.
LXC is NOT an OpenVZ alternative.
LXC is NOT an OpenVZ successor.
LXC is not a true container system. - It lacks tons of features and depends on tons of Ubuntu AppArmor crap.
Use KVM if possible. Or do NOT upgrade your Proxmox 3.x installations to 4.x.
LXC is a useless, untested...
Knowing I'm not alone makes me happy, but as always it's not a solution to "our" problems.
What can be done? What is the solution? I don't want to get rid of OpenVZ, but I also want to use Proxmox 4.x.
I cannot find any alternative to this beautiful Proxmox we have...
That is 100% true for me...
OpenVZ has always been the unwanted adopted child of Proxmox from the beginning.
The Proxmox developers wanted to get rid of it from day one.
Now with the new 4.x series they have replaced it with LXC without much further thinking or testing.
There are tons and tons of problems with LXC. OpenVZ has...
this is me :)
Thank you for your kind words,
shukko.com is my notepad so that's why comments are always closed.
I am very happy that you like my tutorial.
I've got around 30+ servers working in that configuration, so yes it is well tested and recommended.
Thank you again :)
I have got a problem with one of my OpenVZ containers.
I've set the disk to 25GB in Proxmox:
as you can see everything is OK.
BUT, a df -h from host node:
as you can see, container 144 shows the full host node disk
and inside the container
again the full host node disk...
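For anyone hitting the same thing: the usual suspect on OpenVZ with simfs storage is the per-container disk quota not being applied, in which case `df` inside the CT just reports the host filesystem size. A minimal sketch, assuming CTID 144 and simfs (the `vzctl`/`vz.conf` options are standard OpenVZ; the exact sizes and the sample `df` line are illustrative):

```shell
# Per-container quotas must be enabled on the host node, otherwise
# simfs exposes the full host filesystem size to the CT:
# grep DISK_QUOTA /etc/vz/vz.conf     # should print DISK_QUOTA=yes
# Re-apply the limit (soft:hard) and persist it in the CT config:
# vzctl set 144 --diskspace 25G:27G --save
# Then check from inside the container:
# vzctl exec 144 df -h /
# Once the quota is active, the CT should see something like this
# (simulated df output line, values illustrative):
sample="/dev/simfs  25G  3.1G  22G  13% /"
size=$(echo "$sample" | awk '{print $2}')
echo "container sees: $size"   # 25G, not the host node disk
```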
Exact same thing happened to me twice.
A UEFI H97 chipset mobo + Proxmox 3.2 does not boot, whatever I tried.
The best way to fix this is to
install Proxmox 3.1
and upgrade it after installation.
Everything works as expected...
This is my latest server.
Opteron 8-core CPU
Adaptec 6805E 256MB
8x Seagate 7200 RPM disks in RAID 10
No BBU but write/read caches active
256K stripe size.
CPU BOGOMIPS: 32002.08
HD SIZE: 19.69 GB (/dev/mapper/pve-root)
Memory is good.
here is more info about it:
The memory is used for read and write cache which improves the performance of the storage. The basic rule when it comes...
I have installed Proxmox in every way you can imagine.
Software RAID 1 with 2 drives, RAID 10 with 4 drives, hardware RAID with 4 drives plus 2 SSDs in a RAID 1 flashcache setup, a single drive without RAID, ZFS on 4 drives, a single drive with NFS under XFS, etc. etc. etc...
If you ask me today...
51 OpenVZ CTs and 6 KVM VMs on 6 clustered servers, all HP DL320 G7 with 4x450 SAS in RAID 10 with BBU, each with 16GB RAM and a Xeon E3-1240,
and a lot of RAM and computing power still left.
This new gear with Proxmox let me get rid of 30 useless desktop tower servers.
KVM for windows OpenVZ for linux...
try these tutorials:
I managed to convert 5 CentOS physical dedicated servers to OpenVZ without any problem, so it works :)
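For anyone curious what such a conversion roughly looks like, here is a sketch of the classic rsync-based physical-to-OpenVZ migration. The CTID 200, the template name, and the source hostname are all illustrative placeholders; the linked tutorials cover the real details:

```shell
# Build the rsync exclude list for pseudo-filesystems that must not be copied:
excludes="/proc /sys /dev /tmp /run"
args=""
for e in $excludes; do args="$args --exclude=$e"; done
echo "rsync options:$args"

# Then on the host node (commented out here, run by hand):
# vzctl create 200 --ostemplate centos-6-x86_64   # empty target CT
# vzctl stop 200
# rsync -aHAX --numeric-ids $args root@physical-host:/ /vz/private/200/
# vzctl start 200
```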
Thanks Udo, but your script is not working; I don't know why.
After checking your script I found a better and easier way to achieve what I need.
The command I needed is "vzlist".
1 single command:
vzlist -o hostname,ctid,ip
gives me the exact result I needed.
If anybody interested you can...
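To turn that output into a plain per-IP list (e.g. for a firewall or inventory script), the columns can be split with awk. A small sketch against simulated vzlist output, since hostname and CTID occupy the first two columns and any remaining columns are addresses (hostnames, CTIDs, and IPs below are illustrative):

```shell
# Simulated output of: vzlist -H -o hostname,ctid,ip
out="web1 101 10.0.0.2
db1 102 10.0.0.5 10.0.0.6"
# Print every IP, one per line (columns 3 and up):
ips=$(echo "$out" | awk '{for (i = 3; i <= NF; i++) print $i}')
echo "$ips"   # 10.0.0.2, 10.0.0.5, 10.0.0.6 on separate lines
```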
I've got a 3-host clustered Proxmox system running with 12 OpenVZ installations.
Each OpenVZ VM has 3-5 IP addresses.
I can find them by entering each VM and clicking the network tab.
But checking all 12 VMs takes time.
Is there an easy way to manage the IPs given to each VM?
I mean how can...
I tried this driver in 32-bit Windows XP with SP3 and it's not working.
As there are no drivers for WinXP inside this driver ISO, I tried the 32-bit Win2003 and Win2008 drivers. Although WinXP installed the driver without any problem, the SCSI virtio device is marked with a yellow mark and refused...
Go to the Proxmox panel:
Virtual Machines > List > click on your VM > open the Network tab.
Below "Network Addresses (venet)" you can see your preconfigured IP address.
Add more IP addresses in that box next to your current IP address, separated by commas:
10.0.0.2, 10.0.0.5, 10.0.0.17, 10.0.0.28
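The same change can also be made from the host node shell; `vzctl set --ipadd` / `--ipdel` are the standard options for venet addresses. A minimal sketch (the CTID and IP below are illustrative):

```shell
ctid=144            # illustrative container ID
ip="10.0.0.5"       # address to add
cmd="vzctl set $ctid --ipadd $ip --save"
echo "$cmd"         # run this on the host node; use --ipdel to remove
```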