This is confusing: you should call it 5.1.1 (or 5.2). Now the 5.1 documentation is wrong, and your press release is wrong too. And all the articles about the first release of Proxmox 5.1 are wrong as well :(
Then your third "5.1" :( should be 5.1.2 (or 5.3)
One more thing... the roadmap is also wrong :(
Just use the Proxmox VE ISO to install with its default partitioning.
Put Proxmox VE on 2 x 320 GB SSDs in RAID 1. The 14 HDDs are for the VPSes, in RAID 10 mode. The other disks can be removed from the server to keep it simple.
After running fdisk on /dev/sdb, you must create a file system there:
# mkfs.ext4 /dev/sdb1
Now mounting it:
# mount /dev/sdb1 /media/RAID10/
Go to the Proxmox VE web UI to add your new storage.
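A minimal sketch of the whole sequence, assuming the RAID 10 array shows up as /dev/sdb, you want the mount to survive reboots, and you prefer the `pvesm` CLI over the web UI (the storage name "RAID10" and the content types are just examples):

```shell
# Format the partition created with fdisk (DESTRUCTIVE: double-check the device)
mkfs.ext4 /dev/sdb1

# Create the mount point and mount it
mkdir -p /media/RAID10
mount /dev/sdb1 /media/RAID10

# Make the mount persistent across reboots
echo '/dev/sdb1 /media/RAID10 ext4 defaults 0 2' >> /etc/fstab

# Register it as a directory storage so Proxmox VE can use it
pvesm add dir RAID10 --path /media/RAID10 --content images,rootdir
```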
I think this is a Proxmox VE 5 bug.
On version 4, KSM was automatically active and started sharing once RAM usage reached 80%.
# systemctl status ksmtuned
Unit ksmtuned.service could not be found.
#
# apt-get install ksm*
Reading package lists... Done
Building dependency tree
Reading state information...
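As far as I know, on Proxmox VE the ksmtuned service ships inside the ksm-control-daemon package rather than a package literally named "ksmtuned", which is why the wildcard install is needed. A sketch of installing it and checking whether KSM is actually sharing pages (assuming a stock PVE 5 host):

```shell
# Install the KSM tuning daemon and start it
apt-get install ksm-control-daemon
systemctl enable --now ksmtuned

# 1 means the kernel's KSM scanner is running
cat /sys/kernel/mm/ksm/run

# Number of pages currently shared; greater than 0 once KSM kicks in
cat /sys/kernel/mm/ksm/pages_sharing
```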
If the CT was imported from OpenVZ, then you must set the network configuration manually to make LXC work.
You must:
Add a MAC address, which OpenVZ doesn't require
Use CIDR notation for the IP. In OpenVZ you write 192.168.1.1, but in LXC it must be 192.168.1.1/24, for example
Add a gateway, which OpenVZ doesn't require
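All three settings can be applied in one go with `pct`, Proxmox's container CLI (the vmid 101, bridge vmbr0, MAC address, and gateway below are placeholders, not values from the imported CT):

```shell
# Give the imported CT a NIC with an explicit MAC, a CIDR IP, and a gateway
pct set 101 -net0 name=eth0,bridge=vmbr0,hwaddr=DE:AD:BE:EF:10:01,ip=192.168.1.1/24,gw=192.168.1.254
```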
Yes, it's possible. Just copy the image to the local disks, then use the API to create, suspend, or destroy any VM.
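A rough sketch using `pvesh`, the CLI wrapper around the Proxmox REST API (the node name "pve1", vmid 100, and VM settings are assumptions; the disk image copied earlier would then be placed under the VM's image directory):

```shell
# Create a VM definition through the API
pvesh create /nodes/pve1/qemu -vmid 100 -name golden-clone -memory 2048 -net0 virtio,bridge=vmbr0

# Suspend it
pvesh create /nodes/pve1/qemu/100/status/suspend

# Destroy it
pvesh delete /nodes/pve1/qemu/100
```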
A VM with the same ID is not possible on the SAME NODE, so it can never be created or executed twice.
If a node dies, then all VMs inside the dead node die too.
The fastest and safest approach is to copy your GOLDEN image...