Hello
Your disk is currently attached as virtio0.
Select this disk and click Detach at the top,
then double-click the unused disk, which will appear at the bottom of the hardware list,
and reattach it using SCSI (the default option).
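If you prefer the shell over the GUI, roughly the same move can be done with qm; the VM ID 100, the storage name local-lvm and the volume name vm-100-disk-0 below are only placeholders, so check qm config on your own VM first:

qm config 100                                 # note which volume sits behind virtio0
qm set 100 --delete virtio0                   # detach it; the volume reappears as unusedX
qm set 100 --scsi0 local-lvm:vm-100-disk-0    # reattach the same volume on the SCSI bus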
Hey, my problem is now solved!
Solution: switch all disks from VirtIO Block to VirtIO SCSI.
Detach each disk from the VM and reattach it using SCSI.
Set up the new boot order for the SCSI disk.
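The boot order can also be fixed from the shell; a minimal sketch, assuming VM ID 100 and that the disk is now scsi0 (the order= form is the Proxmox 6.2+/7.x syntax, the --bootdisk form the older one):

qm set 100 --boot order=scsi0          # Proxmox 6.2 and newer
qm set 100 --boot c --bootdisk scsi0   # legacy syntax on older releases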
Out of 20 KVM virtual servers, all running CentOS 6, only one suffered some minor data loss, and that has been restored.
No problems with...
After upgrading several Proxmox 6 boxes to Proxmox 7.1, I've got exactly the same problem,
but only on CentOS 6 KVM virtual machines.
CentOS 7 and CentOS 8 / AlmaLinux 8 KVM virtual servers are not having this problem.
CentOS 7 KVM virtual servers going into read-only randomly after giving...
Do NOT use LXC.
LXC is NOT an OpenVZ alternative.
LXC is NOT an OpenVZ successor.
LXC is not a true container system; it lacks tons of features and depends on tons of Ubuntu AppArmor crap.
Use KVM if possible, or do NOT upgrade your Proxmox 3.x installations to 4.x.
LXC is a useless, untested...
Not being alone makes me happy, but as always that is not a solution to "our" problems.
What can be done? What is the solution? I don't want to get rid of OpenVZ, but I also want to use Proxmox 4.x.
I cannot find any other alternative to this beautiful Proxmox we have...
That is 100% true for me...
OpenVZ has always been the unwanted adopted child of Proxmox, right from the beginning.
The Proxmox developers have wanted to get rid of it from day one.
Now, with the new 4.x series, they have replaced it with LXC without much further thought or testing.
There are tons and tons of problems with LXC. OpenVZ has...
this is me :)
Thank you for your kind words,
shukko.com is my notepad, which is why comments are always closed.
I am very happy that you like my tutorial.
I've got 30+ servers working in that configuration, so yes, it is well tested and recommended.
Thank you again :)
I have got a problem with one of my OpenVZ containers.
I've set the disk to 25 GB in Proxmox:
/etc/pve/openvz/144.conf
As you can see, everything is OK.
BUT, a df -h from the host node:
as you can see, container 144 shows the full host node disk,
and inside the container,
again the full host node disk...
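For reference, this is roughly how the quota can be checked and re-applied from the host node on an OpenVZ/Proxmox 3.x setup; the 25 GB soft / 27 GB hard values below are only an illustration, not my real limits:

grep DISKSPACE /etc/pve/openvz/144.conf               # the limit Proxmox has written (in 1K blocks)
vzctl exec 144 df -h /                                # what the container actually sees
vzctl set 144 --diskspace 26214400:28311552 --save    # re-apply 25G soft / 27G hard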
The exact same thing happened to me twice.
A UEFI H97 chipset mobo + Proxmox 3.2 does not boot, whatever I tried.
The best way to fix this is to
install Proxmox 3.1
and upgrade it after installation.
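The upgrade itself is just the usual apt run against the Proxmox repository; a minimal sketch, assuming the pve repository is already configured:

apt-get update
apt-get dist-upgrade    # pulls the 3.1 install up to the current 3.x packages
reboot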
Everything works as expected...
This is my latest server.
Opteron 8-core CPU
Adaptec 6805E, 256 MB
8x Seagate 7200 RPM disks in RAID 10
No BBU, but write/read caches active
256K stripe size
64 GB RAM
pveperf
CPU BOGOMIPS: 32002.08
REGEX/SECOND: 856289
HD SIZE: 19.69 GB (/dev/mapper/pve-root)
BUFFERED READS...
Memory is good; here is more info about it:
http://serverfault.com/questions/450242/what-is-the-memory-module-on-a-raid-card-needed-for
The memory is used for read and write cache, which improves the performance of the storage. The basic rule when it comes...
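If you want to double-check what the controller is actually doing, Adaptec's arcconf tool can show the cache settings; the controller number 1 below is an assumption:

arcconf getconfig 1 ld    # per-logical-device info, including read/write cache status
arcconf getconfig 1 ad    # adapter-level info (battery/flash backup status etc.)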
I have installed Proxmox in every way you can imagine:
software RAID 1 with 2 drives, RAID 10 with 4 drives, hardware RAID with 4 drives plus 2 SSDs in a RAID 1 flash cache, a single drive without RAID, ZFS on 4 drives, a single drive with NFS on XFS, etc. etc. etc...
If you ask me today...
51 OpenVZ CTs and 6 KVM VMs on 6 clustered servers, all HP DL320 G7 with 4x 450 GB SAS in RAID 10 with BBU, each with 16 GB RAM and a Xeon E3-1240,
and a lot of RAM and computing power still left.
This new gear with Proxmox let me get rid of 30 useless desktop tower servers.
KVM for Windows, OpenVZ for Linux...