https://github.com/lxc/lxcfs/issues/13
Seems it has finally been done? :)
I hope so. Hopefully all our loadavg issues will soon be gone.
commit: https://github.com/lxc/lxcfs/commit/b04c86523b05f7b3229953d464e6a5feb385c64a
I am busy doing it at this very moment and see no issues in my migration. I'm using Proxmox 3.4.16 for one server, restoring the OpenVZ containers to LXC and restoring the KVM ones into Proxmox 5.2.1. No issues atm.
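In case it helps anyone doing the same, the restores are basically just the standard CLI tools (the VM/CT IDs, archive names and storage name below are placeholders for my setup, adjust to yours):

# restore an OpenVZ dump as an LXC container on the new node (ID/path are examples)
pct restore 101 /var/lib/vz/dump/vzdump-openvz-101.tar.gz --storage local-lvm
# restore a KVM backup on the Proxmox 5.2.1 node (ID/path are examples)
qmrestore /var/lib/vz/dump/vzdump-qemu-201.vma.lzo 201 --storage local-lvm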
I also noticed something weird on our CloudLinux servers, though it doesn't go down. It may help you if you can check it as well.
Install iperf and check the network performance.
I noticed that on the CentOS, Debian and Ubuntu KVM servers on the same node this is the performance we get:
[ ID] Interval...
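For anyone wanting to compare, this is roughly how I test: iperf server on one box, client on the other (the IP and duration are just examples):

# on the machine acting as server
iperf -s
# on the client, run a 30 second test against it
iperf -c 10.0.0.10 -t 30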
We just added an extra processor to each server:
Intel Xeon E5-2620
Now each server has two of these with 64GB of memory.
We have a few VMs on it. For example, one with 14GB memory and 6 cores, with 2 sockets selected (which we assume is now using both processors, as it now states 12 cores are being used). However...
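For reference, this is roughly how that CPU layout gets set from the host CLI (VMID 100 and the exact values are just an example of my setup):

qm set 100 --sockets 2 --cores 6 --memory 14336   # 2 sockets x 6 cores = 12 vCPUs in the guest
qm config 100 | grep -E 'sockets|cores|memory'    # confirm what the VM was actually given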
Getting this with pveperf on RAID 10 (BBU, writeback) with 6x 1TB enterprise SATA HDDs:
root@vz-cpt-1:~# pveperf
CPU BOGOMIPS: 57595.92
REGEX/SECOND: 1879039
HD SIZE: 35.44 GB (/dev/mapper/pve-root)
BUFFERED READS: 60.59 MB/sec
AVERAGE SEEK TIME: 12.64 ms...
Nope, larger ones are still a problem. Load goes sky high. Anyway, for now, while I do more testing, I'm going to stick to LVM on Proxmox 3.4. For me it happens only on Proxmox versions using LVM-thin as the default. I don't use ZFS, only HW RAID controllers on our servers, and every server has the same issue except...
OK, tried 960G and it works fine:
lvcreate -L 960G -n ssd-data vmdata
Logical volume "ssd-data" created.
lvconvert --type thin-pool vmdata/ssd-data
WARNING: Converting logical volume vmdata/ssd-data to thin pool's data volume with metadata wiping.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME...
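For what it's worth, the same thing seems possible in a single step, so you skip the separate lvconvert and its destructive-conversion warning (size is just from my setup):

# create the thin pool directly instead of lvcreate + lvconvert
lvcreate -L 960G --thinpool ssd-data vmdata
lvs vmdata   # check the pool shows up with the expected size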
Thanks. The method here does work: https://pve.proxmox.com/wiki/Storage:_LVM_Thin
It's just the space allocation: how much space must be left free for extents? Not sure how to work that out. If I set it to 800GB it works fine, but I want the full space (976.95G) and I don't want any errors...
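What I think gets around the extent maths is sizing by percentage instead of a fixed -L, so LVM keeps enough of the VG back for the pool metadata. Just my understanding, happy to be corrected:

vgdisplay vmdata | grep -i free                 # see how many extents are actually free
lvcreate -l 95%VG --thinpool ssd-data vmdata    # leave headroom for metadata instead of forcing the full 976.95G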
Trying to create an additional disk but I'm a bit lost on the term "extents".
Here is what I did.
sgdisk -N 1 /dev/sdb
Creating new GPT entries.
The operation has completed successfully.
pvcreate --metadatasize 250k -y -ff /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
vgcreate...
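The rest of what I did was roughly along these lines (the VG, pool and storage names are just the ones I'm using, nothing special, and the pool size is an example):

vgcreate vmdata /dev/sdb1                                        # volume group on the new partition
lvcreate -L 900G --thinpool ssd-data vmdata                      # carve out the thin pool
pvesm add lvmthin ssd-thin --vgname vmdata --thinpool ssd-data   # register it as Proxmox storage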
I couldn't deal with being on an older version of Proxmox, so I set up a new server again, but added a second processor.
I think the problem is solved.
24 x Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz (2 Sockets)
I was only restoring small VMs though, and everything looks better. Going to try larger ones...
One thing I can't get my head around, though: why does the LSI controller have something called CacheCade SSD Caching?
Is that still slower than using the BBU with writeback? I've never enabled it before, even for testing, so I was wondering what that option is and why it would be there if BBU in writeback...
We want to move away from pure SSD as it's costly.
Would this work well?
2 x 1TB SSD disks in RAID 1 (for the Proxmox OS, cPanel OS and MySQL for the KVMs)
4 x 2TB enterprise SATAs in RAID 10 (for the /home directory, where the PHP and static files exist).
Note we have a HW RAID controller with BBU using...
Hi,
I have 2x SSDs and 4x HDDs.
It's currently set up using LVM, with the SSDs in RAID 1 and the 4x SATA drives in RAID 10.
I am using a HW RAID controller with battery backup in writeback.
The question is: can I still create an SSD cache using the SSDs?