I have a cluster of 6 nodes, each containing 8x Intel SSDSC2BB016T7R drives, for a total of 48 OSDs. Each node has 384GB RAM and 40 logical CPUs. For some reason, this cluster's performance is really low compared to other deployments. Deploying the GitLab template took well over 5 minutes...
Hello,
Some time ago we switched from VMware to Proxmox. Everything was smooth and fine at the beginning and we were very happy. But these days we have about 120 VMs connected via iSCSI to network storage that consists of two Dell PSM4110XS units. For partitioning we use shared LVM. This type of...
Here is the setup I am trying to achieve:
I have an LSI 2208 RAID controller with cache and BBU, and 4 SSD disks connected to it. I cannot change the firmware, so HBA mode is not an option, and I also don't want to lose the cache and BBU. I would like to use a RAID10 setup across these 4 SSDs.
The problem is that the 2208 not...
Changing sched_autogroup_enabled from 1 to 0 makes a HUGE difference in performance on busy Proxmox hosts
It also helps to modify sched_migration_cost_ns.
I've tested this on Proxmox 4.x and 5.x:
echo 5000000 > /proc/sys/kernel/sched_migration_cost_ns
echo 0 >...
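For reference, the same two scheduler knobs can be set persistently via sysctl instead of raw echoes into /proc. This is only a sketch: the 5000000 value is the one quoted above, the file name is arbitrary, and whether these knobs are exposed at all depends on the kernel version (they exist on the 4.x kernels that Proxmox 4.x/5.x ship).

```ini
; /etc/sysctl.d/90-sched-tuning.conf (hypothetical file name)
; Disable automatic task grouping, reported above to help busy Proxmox hosts
kernel.sched_autogroup_enabled = 0
; Raise the migration cost so tasks are moved between CPUs less aggressively
kernel.sched_migration_cost_ns = 5000000
```

Apply it with `sysctl --system` (or reboot); verify with `sysctl kernel.sched_autogroup_enabled`.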
Hi guys,
I have been struggling since the 4.4-12 to 5.1-36 upgrade (to be fair, it's a new deploy) due to terrible I/O performance via iSCSI (though after some testing, NFS also seems affected). The problem doesn't always show up, but I have been able to reproduce it in this manner:
VM just booted up with...
I am using the latest Proxmox 4.4-13, a 3-node cluster setup with a FreeNAS NAS box providing NFS and SMB shares
NFS shares for VM disks (mounted on Proxmox)
On 2 different servers, running 2 different OSes (win 7 and win 10) with different file formats - RAW vs qcow2 and even different controllers (SATA vs...
Hi
we have installed Proxmox VE 4.4 with RAID 1 (10TB) and cache enabled
we need to move KVM-based raw VM files from another server to Proxmox, so we copied those VMs to an external HDD, connected it through USB 3.0, and mounted it on the Proxmox server.
We created the VMs through the GUI and I started replacing...
Hello,
we are using Proxmox with local storage on a small 3-node cluster. Now we are planning to set up a 4-node cluster with network storage (Ceph), live migration, HA, and snapshot functionality. We already have some hardware lying around from dev projects. Now I would like to get some ideas...
The setup is 3 clustered Proxmox nodes for compute and 3 clustered Ceph storage nodes:
ceph01: 8x 150GB SSDs (1 used for OS, 7 for storage)
ceph02: 8x 150GB SSDs (1 used for OS, 7 for storage)
ceph03: 8x 250GB SSDs (1 used for OS, 7 for storage)
When I create a VM on proxmox node using ceph storage, I...
I am having difficulty when adding a new disk to a VM: the storage selector dropdown is timing out. I assume this is due to I/O degradation, so the dropdown can't get the list of available NFS or iSCSI mounts.
But I can't ascertain the cause.
The NAS is connected via quad gigabit bond and the...
Hi all,
I have done some IO tests on proxmox 4.2 with iothread. Results below:
fio --description="Emulation of Intel IOmeter File Server Access Pattern"
--name=iometer --bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
--rw=randrw --rwmixread=80 --direct=1 --size=4g --ioengine=libaio...
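The command line above maps directly onto a fio job file, which is easier to rerun and compare across hosts. A sketch, using only the parameters actually visible in the post; the command is cut off after --ioengine=libaio, so anything beyond that (e.g. iodepth) is deliberately left out rather than guessed:

```ini
; iometer.fio -- approximation of the truncated command above
[iometer]
description=Emulation of Intel IOmeter File Server Access Pattern
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=1
size=4g
ioengine=libaio
```

Run it with `fio iometer.fio`.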
Upgraded 4 of 7 nodes today, only to discover that two VMs in particular (Palo Alto - VM200 FWs) use much more CPU than on PVE 4.1 :(
Pic 1 here shows VM usage last 24 hour and the jump when migrated onto 4.2.22 around 17:00, the last high jump is me introducing more load on the FW...
In Proxmox 3.0, my daily snapshot-mode backup took 37 minutes to create a 78 GB backup file.
After installing Proxmox 4.1, the same virtual machine now takes an hour and 40 minutes (1:40:00) to create the same 78 GB backup file. When I back up the same virtual machine using a stop-mode backup, it...
7 mechanical disks in each node using xfs
3 nodes so 21 OSDs total
I've started moving journals to SSD, which is only helping write performance.
The Ceph nodes are still running Proxmox 3.x.
I have client nodes running 4.x and 3.x, both have the same issue.
Using 10G IPoIB, separate public/private...