We have a server cluster consisting of 3 nodes, each with an EPYC 7402P 24-core CPU, 6 Intel enterprise SSDs (4620), and 256GB RAM. We also have a 10Gbit NIC for Ceph.
SSD performance on its own is fine, jumbo frames are enabled, and iperf gives reasonable results in terms of...
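For reference, a quick way to confirm that jumbo frames actually work end-to-end on the Ceph network (interface name and peer IP below are placeholders, not taken from the setup above):
# check the MTU on the Ceph-facing interface
ip link show ens18 | grep mtu
# 8972 bytes of ICMP payload + 28 bytes of IP/ICMP headers = 9000; -M do forbids fragmentation,
# so this only succeeds if every hop really passes a 9000-byte MTU
ping -M do -s 8972 -c 3 10.10.10.2
# raw TCP throughput to the peer node (iperf3 -s must be running there)
iperf3 -c 10.10.10.2 -t 30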
I have a simple question which I would like to share because I'm interested in your point of view.
On a Proxmox server I have fast storage (SSDs or NVMe) and slow storage (SAS or SATA).
My first choice could be to install Proxmox on the fast storage and also use it for storing virtual...
Syncing a newly created mdadm RAID 1 (WD Red disks, 1.6T partition size, default sync speed limits, internal bitmap enabled) pushes the CPU load into the 2 to 2.5 range, and the machine gets sluggish (despite the Xeon E-2136 with 6 cores, 12 threads, and 32GB RAM).
Stopping pvestatd lowers the load to ~1. There is...
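If it is the resync itself that hurts, the md sync speed limits can be throttled at runtime; the value below is only an illustration:
# current limits in KB/s per device
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# cap the resync rate so interactive load recovers (illustrative value)
echo 50000 > /proc/sys/dev/raid/speed_limit_max
# watch resync progress
cat /proc/mdstat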
We are running a Proxmox cluster with five nodes. Three of them are used for Ceph, providing 2 pools, one with HDDs and the other with SSDs. The other two nodes are used for virtualization with QEMU.
We have redundant 10 GbE storage networks and redundant 10 GbE Ceph networks...
I am new here so please forgive any forum faux pas and let me know so I don't keep doing it :-)
Also, I am originally from a Windows Hyper-V background so please feel free to correct terminology mistakes.
I am setting up Proxmox on a single physical server with 3 RAID arrays (it has a HW RAID...
For a home server/NAS I'm using the latest versions of Proxmox (5.4) and OMV (4.1.22-1) on recent hardware (Core i3-8100, 16GB of RAM, installed on an SSD...). I have only one 8TB hard drive with no RAID configuration for my data storage.
I use my previous server (Intel Atom J1900, 8GB of...
I noticed a huge issue. When I try to migrate a VM to a different node I get extremely slow transfer rates.
This is unexpected since I use a dedicated Gigabit network for migration (which is unused except for migrations). The insecure flag is set as well.
Have a look at this migration log...
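For context, the dedicated migration network and the insecure (unencrypted) transport are both set in /etc/pve/datacenter.cfg; the subnet and VM ID below are examples, not the poster's actual values:
# /etc/pve/datacenter.cfg
migration: type=insecure,network=10.10.20.0/24
# online-migrate a VM and watch the reported transfer speed
qm migrate 101 node2 --online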
I wonder if anyone has experience with this and can comment.
I've just spent some time reviewing a pair of Lenovo servers which have this HW RAID controller: 2 identical nodes in a small Proxmox cluster, running the latest Proxmox 5.x.
There is no problem with the controller being recognized and...
Currently, with VirtIO-SCSI (VirtIO-SCSI Single with iothreads), the max IOPS is ~1.8k-2.3k, but virtio-blk with data plane may reach over 100k IOPS: https://www.suse.com/media/white-paper/kvm_virtualized_io_performance.pdf
Can I switch to virtio-blk data plane instead of VirtIO-SCSI in Proxmox?
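In Proxmox, virtio-blk is what you get by attaching the disk as virtioN instead of scsiN, and the iothread flag is the closest equivalent to a dedicated data plane. A sketch, with an example VM ID and volume name:
# see how the disk is currently attached
qm config 100 | grep -E '^(scsihw|scsi0|virtio0)'
# reattach the same volume as a virtio-blk device with its own iothread
# (detach it first and do this while the VM is stopped; names are examples)
qm set 100 --virtio0 local-lvm:vm-100-disk-0,iothread=1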
Is there any way to change the read-ahead of CephFS?
(could not place hyperlink - new user)
This should improve reading of single large files.
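Assuming the kernel CephFS client is used, the read-ahead window is a mount option (rasize, in bytes); the monitor address, secret path, and the 64 MiB value below are only examples:
# remount CephFS with a larger read-ahead window
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,rasize=67108864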
Actually the subject says it all...
I have a server with ZFS: 2 SSDs in a mirror where the Proxmox installation, L2ARC, and ZIL reside, and then a bunch of 10k HDDs in a pool where the VMs run.
In a different forum thread I read that a) using linked clones does not affect...
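For what it's worth, linked versus full clone in Proxmox is just a flag on qm clone (VM IDs and names below are examples):
# linked clone (the source must be a template; the base image on the pool stays shared)
qm clone 9000 123 --name test-linked
# full clone, a completely independent copy
qm clone 9000 124 --name test-full --full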
I've been doing some pre-production testing on my home server and ran into some kind of bottleneck with my storage performance, most notably on my Optane drive.
When I install a Windows VM with the latest VirtIO drivers, the performance is rather disappointing.
I've tried switching over from...
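One way to separate a host-side bottleneck from the guest/VirtIO configuration is a quick fio baseline on the Optane device itself (device path and runtime are placeholders; random read is non-destructive):
# 4k random read baseline on the host, bypassing the page cache
fio --name=optane-baseline --filename=/dev/nvme0n1 --direct=1 --ioengine=libaio \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=30 --time_based --group_reporting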
I've been looking for the reason for the slowness of my Proxmox server but I have not been able to detect the problem.
I have an HP DL160 server with 2 Intel Xeon processors, 32GB of DDR4 RAM, and 4x 4TB hard drives in RAIDZ-1 ZFS (10TB of storage in local-zfs).
I have installed 3 VMs: 1 Ubuntu...
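A couple of things that usually narrow this down on a RAIDZ pool are per-vdev I/O statistics and the ARC hit rate; the pool name below assumes the default Proxmox rpool:
# per-vdev throughput and activity, refreshed every 5 seconds, while the VMs are busy
zpool iostat -v rpool 5
# ARC size and hit ratio (if arc_summary is available on this ZFS version)
arc_summary | head -n 40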
I've got a couple of Proxmox servers (standalone, no HA) with SATA drives, and getting a backup is a real pain.
It takes 50 minutes or so to get a 60-70GB backup.
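For reference, the knobs that usually dominate vzdump speed sit in /etc/vzdump.conf; the values below are only an illustration, not a recommendation for this particular setup:
# /etc/vzdump.conf
compress: lzo      # cheapest on CPU; gzip/zstd compress better but are slower
ionice: 7          # keep backup I/O at low priority
bwlimit: 0         # KiB/s, 0 = unlimited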
I'm planning to migrate a few of these servers onto one new, much more powerful server with the following drives/setup:
After fighting with ZFS's memory hunger, poor performance, and random reboots, I have just replaced it with mdraid (RAID 1), ext4, and simple qcow2 images for the VMs, stored on the ext4 file system. This setup should be the least efficient because of the multiple layers of abstraction (md and...
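A small detail that matters for qcow2 on ext4 is metadata preallocation at create time; the path and size below are just examples:
# preallocating qcow2 metadata avoids most of the allocation overhead during later writes
qemu-img create -f qcow2 -o preallocation=metadata /var/lib/vz/images/101/vm-101-disk-0.qcow2 100G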
All my servers running the PVE kernel show poor outgoing connection performance:
ab -n 10000 -c 1000 example.url
Requests per second: 679.38
Requests per second: 754.42
Requests per second: 692.04
Requests per second...
I have a question regarding the performance of VMs.
Currently about six Linux KVMs and two FreeBSD KVMs are running on one node with two HDDs in a RAID array.
Since we intend to get a better-performing node in the near future, among other things with SSDs in...
I need some input on tuning performance on a new cluster I have set up.
The new cluster has 2 pools (one for HDDs and one for SSDs). For now it's only three nodes.
I have separate networks: 1x 1Gb/s NIC for corosync, 2x bonded 1Gb/s NICs for Ceph, and 1x 1Gb/s NIC for the Proxmox bridged VMs...
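Assuming the HDD/SSD split should be enforced by device class (pool and rule names below are examples), the CRUSH rules would look something like:
# one replicated rule per device class
ceph osd crush rule create-replicated rule-ssd default host ssd
ceph osd crush rule create-replicated rule-hdd default host hdd
# pin each pool to its rule
ceph osd pool set ssd-pool crush_rule rule-ssd
ceph osd pool set hdd-pool crush_rule rule-hdd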