I still have an old board with an Intel J4005 lying around and threw Proxmox on it as a test system.
Proxmox itself runs on an NVMe SSD; in addition, I added another SATA SSD and an HDD, each as a ZFS single disk.
During a test via Samba I noticed that...
Hi, I plan to build my first Ceph cluster and have some newbie questions. In the beginning I will start with 5 nodes, and plan to grow to 50 nodes.
Those nodes are quite old (E3 CPU, 16 GB RAM, 2x 1 Gbps network), so I intend to gain performance by adding more nodes rather than upgrading RAM or CPU.
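One way to sanity-check that scaling plan is a back-of-envelope calculation. The sketch below is illustrative only: it assumes each node contributes roughly 1 Gbit/s of usable client bandwidth and that writes are replicated 3x (both assumptions of mine, not stated in the post), and it ignores CPU, latency, and recovery traffic.

```python
def aggregate_write_mbps(nodes, link_gbps=1.0, replication=3):
    """Rough ceiling on cluster-wide client write throughput in MB/s.

    Assumes the per-node link is the bottleneck and every client write
    generates `replication` copies of the data on the cluster network.
    """
    total_gbps = nodes * link_gbps
    return total_gbps / replication * 1000 / 8  # Gbit/s -> MB/s

print(aggregate_write_mbps(5))   # 5-node starting point
print(aggregate_write_mbps(50))  # planned 50-node cluster
```

Under these assumptions, aggregate throughput does scale linearly with node count, which supports the "add nodes, not RAM/CPU" idea for bandwidth — though per-VM performance is still capped by a single 1 Gbps link.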
I just started testing the PBS backup client for some advanced backup scenarios. One question, of course, is how to get the maximum performance out of the server that creates the backups.
In multiple larger infrastructures there are so-called 'backup workers' (VMs) which have plenty of CPU and RAM as...
Hello there lovely people.
So, as the title says, memory performance is really bad. I have been trying to debug this for 3 or 4 weeks now and I'm all out of ideas. In a Linux VM I get around 24 GB/s with 1M block size, which is around the maximum my board/system can handle. I used the Phoronix Test Suite as a...
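For a quick host-vs-guest comparison alongside a full Phoronix Test Suite run, a crude single-threaded copy probe can be enough to spot a large regression. This is a rough sketch of mine, not a proper benchmark: it measures one `bytes` copy, so it underestimates what multi-threaded tools report.

```python
import time

def copy_bandwidth_gbps(size_mb=256, rounds=5):
    """Best-of-N bandwidth of a simple in-memory copy, in GB/s."""
    src = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(rounds):
        t0 = time.perf_counter()
        dst = bytes(src)  # one full copy of the buffer
        best = min(best, time.perf_counter() - t0)
    # Factor 2: a copy moves the data twice (read source + write destination).
    return 2 * size_mb / 1024 / best

print(f"{copy_bandwidth_gbps():.1f} GB/s")
```

Running the same script on the host and inside the VM gives a like-for-like ratio, which is often more telling than the absolute numbers.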
I am new to Proxmox/Ceph and looking into some performance issues.
5 OSD nodes and 3 Monitor nodes
Cluster vlan - 10.111.40.0/24
CPU - AMD EPYC 2144G (64 cores)
Memory - 256 GB
Storage - Dell 3.2 TB NVMe x 10
Network - 40 Gb/s for Ceph cluster
Network - 1 Gb/s for Proxmox mgmt
I recently opened a thread about "ZFS Speicher" (ZFS storage) and found that there are quite a few tricks to keep in mind with Proxmox in order not to waste storage or resources unnecessarily.
Afterwards I went through my system once more and made a few...
I have an open ticket with Support on this, but I also wanted to get some feedback from the PMG community.
We have a customer that is considering using Proxmox Mail Gateway for their monthly invoice batching. This is mission-critical email that needs to go out without fail in...
I have a small cluster with my VMs living on LVM over iSCSI on an HP MSA 2050 SAN. I built it back in January and it was nice and quick - I didn't even bother with multipath or any tuning because, even with 10+ VMs running, it was fast enough.
I left it running, doing nothing, until this...
Hello dear community.
Server features: https://prnt.sc/s3hnb0 (+ a single 2TB HDD hard drive)
Configuration of a default VM: https://prnt.sc/s3hq8c
Black screen: https://i.imgur.com/MJUMOrD.png
I have started having serious problems on my server since I started running quite a few VMs. I have...
While working in a VM and installing some stuff, I noticed that disk writes are slower than they used to be when running an OS bare-metal on the same hardware, without Proxmox in between. After taking a deeper look with zpool iostat 2, I saw that the write throughput never...
I am trying to decide between using XFS or EXT4 inside KVM VMs. My goal is not to over-optimise in an early stage, but I want to make an informed file system decision and stick with that.
Ceph as backend storage
Writeback cache on VM disk
No LVM inside VM
we have a server cluster consisting of 3 nodes, each with an EPYC 7402P 24-core CPU, 6 Intel enterprise SSDs (4620), and 256 GB RAM. We also have a 10 Gbit/s NIC for Ceph.
SSD performance alone is fine, jumbo frames are enabled, and iperf also gives reasonable results in terms of...
I have a simple question which I would like to share, because I'm interested in your point of view.
On a Proxmox server I have fast storage (SSDs or NVMe) and slow storage (SAS or SATA).
My first choice would be to install Proxmox on the fast storage and use it also for storing virtual...
Syncing a newly created mdadm RAID 1 (WD Red disks, 1.6 TB partition size, default sync speed limits, internal bitmap enabled) drives the CPU load into the 2 to 2.5 range, and the machine gets sluggish (despite the Xeon E-2136: 6 cores, 12 threads, and 32 GB RAM).
Stopping pvestatd lowers the load to ~1. There is...
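For reference, the "default sync speed limits" mentioned above are the kernel's md resync throttles, exposed as sysctls. Lowering the maximum trades a longer resync for a more responsive machine. A sketch, with illustrative values of mine (not recommendations):

```
# Values are in KB/s per device; defaults are typically 1000 / 200000.
# Apply with `sysctl -w`, or persist in a file under /etc/sysctl.d/.
dev.raid.speed_limit_min = 1000
dev.raid.speed_limit_max = 50000
```

This caps resync bandwidth rather than CPU directly, but on spinning disks the two tend to track each other.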
we are running a Proxmox cluster with five nodes. Three of them are used for ceph, providing 2 pools, one with hdd, the other one with ssd. The two other nodes are used for virtualization with qemu.
We have redundant 10 GbE storage networks and redundant 10 GbE Ceph networks...
I am new here so please forgive any forum faux pas and let me know so I don't keep doing it :-)
Also, I am originally from a Windows Hyper-V background so please feel free to correct terminology mistakes.
I am setting Proxmox up on a single physical server with 3 raid arrays (it has a HW RAID...
For a home server/NAS I'm using the latest versions of Proxmox (5.4) and OMV (4.1.22-1) on recent hardware (Core i3-8100, 16 GB of RAM, installed on an SSD...). I have only one 8 TB hard drive, with no RAID configuration, for my data storage.
I use my previous server (Intel Atom J1900, 8 GB of...
I noticed a huge issue: when I try to migrate a VM to a different node, I get extremely slow transfer rates.
This is unexpected, since I use a dedicated Gigabit network for migration (which is unused except for migrations). The unsecure flag is set as well.
Have a look at this migration log...
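For context, the insecure-migration flag and the dedicated migration network are normally declared cluster-wide in /etc/pve/datacenter.cfg. A sketch with a hypothetical subnet; it is worth double-checking that the network line really points at the dedicated Gigabit subnet, since a wrong CIDR silently routes migrations over the management link:

```
# /etc/pve/datacenter.cfg (10.10.10.0/24 is a placeholder, not from the post)
migration: type=insecure,network=10.10.10.0/24
```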