Hello everyone,
I recently opened a thread about "ZFS storage" and realized that there are quite a few tricks to keep in mind with Proxmox so as not to waste storage or resources unnecessarily.
After that I went through my system once and made a few...
Hello everyone,
I have an open ticket with Support on this, but I also wanted to get some feedback from the PMG community.
We have a customer that is considering using Proxmox Mail Gateway for their monthly invoice batching. This is mission-critical email that needs to go out without fail in...
Hello!
I have a small cluster with my VMs living on LVM over iSCSI on an HP MSA 2050 SAN. I built it back in January and it was nice and quick - didn't even bother with multipath or any tuning, because even with 10+ VMs running it was fast enough.
I left it running, doing nothing, until this...
Hello dear community.
Server specs: https://prnt.sc/s3hnb0 (+ a single 2 TB HDD)
Configuration of a default VM: https://prnt.sc/s3hq8c
Black screen: https://i.imgur.com/MJUMOrD.png
I have started having serious problems on my server since I began running quite a few VMs. I have...
Hi,
while working in a VM and installing some stuff, I noticed that disk writes are slower than they used to be when running an OS bare-metal on the same hardware, without Proxmox in between. After taking a deeper look with zpool iostat 2, I saw that the write throughput never...
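For anyone reproducing this, per-vdev throughput can be sampled on the host while generating load in the guest. A minimal sketch; the pool name rpool and the test path are assumptions:

    # sample pool-wide and per-vdev bandwidth/IOPS every 2 seconds
    zpool iostat -v rpool 2
    # generate a sustained direct-I/O write inside the VM for comparison
    # (note: zeros compress away if the guest filesystem/dataset compresses)
    dd if=/dev/zero of=./ddtest bs=1M count=4096 oflag=direct status=progress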
I am trying to decide between using XFS or EXT4 inside KVM VMs. My goal is not to over-optimise at an early stage, but I want to make an informed file system decision and stick with it (a quick comparison run is sketched after the list below).
Situation:
Ceph as backend storage
SSD storage
Writeback cache on VM disk
No LVM inside VM
CloudLinux 7...
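One way to ground that decision is a short fio run against each candidate filesystem inside the guest. A sketch, assuming fio is installed and /mnt/test sits on the filesystem under test:

    # 4k random writes for 60s, direct I/O to bypass the guest page cache
    fio --name=randwrite --directory=/mnt/test --rw=randwrite --bs=4k \
        --size=2G --iodepth=32 --ioengine=libaio --direct=1 \
        --runtime=60 --time_based --group_reporting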
Hi community,
we have a server cluster consisting of 3 nodes with EPYC 7402P 24-core CPUs, 6 Intel Enterprise SSDs (4620), and 256 GB of RAM each. We also have a 10 Gbit NIC for Ceph.
SSD performance alone is fine, jumbo frames are enabled, and iperf also gives reasonable results in terms of...
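For others checking the same path, the jumbo-frame and raw-throughput claims can be verified end to end like this (a sketch; the interface name and peer IP are assumptions):

    # confirm the MTU really is 9000 on the Ceph interface
    ip link show dev enp65s0f0
    # 8972 = 9000 minus 28 bytes of IP/ICMP headers; -M do forbids fragmentation
    ping -M do -s 8972 -c 3 10.10.10.2
    # raw TCP throughput between two nodes
    iperf3 -s              # on the peer
    iperf3 -c 10.10.10.2   # on this node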
Hi,
I have a simple question which I would like to share because I'm interested in your point of view.
On a Proxmox server I have fast storage (SSDs or NVMe) and slow storage (SAS or SATA).
My first choice would be to install Proxmox on the fast storage and use it also for storing virtual...
Syncing a newly created mdadm RAID 1 (WD Red disks, 1.6 TB partition size, default sync speed limits, internal bitmap enabled) pushes the CPU load into the 2 to 2.5 range, and the machine gets sluggish (despite the Xeon E-2136: 6 cores, 12 threads, and 32 GB RAM).
Stopping pvestatd lowers the load to ~1. There is...
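The md resync rate itself can also be capped to leave headroom for the rest of the system. A sketch; the 50 MB/s ceiling is just an example value:

    # current limits, in KB/s
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
    # cap the resync at ~50 MB/s until the next reboot
    sysctl -w dev.raid.speed_limit_max=50000
    # watch resync progress
    cat /proc/mdstat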
Hello @all,
we are running a Proxmox cluster with five nodes. Three of them are used for Ceph, providing 2 pools, one on HDDs, the other on SSDs. The other two nodes are used for virtualization with QEMU.
We have redundant 10 GbE storage networks and redundant 10 GbE Ceph networks...
I am new here so please forgive any forum faux pas and let me know so I don't keep doing it :-)
Also, I am originally from a Windows Hyper-V background so please feel free to correct terminology mistakes.
I am setting Proxmox up on a single physical server with 3 RAID arrays (it has a HW RAID...
Hi
For a home server/NAS I'm using the latest versions of Proxmox (5.4) and OMV (4.1.22-1) on recent hardware (Core i3-8100, 16 GB of RAM, installed on an SSD...). I have only one 8 TB hard drive with no RAID configuration for my data storage.
I use my previous server (Intel Atom J1900, 8 GB of...
Hey,
I noticed a huge issue. When I try to migrate a VM to a different node, I get extremely slow transfer rates.
This is unexpected since I use a dedicated Gigabit network for migration (which is unused except for migrations). The insecure migration flag is set as well.
Have a look at this migration log...
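For reference, the dedicated network plus insecure mode is usually declared cluster-wide in /etc/pve/datacenter.cfg. A sketch; the CIDR is an assumption:

    # /etc/pve/datacenter.cfg
    # route migration traffic over the dedicated NIC and skip SSH encryption
    migration: type=insecure,network=10.0.50.0/24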
Hi,
I wonder if anyone has experience and can comment maybe.
I've just spent some time reviewing a pair of Lenovo servers, which have this HW RAID controller: 2x identical nodes in a small Proxmox cluster, on the latest Proxmox 5.
There is no problem with the controller being recognized and...
Currently with VirtIO-SCSI (VirtIO SCSI single with IO threads) the max IOPS is ~1.8k-2.3k, but virtio-blk data plane may reach over 100k IOPS: https://www.suse.com/media/white-paper/kvm_virtualized_io_performance.pdf
Can I switch to virtio-blk data plane instead of VirtIO-SCSI in Proxmox?
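In Proxmox, virtio-blk is selected per disk by attaching the volume on a virtioN bus. A rough sketch of converting an existing SCSI disk; VMID 100 and the storage/volume names are assumptions:

    # detach the disk from the SCSI bus (it becomes unused0, data is kept)
    qm set 100 --delete scsi0
    # re-attach the same volume as a virtio-blk device with its own IO thread
    qm set 100 --virtio0 local-lvm:vm-100-disk-0,iothread=1
    # make sure the VM still boots from it
    qm set 100 --boot order=virtio0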
Hi ,
Is there any way to change the read-ahead of CephFS?
According to:
docs.ceph.com/docs/master/man/8/mount.ceph/
and:
lists.ceph.com/pipermail/ceph-users-ceph.com/2016-November/014553.html
(could not place hyperlinks - new user)
this should improve reading of single large files.
Right now...
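Per the mount.ceph man page linked above, the kernel client's read-ahead is set with the rasize mount option. A sketch; the monitor address, credentials, and the 64 MiB value are assumptions:

    # rasize is the maximum read-ahead window in bytes (here 64 MiB)
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,rasize=67108864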
Actually the subject says it all...
I have a server with ZFS: 2x SSDs in a mirror holding the Proxmox installation, L2ARC, and ZIL, plus a bunch of 10k HDDs in a pool where the VMs run.
In a different forum thread I read that a) using linked clones does not affect...
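For context, a linked clone is what qm produces by default from a template on a storage that supports it (on ZFS it is a clone of a snapshot), while --full forces an independent copy. A sketch; the VMIDs and names are assumptions:

    qm template 9000                       # convert the base VM to a template
    qm clone 9000 101 --name web01         # linked clone, near-zero extra space
    qm clone 9000 102 --name web02 --full  # full, independent copy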
Hi,
I've been doing some pre-production testing on my home server and ran into some kind of bottleneck with my storage performance, most notably on my Optane drive.
When I install a Windows VM with the latest VirtIO drivers, the performance is rather disappointing.
I've tried switching over from...
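The disk knobs usually worth toggling one at a time for fast NVMe/Optane-backed Windows guests look roughly like this (a sketch; the VMID, storage, and volume names are assumptions):

    # one IO thread per disk via the single SCSI controller
    qm set 100 --scsihw virtio-scsi-single
    # direct async I/O, no host page cache, dedicated IO thread
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1,aio=native,cache=none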