Hello dear Proxmox,
for our university's institute (law...) I had to build a new server infrastructure for VMs. Since we have been using Proxmox successfully for a couple of years now, I of course built the new infrastructure with Proxmox again:
2 Proxmox VE nodes with local zfs (uses ssd for...
I have an internet connection speed problem in my VMs. The Proxmox host itself has an excellent connection. I installed CentOS 7 on the VM. Here are the speed test results of the Proxmox host and a VM:
Retrieving speedtest.net configuration...
Hey everyone, I am new to the whole home server topic, but I am somewhat familiar with the basics, I guess.
So I am running the following setup:
Latest Proxmox and OMV version on both systems.
OMV Backup NAS Server:
Raspberry Pi 4B + external RAID0 enclosure (2x 2 TB WD Red, no SMR drives, = 4...
BBR seems to be the reason why upload speeds are limited to 4.17 Mbps when using speedtest-cli on kernels >= 4.9.
Until this is resolved, use iperf to test upload speeds.
PS: This is not exactly a tutorial, but it is surely not a question.
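For cross-checking upload throughput with iperf3 instead of speedtest-cli, the invocation is roughly the following (the server IP below is a placeholder, not from the post):

```shell
# On a remote machine with a known-good connection, start a listener:
iperf3 -s

# On the Proxmox host or inside the VM, push data to it for 30 seconds;
# by default this measures upload (client -> server) throughput:
iperf3 -c 203.0.113.10 -t 30

# Reverse mode (-R) pulls data instead, measuring download throughput:
iperf3 -c 203.0.113.10 -t 30 -R
```

Comparing the iperf3 number against the speedtest-cli number is what exposes the BBR-related cap described above.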
After fighting with ZFS's memory hunger, poor performance, and random reboots, I have just replaced it with mdraid (RAID1), ext4, and simple qcow2 images for the VMs, stored on the ext4 file system. This setup should in theory be the least efficient because of the multiple layers of abstraction (md and...
Consider a 3-node setup with each node having 2x 10Gb and 6x 1Gb NICs.
Is it more worthwhile to speed up replication between the nodes using the 10Gb interconnects and have the VMs communicate on 1Gb (scenario 1), or is it more worthwhile to speed up VM <—> desktop access (scenario 2)?
I'm facing a strange problem. I'm using the latest Proxmox with a Ceph storage backend (SSD only), a 10Gbit network, KVM virtualization, and CentOS in the guest.
When I create a fresh VM with 10 GB of attached Ceph storage (cache disabled, virtio drivers), I'm getting roughly these speeds in fio...
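For reference, a typical fio run for this kind of guest benchmark looks roughly like the following (the device path and parameters are assumptions for illustration, not the poster's exact settings):

```shell
# 4k random-write test against the attached Ceph-backed disk (assumed /dev/sdb);
# --direct=1 bypasses the guest page cache so RBD speed is what gets measured
fio --name=randwrite --filename=/dev/sdb --rw=randwrite \
    --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
```

Keeping the block size, queue depth, and direct flag identical between host and guest runs is what makes the resulting numbers comparable.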
EDIT: As I found out that high IO only results in freezing when performed on the host, not in guests, please skip to post #19, as there are new findings.
I'm basically reposting an issue that I have with backups to FreeNAS from this post...
I'm currently sitting in front of "my" first production Proxmox server, and my impression is that ZFS just isn't getting up to speed.
Just tested with CrystalDiskMark: it's about 30 MB/s at 4K QD32 read.
A few key specs of the server and the installation:
CPU: Xeon E5-1620
RAM: 64GB, ZFS Max...
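If the "ZFS Max" spec refers to a capped ARC size, that cap is usually set via a module option; a minimal sketch, assuming a hypothetical 8 GiB limit on this 64 GB box:

```shell
# Cap the ZFS ARC at 8 GiB (value is in bytes: 8 * 1024^3)
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf

# Rebuild the initramfs so the option takes effect at the next boot
update-initramfs -u
```

Too small an ARC is itself a common cause of poor 4K read numbers, since reads stop being served from cache.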
I don't know why, but the CPU speed seems to be very low.
On a fresh install of PVE 5 with 2x Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz I have:
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
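One common reason a fresh install reports low clock speeds is the CPU frequency governor; a quick check looks like this (the sysfs paths are standard, the output varies per system):

```shell
# Show the current frequency governor on every core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Show the current clock of each core as the kernel sees it
grep "cpu MHz" /proc/cpuinfo

# Temporarily switch all cores to the performance governor
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```

If the governor is `powersave`, the MHz values in `lscpu` and `/proc/cpuinfo` can sit well below the nominal 2.60 GHz while idle.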
I have a Proxmox host and a storage server.
Export NFS options
I connected the storage to PVE via NFS (1 Gigabit) and cannot get a write speed of more than 40 MB/s within the virtual machine. But I get the full write speed of...
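For comparison, an export line and mount options often used for VM storage over 1 GbE look roughly like this (paths, subnet, and server IP are placeholders; `async` trades crash safety for speed):

```shell
# /etc/exports on the storage server: with a sync export, every write is
# acknowledged only after hitting disk, which commonly caps NFS writes
# around the 40 MB/s mark; async acknowledges earlier
/export/vmstore 192.168.1.0/24(rw,async,no_subtree_check)

# Apply the export change without restarting the NFS server
exportfs -ra

# On the PVE host, larger rsize/wsize buffers help throughput on gigabit links
mount -t nfs -o rsize=1048576,wsize=1048576,hard \
    192.168.1.20:/export/vmstore /mnt/vmstore
```

Whether `async` is acceptable depends on how tolerable data loss on a storage-server crash would be.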
My servers have an internal rack speed of 1 Gbps but an external speed of 250 Mbps. Each time the backups start, they saturate the network card and most incoming connections to the server drop (websites unavailable, etc.).
Is there a possibility, other than setting up a separate backup...
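Besides a dedicated backup network, vzdump has its own bandwidth limit that can keep backups from saturating the NIC; a sketch, with an example limit (the VM ID and value are illustrative):

```shell
# Cap a one-off backup of VM 100 at roughly 500 Mbit/s
# (vzdump's bwlimit is given in KiB/s: 62500 KiB/s ≈ 512 Mbit/s)
vzdump 100 --bwlimit 62500

# Or set it globally for all scheduled backups in /etc/vzdump.conf:
#   bwlimit: 62500
```

Leaving roughly half the link free this way lets incoming web traffic keep flowing while the backup runs.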