Hi there
On my servers with SSDs, the SSD performance is really bad.
I test the performance with
dd if=/dev/zero of=test.img bs=4k count=262133 conv=fdatasync
(I'm aware that dd is not the best tool to test performance)
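A rough fio equivalent I sometimes run instead (the job parameters are just my attempt to mimic the dd workload, not an official benchmark recipe):
fio --name=seqwrite --filename=test.img --rw=write --bs=4k --size=1G --ioengine=libaio --direct=1 --fdatasync=1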
System 1 (SSD + ZFS & RAID1)
SuperMicro E300-8D
32 GB RAM
1x Samsung SSD 860...
A feature to tag only selected VLANs on the VM's interface would be very useful (at the moment it's either one untagged VLAN or all tagged). Is such a thing planned?
Update: I have replaced all disks (3-year-old WD RE) with brand-new WD Gold drives, and I think the performance has increased. Without any cache I'm reaching 100-150 MB/s in a VM, and the FSYNC rate on the host is now 5 times higher than before the change.
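The FSYNC numbers are from pveperf; I simply run it against the local storage path (the path below is just the default local storage, adjust if yours lives elsewhere):
pveperf /var/lib/vz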
Thanks for your answer.
In the meantime I found out that the VMs reach 200 MB/s if I enable writeback cache.
The LVM on the host is using writeback too, so I think the performance is bad on both the host and the guests without any caching.
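For reference, I enabled writeback on the VM disk roughly like this (VM ID 100, scsi0 and the volume name are just from my setup, yours will differ):
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback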
I'm using the following setup:
LVM -> LUKS -> MDRAID -> HDDs
The problem is that the server only has 4 drive slots, which are all in use, so adding an SSD is not possible. Also, I don't see any way to do full disk encryption with ZFS.
It's also strange that the host has good performance.
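In case the layering matters, this is how I check the stack on the host (the mapper name is from my setup):
lsblk
cryptsetup status cryptroot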
I just filled the disk of one VM up to 100% and ran the dd test again. Now it's around 80-90 MB/s, but in my opinion that's still too slow.
Another problem is that a Windows VM freezes all the time, e.g. when I open the Task Manager.
As soon as it's open, I see 100% disk usage at around 500 KB/s :O
Thanks for your fast reply.
The guest uses ext4.
UUID={UUID} / ext4 errors=remount-ro 0 1
I use fstrim/discard to clean up unused space.
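By that I mean roughly the following (the VM ID and volume name are from my setup):
# on the host, pass discards through to the thin pool
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
# inside the guest
fstrim -v /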
I will try with a non-LVM-thin VM later, but I cannot believe the difference of 150 MB/s.
Hi there
Currently I have some performance problems in VMs running on my Proxmox node.
Specs of the node:
- 1x Xeon E5-2620v4
- 64 GB RAM
- 4x 1TB WD Gold
- Software RAID 10
- Full Disk Encryption
- LVM Thin for VMs
On the host, I get around 200 MB/s:
(zrh1)root@vms1:~# dd if=/dev/zero...
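For completeness, this is what I look at for the RAID and the thin pool status:
cat /proc/mdstat
lvs -a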
I found out the following:
I need to manually run postmap /etc/pmg/domains on the slave (it's not needed on the master).
I think that should be done automatically.
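To double-check, this is how I verified the map on the slave after running postmap (yourdomain.tld is a placeholder for one of my relay domains):
ls -l /etc/pmg/domains.db
postmap -q yourdomain.tld /etc/pmg/domains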
Hi there
I just set up a ProxMox Mail Gateway Cluster.
Sync works, Transport/Domains are set up.
Now my problem is that the master accepts mail but the slave does not: "Relay access denied". I had a quick look at main.cf & /etc/pmg/domains, and they look fine.
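For what it's worth, this is roughly how I compared the two nodes (the log path is the stock Debian one):
postconf relay_domains
grep 'Relay access denied' /var/log/mail.log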
Any ideas?
Best regards
Patrick
Hi there
I'm trying to set up outbound mail relaying with IPv6. I added the network under Configuration->Mail Proxy->Networks (example: 2001:db8::/32).
But relaying over IPv6 is not possible:
Jan 25 22:38:35 mx1 postfix/smtpd[3779]: warning: 2001:db8::/32 is unavailable. unsupported...
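My only idea so far is notation: I believe plain Postfix wants IPv6 networks in mynetworks written in bracket form, so I'd check what actually ends up in the generated config (the bracketed value is just my guess at the expected format):
postconf mynetworks
# I'd expect something like [2001:db8::]/32 rather than 2001:db8::/32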