Proxmox 5 very slow

nicko

Member
Feb 25, 2018
Hi,

We have a plan to use Proxmox in production and get a subscription for it, but after some tests we have realised that everything is very slow for both LXC containers and VMs.

Here is what we have:

CPU(s):
8 x Intel(R) Xeon(R) CPU D-1520 @ 2.20GHz (1 Socket)

Kernel Version:
Linux 4.13.13-5-pve #1 SMP PVE 4.13.13-38

PVE Manager Version:
pve-manager/5.1-43/bdb08029

RAM 32GB

RAID1

Is it possible to speed up Proxmox, or is this normal?

Thanks
 
Hello @nicko and welcome to PVE :)

On our servers PVE 5 is very fast, and we run 50+ servers. So I think your hardware is too weak. Really only RAID1? Depending on what you do, that cannot run very fast. So please give us a bit more information.

What storage type do you use?
What output does "pveperf" generate?
What services are running in your VMs/containers, and how many?
 
Hello,

Thanks for your help here some info:
This is the default storage type from OVH (lsblk output below).

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 300G 0 loop
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 1004.5K 0 part
├─sda2 8:2 0 19.5G 0 part
│ └─md2 9:2 0 19.5G 0 raid1 /
├─sda3 8:3 0 1023M 0 part [SWAP]
└─sda4 8:4 0 1.8T 0 part
└─md4 9:4 0 1.8T 0 raid1
└─pve-data 253:0 0 1.8T 0 lvm /var/lib/vz
sdb 8:16 0 1.8T 0 disk
├─sdb1 8:17 0 1004.5K 0 part
├─sdb2 8:18 0 19.5G 0 part
│ └─md2 9:2 0 19.5G 0 raid1 /
├─sdb3 8:19 0 1023M 0 part [SWAP]
└─sdb4 8:20 0 1.8T 0 part
└─md4 9:4 0 1.8T 0 raid1
└─pve-data 253:0 0 1.8T 0 lvm /var/lib/vz



# pveperf
CPU BOGOMIPS: 35202.80
REGEX/SECOND: 2430060
HD SIZE: 19.10 GB (/dev/md2)
BUFFERED READS: 157.57 MB/sec
AVERAGE SEEK TIME: 7.87 ms
FSYNCS/SECOND: 44.25
DNS EXT: 50.94 ms
DNS INT: 5.58 ms

I'm running only 1 CT

thanks
 
Out of curiosity, mine are:
root@pve:~# pveperf
CPU BOGOMIPS: 56001.28
REGEX/SECOND: 3578676
HD SIZE: 27.19 GB (/dev/mapper/pve-root)
BUFFERED READS: 293.17 MB/sec
AVERAGE SEEK TIME: 0.12 ms
FSYNCS/SECOND: 2447.78
DNS EXT: 14.95 ms
DNS INT: 0.43 ms

CPU is Intel(R) Xeon(R) CPU E3-1270 v3 @ 3.50GHz
 
...

# pveperf
CPU BOGOMIPS: 35202.80
REGEX/SECOND: 2430060
HD SIZE: 19.10 GB (/dev/md2)
BUFFERED READS: 157.57 MB/sec
AVERAGE SEEK TIME: 7.87 ms
FSYNCS/SECOND: 44.25
DNS EXT: 50.94 ms
DNS INT: 5.58 ms

I'm running only 1 CT

thanks

Your storage is slow, therefore everything else is slow as well.

And you run on mdraid, which is not a recommended technology for Proxmox VE.
 
Your storage is slow, therefore everything else is slow as well.

And you run on mdraid, which is not a recommended technology for Proxmox VE.

I tried other systems on this server and everything is much, much faster.

The problem is clearly not coming from the server; it's either a configuration problem or something in Proxmox.

Any solutions?
 
I tried other systems on this server and everything is much, much faster.

The problem is clearly not coming from the server; it's either a configuration problem or something in Proxmox.

Any solutions?

I suggest you use a ZFS RAID1 with the existing disks and add a fast SSD as a ZIL cache device.
 
Hi,

Thanks for your reply.

If it's not storage based, yes, sure. Yet only two devices do not yield enough IOPS. Your write IOPS will only be as fast as the slower of the two drives.

It was a storage-based system.

I suggest you use a ZFS RAID1 with the existing disks and add a fast SSD as a ZIL cache device.

Adding an SSD is not a solution for us.
So this is a problem with Proxmox then?

The reason we wanted to use Proxmox is its simplicity, and also because of LXC, which is normally supposed to be very fast and consume fewer resources.
 
I realized that the disk cache is turned off; could that be the problem? If so, how do I enable it?

I also noticed that VMs are slower than CTs in Proxmox.
 
I realized that the disk cache is turned off; could that be the problem?

If you have no battery backup or any other power-loss protection on the disks, you should not use the disk cache, as this can and will lead to data loss.

If you ignore this (which is a bad decision), your disks will be faster.
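
For what it's worth, a minimal sketch of how those cache settings can be inspected and changed; the device name, VM ID and volume name below are only examples, not taken from this thread:

# hdparm -W /dev/sda
(shows whether the drive's own write cache is currently enabled)
# hdparm -W1 /dev/sda
(enables the drive's write cache - only sensible with power-loss protection, as noted above)
# qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback
(switches an existing VM disk to writeback caching; the same option is available in the GUI when editing the disk)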
 
Is your RAID by any chance rebuilding at the moment? 44 fsyncs are only about 1/3 of the theoretical maximum of a two-disk RAID1 system.
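
A quick way to check; the md device names match the lsblk output posted earlier:

# cat /proc/mdstat
(shows resync/rebuild progress for md2 and md4, if any is running)
# mdadm --detail /dev/md4
(state and sync status of the array holding /var/lib/vz)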
 
Hi,

I'm going to change the hard drive and redo the installation to see if I can gain speed.

What is the best configuration for Proxmox?

RAID0 or RAID1?
EXT4, EXT3, or XFS?

Should I keep some space for the ZFS partition?

Thanks
 
Hi,

I'm going to change the hard drive and redo the installation to see if I can gain speed.

What is the best configuration for Proxmox?

RAID0 or RAID1?
EXT4, EXT3, or XFS?
Thanks

You need to choose the hardware and the config that is suitable for your workload. RAID0 does not provide any redundancy.
 
You need to choose the hardware and the config that is suitable for your workload. RAID0 does not provide any redundancy.

I know RAID0 does not provide any redundancy; that's OK, I have an external network backup. But is RAID0 faster in Proxmox?

Thanks
 
Nicko,

I'm new here but I'm going to try and help out.

You're seeing really slow disk performance with the mirror you've set up. If you change that to a stripe then you'll get maybe double the performance, but the new number will still suck, plus you'll be something like 4x more likely to lose data. Now, as indicated above you should be getting higher performance from your storage than you are, but you can still do better with ZFS. So a couple of suggestions:

  • It's possible the SATA ports on your motherboard aren't giving the performance you want. You can buy a SATA card inexpensively and see if that makes performance better.
  • You can configure the drives as a mirror in ZFS instead. This isn't necessarily quick, but ZFS does a pretty solid job of caching frequently used data in RAM, which greatly increases read speeds - if you need more, add more RAM for the ARC cache.
  • If you're using ZFS it's possible you'll see slow write speeds, but if you add a fast SSD (don't go cheap here - a consumer grade SSD won't necessarily make you happy) and tell ZFS to initially write all data to that device then you'll get much better performance. Google ZIL and SLOG for more information.
  • Since your ZFS write cache (separate log device, whatever) only needs to be big enough to handle 5 seconds' worth of full-speed writes, you can partition your SSD and use the other half as an L2ARC. Basically, the ARC is a cache of frequently accessed data that's stored in RAM, so it's almost instantaneous to access. You can also declare an SSD (or a partition) as a level 2 ARC, so ZFS will store less-frequently-accessed data there, then devote some memory to tracking what's stored where. Access from an SSD will be faster than access from your mirror, so that should help too. I emphasize "should" because this stuff isn't necessarily additive or intuitive, so you won't know how well it does until you try it. (A rough command-line sketch follows this list.)
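
For reference, a rough sketch of those suggestions on the command line; the pool name and device paths are placeholders, not the actual disks from this thread:

# zpool create tank mirror /dev/sdc /dev/sdd
(two-disk ZFS mirror)
# zpool add tank log /dev/sde1
(small SSD partition as SLOG / separate intent log)
# zpool add tank cache /dev/sde2
(the rest of the SSD as L2ARC)
# zpool status tank
(verify that the mirror, log and cache devices are all online)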
 
This is not a good production server. I have not seen another virtualization platform that does not suggest using RAID10 or SSDs for any decent workload; these days even users wanting performance on their desktop run an SSD. It sounds like you are using 2TB *consumer* spinning drives with a low-budget Xeon and expecting server-grade performance out of them. Just because the drive is *Red* does not mean it is enterprise capable.

If you are just testing, don't care about uptime (which your comments suggest), and are married to this hardware (I see a divorce in the future), I would just run single disks with the default LVM config from the installer. You can use the 2nd disk as a backup location, or put half your VMs on each disk to spread the load and use another backup location. If you still want more speed on that hardware, you can enable write-back cache on the virtual drives for your VMs, but this again carries the potential for downtime/data loss.

Using ZFS with 2TB of space would want more RAM and an SSD ZIL; even then you will occasionally see low performance with just RAID1, as not every transaction is going to use the SSD. Many Xeon D systems max out at 32GB of RAM - not a good candidate for long-term growth.

If you plan to go into production you would be better suited with one or more of the following:
RAID10 of 6-8 spinning drives
RAID1 of SSDs
Or, for ZFS, RAID10 of spinning drives, more RAM, and possibly an SSD cache (i.e. an Intel Optane 900P)

I am not sure what country you are in, but eBay offers some really well-spec'd machines for probably less than what you built the Xeon D system for. You can get a Dell R720 with double the specs for $600-800; even with import fees it is probably less:
https://www.ebay.com/sch/i.html?_fr...11211&rt=nc&_mPrRngCbx=1&_udlo=400&_udhi=1000

If you want to run standard LVM right out of the box and not mess with tuning, a *proper* RAID card like an H710 (not an H310) will make a 4-disk RAID10 run a little better; many resellers will add these for a small cost. If you are running Windows VMs, this will work fine up to a moderate load; beyond that it is really suggested to use SSDs in some form.

If you want to run ZFS, you would not want that card; you would want a standard-height H310 PCI card ($20). These servers come with the "H310 Mini Mono", which is not good for ZFS - it cannot be re-flashed as far as I know; only the full-size H310 can run the IT-mode firmware needed for ZFS. Using the H310 will need a 24-inch and a 32-36-inch SAS cable. This would provide you with a lot more functionality and decent speed.

If you really want useful help, you need to be more specific. You mention LXC + VMs - are they Windows VMs, databases, Exchange, file sharing? How many users connect? What kind of IOPS does your application scenario need - 200 IOPS (you probably have less now), or 2000 IOPS? Research your needs (what does the software vendor or other users suggest) and compare to existing benchmark data online, e.g.:
http://fibrevillage.com/storage/429-performance-comparison-of-mdadm-raid0-and-lvm-striped-mapping

https://pve.proxmox.com/wiki/System_Requirements
https://pve.proxmox.com/wiki/Performance_Tweaks
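
If you want to measure what the current storage actually delivers rather than guess, a simple fio random-write run is one common approach; the parameters here are just a generic 4k-IOPS starting point and the test file path is an example:

# fio --name=iops-test --filename=/var/lib/vz/fio-test --size=1G --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based --group_reporting
(reports random-write IOPS; delete the test file afterwards)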
 
And a follow-up here: I received my low-end server (Dell T30) last night, and I've finally got it configured and am testing it.

I think you need to reinstall and choose a better disk layout. I installed and told Proxmox to configure my 2 drives as a ZFS mirror, did some testing, and added a Samsung SM863 as a separate intent log, and I'm seeing:

CPU BOGOMIPS: 26496.00
REGEX/SECOND: 3705183
HD SIZE: 3556.15 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 8998.89
DNS EXT: 25.94 ms
DNS INT: 25.54 ms

(network latency is higher than it should be because while setting up I'm running ethernet over the power lines.)

Now, this is a low-end system ($400 on Amazon) and I'm outperforming you by way more than I should be. The only advantages my hardware has over yours are twice the RAM (64GB) and the fact that I installed onto ZFS and added a SLOG.
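
In case anyone wants to reproduce this: on a Proxmox ZFS install the pool is named rpool, and the separate intent log was added roughly like this (the by-id path is an example, use your own SSD's ID):

# zpool add rpool log /dev/disk/by-id/ata-SAMSUNG_SM863_xxxxxxxx-part1
(attach the SSD partition as SLOG)
# zpool status rpool
(confirm the log device shows up under "logs")
# pveperf
(re-run the benchmark to see the fsync difference)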
 
