Hello Fabian,
thank you very much for your help!
My second question was whether, instead of decompressing the backups, just copying or moving them would be enough for dedup to work. Anyway, never mind that; it's much easier to set up a cron job that decompresses the backups than having to copy or move and...
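For reference, a minimal sketch of the cron job I have in mind, assuming gzip-compressed vzdump files under /backups/dump (path and schedule are just examples):
#!/bin/sh
# /etc/cron.daily/decompress-backups -- hypothetical script
# keep the original .vma.gz (-k) and skip files already decompressed
for f in /backups/dump/*.vma.gz; do
    [ -e "$f" ] || continue
    [ -e "${f%.gz}" ] || gunzip -k "$f"
done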
Hello Fabian,
thanks! That's a lot of really useful information. So, in my scenario, one way to have dedup actually save space is to extract the backup files (and re-compress them if desired) on the ZFS backup server, correct? From what I have read, another way would be to copy or move the backups...
Hello!
Thank you very much for the information. So, if I understood correctly, the "snapshot" mode (online backups) won't ever save space with ZFS dedup because of the order in which it makes its writes, regardless of whether the ZFS volume is local or shared over NFS, and regardless of whether backup compression is on...
Hello!
We have 2 Proxmox hosts and we use the included vzdump tool to back up the VMs and containers that run on them. Our backup location is a ZFS server shared over NFS to both Proxmox hosts. This ZFS server has a RAID-10-like configuration with 4 disks, a 40 GB SSD cache partition, 20 GB of RAM and...
Hello!
perhaps this can be useful: our servers are running the same pve-manager and lxcfs versions as yours, and after the change to /lib/systemd/system/lxcfs.service they still report the correct load.
root@server:~# pveversion
pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-20-pve)...
Hello @wolfgang!
Yes, I read about that; that is why the SSD RAID 5 is not ZFS but good old LVM.
I have been doing some testing and research, and it seems that the way to get the most out of the SSD disks is to create a new zpool for them, because if I add them to the current HDD zpool the speed I get...
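Roughly what I'm testing is a separate mirrored pool on the spare SSDs, something like this (pool and device names are hypothetical):
# create a new SSD-only pool instead of mixing the SSDs into the HDD pool
zpool create -o ashift=12 ssdpool mirror /dev/sdb /dev/sdc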
Hello,
thank you very much! All of the virtual disks are SCSI, so I will enable discard on them and set up a cron job to run fstrim periodically.
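A minimal sketch of the trim job, to run inside each guest (schedule and path are just examples):
#!/bin/sh
# /etc/cron.weekly/fstrim -- hypothetical script inside the VM
# trim all mounted filesystems that support discard
fstrim -av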
As always, you have been really kind!
Best regards,
Juan
Hello!
we have 2 servers with local ZFS pools. The pools are configured with the same type of disks: 3 x 120 GB SSDs and 4 x 4 TB HDDs.
I set up the 3 SSDs as a hardware RAID 5; Proxmox runs on one partition, and the ZFS log and cache sit on two other partitions. Then, with the 4 HDDs, I set up a...
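In case it is useful to someone, attaching the log and cache partitions to the HDD pool looks roughly like this (pool and partition names are examples, not our actual layout):
# add SLOG and L2ARC partitions from the SSD RAID to the HDD pool
zpool add tank log /dev/sdb2
zpool add tank cache /dev/sdb3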
Hello!
We have a couple of servers configured to use local ZFS pools for storage. These pools have Thin Provisioning enabled.
I used to think that thin provisioning + ZFS was enough to have the VMs occupy only as much physical storage as the space actually used inside them, but it seems I was...
Hello Oguz,
thanks! Indeed, the updates get a 401 error. I saw that the no-subscription packages have a warning in the docs:
It can be used for testing and non-production use. It's not recommended to run on production servers, as these packages are not always heavily tested and validated.
Our...
Hello Oguz,
we have the enterprise repositories enabled, but we don't have a subscription yet; we are going to buy one, but I'm waiting for my boss to do it. I commented out the enterprise repository and enabled the no-subscription one, then ran apt clean, apt update and apt upgrade lxcfs. After that I...
Hello Oguz,
thank you! I just noticed that I didn't have the no-subscription repositories enabled. I added them, updated lxcfs, added the -l to /lib/systemd/system/lxcfs.service, and now the load is showing correctly.
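For anyone else hitting this, the changed line in /lib/systemd/system/lxcfs.service ends up looking roughly like this:
ExecStart=/usr/bin/lxcfs -l /var/lib/lxcfs/
and then reload and restart the service:
systemctl daemon-reload
systemctl restart lxcfs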
Best regards,
Juan Correa
Hello!
we have 2 hosts running Proxmox 5.2.1. I checked, and lxc-pve is version 3.0.0-3. I did an apt-get upgrade and rebooted the server, and after that added the -l to /lib/systemd/system/lxcfs.service, but top still shows the host load.
Do we need to upgrade anything else?
Thanks!
Juan Correa
Hello everybody!
I ended up uninstalling the agent and disabling it in the VM options. So far the CPU usage has been stable and much lower than before, and we haven't faced any issues with the backups.
So, I guess so far this has been a valid solution.
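For reference, turning the agent option off from the host CLI is just this (VM ID 100 is an example):
qm set 100 --agent 0
# inside the CloudLinux guest:
yum remove qemu-guest-agent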
Regards,
Juan Correa
Hello,
we have 2 servers running Proxmox 5.2.
On one of those servers we have 2 VMs that run CloudLinux 7.6 with cPanel. I noticed today that the CPU usage was higher than expected, both on the VMs and on the Proxmox host.
After looking into the VMs a bit, it was clear that the usage was...
Hello,
Thanks! I ran the fio test on 3 CentOS VMs at the same time and the results were almost the same. I also monitored the I/O usage on one VM with iotop and iostat while running fio on another VM on the same physical server, and it didn't affect it much.
Your sharing is really...
Hello!
I used fio:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
This page was my reference: binarylane.com.au/support/solutions/articles/1000055889-how-to-benchmark-disk-i-o
From...
Hello,
thanks for your feedback, Guletz!
We now have both servers running proxmox, and we are setting up the finishing details to the configuration.
Originally we had 2 SSDs in RAID 1 for the system and the other SSD standalone for ZFS cache. Testing from a VM with fio gave around 45K IOPS...
Hello!
the company where I work is adding 2 new Dell R740 servers with Proxmox to replace some old servers running Hyper-V. The servers have dual Xeon Silver CPUs, 256 GB RAM, a PERC H740 RAID controller, 3 x 120 GB SSDs and 4 x 4 TB HDDs.
I have set them up in a cluster, with 2 SSDs configured as a physical...