Following Zima's comment about:
echo "251:7 10000" > /sys/fs/cgroup/blkio/lxc/60200/blkio.throttle.read_bps_device
Will you support LXC too? Maybe as an option under the Advanced settings of the Disk resource?
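A workaround sketch in the meantime, assuming cgroup v1 and reusing the device numbers and container ID from the quote (both are placeholders for your own values):

# /etc/pve/lxc/60200.conf — raw LXC key, assumed syntax, cgroup v1 only
# 251:7 is the backing device's major:minor, 10000 is bytes per second
lxc.cgroup.blkio.throttle.read_bps_device: 251:7 10000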
I tried to restore vzdump-openvz-941250-2020_03_03-07_57_24.tar on the same server (Proxmox 3), and it was successful.
I will try again on other servers with Proxmox 4 and 5.
file /vz/dump/vzdump-openvz-941250-2020_03_02-00_37_52.tar.gz
/vz/dump/vzdump-openvz-941250-2020_03_02-00_37_52.tar.gz: gzip compressed data, from Unix, last modified: Mon Mar 2 00:37:52 2020
Now I am trying a backup without compression:
-rw-r--r-- 1 root root 165G Mar 3 14:17...
When I restore the OpenVZ container (Proxmox 3) to LXC (Proxmox 5), I get an error:
Even though the original OpenVZ size is 100GB, the backup .tar.gz file is 149GB, so I added a VPS size (200GB) option when restoring, as shown above.
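A sketch of such a restore with the size option, assuming pct on Proxmox 4/5 (the storage name "local" is a placeholder):

# restore the dump into CT 941250 with a 200GB rootfs on storage "local"
pct restore 941250 vzdump-openvz-941250-2020_03_03-07_57_24.tar --rootfs local:200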
Why must I check mail? That's why I suggest Proxmox add a new field called "Load Avg" to the VPS list.
I hope you understand my concern is a feature suggestion for Proxmox, not about how to use monit.
When the Proxmox load average shows 150, I cannot determine which VPS caused the high load, since every VPS shows CPU < 90%.
Is there a way to show the load of each VPS so I can see where the 150 comes from?
NB: currently, I must enter each VPS and run the top command one by one (see the loop sketch below).
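A host-side workaround sketch, assuming pct is available and each container exposes /proc/loadavg (per-container values need lxcfs load virtualization, see below):

# print the load average of every running container in one pass
for id in $(pct list | awk 'NR>1 && $2=="running" {print $1}'); do
    printf "CT %s: " "$id"
    pct exec "$id" -- cat /proc/loadavg
done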
Great, Juan... I just rechecked: the top command reports the right load, but WHM/cPanel (Home » System Health » Process Manager) still reports the node's load instead :rolleyes:
# pveversion
pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-18-pve)
#
# apt-get upgrade lxcfs
Reading package lists... Done
Building dependency tree
Reading state information... Done
lxcfs is already the newest version (3.0.3-pve1).
I have edited "ExecStart=/usr/bin/lxcfs -l... (see the drop-in sketch below).
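A sketch of that change as a systemd drop-in, assuming the stock unit's paths (the -l flag turns on lxcfs loadavg virtualization):

# systemctl edit lxcfs    — then add:
[Service]
ExecStart=
ExecStart=/usr/bin/lxcfs -l /var/lib/lxcfs
# and apply it:
# systemctl daemon-reload && systemctl restart lxcfs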
~# smartctl -a /dev/nvme0n1
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.15.18-12-pve] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Number: V-GEN03SM19EG1TP3X4IT
Serial Number...
If the Proxmox host using NVMe hangs (all icons grayed out) and I do a hard restart, the VPS (in my case LXC) comes back rolled back several days.
So any websites in the VPS also go back several days.
How can I avoid this? (See the sketch after the NB below.)
NB:
Proxmox 5.4.3
VPS created on a secondary NVMe disk with an ext4 file system
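One mitigation sketch, assuming the rollback comes from write caching that is never flushed (days of loss could also point at something else, e.g. the container running from an old copy or snapshot): shorten how long dirty pages may sit in RAM, so a hard reset loses minutes instead of days. Values are illustrative:

# show the current writeback window
sysctl vm.dirty_expire_centisecs vm.dirty_writeback_centisecs
# flush dirty pages after ~15s, wake the flusher every ~5s
sysctl -w vm.dirty_expire_centisecs=1500
sysctl -w vm.dirty_writeback_centisecs=500
# also confirm the ext4 volume is not mounted with barriers disabled
mount | grep nvme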
I have tried Balloon with 5000/10000 RAM, then stopped/started the VM, but Proxmox still displays an overload: RAM starts at 0.5% and then climbs to nearly 92%.
I have also tried balloon min/max of 10000/10000, but no luck.
KVM should report the RAM used inside the VM; this behavior works fine on WinXP, Win 2003, Win 7, and Win 2008.
The abnormal RAM report only happens on Win 2012 and Win 2016 (maybe Win 10 also), even though I have installed the Balloon service and the QEMU guest agent.
Even though I have installed the latest versions of the virtio-win ISO (virtio-win-0.1.141.iso, then virtio-win-0.1.171.iso), Proxmox still reports 90% RAM usage, even when Windows itself uses only 1GB (a way to query the balloon driver is sketched after my settings below).
Here are my settings:
Proxmox 5.3.8 and guest Win 2016
RAM Balloon=0
HDD and NIC using VirtIO
Balloon Service...
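To see what the balloon driver actually reports to the host, a sketch via the QEMU monitor (VMID 100 is a placeholder):

# open the monitor for the VM, then query the balloon
qm monitor 100
info balloon
# "actual=" is the guest driver's figure; if it never changes, the
# in-guest Balloon service is probably not running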
No error on mount.
Since this VPS is used for hosting, my last resort is to back up all files and move to a new VPS, but I must prepare for a long time because the data size is nearly 80GB.
Is there a way to make Proxmox Backup ignore invalid files?
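A sketch using vzdump's exclude option, assuming the invalid files live under a known directory (VMID and path are placeholders):

# skip the broken paths during the container backup; --exclude-path takes
# shell globs and can be given multiple times
vzdump 941250 --exclude-path '/var/www/broken/*' --compress gzip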