This happens when I do a resize on LVM partitions, or when anything else runs against LVM.
If I don't, then it's fine, but as soon as I try to make a change to an LVM partition it dies. Tried it on a few servers, same issue on pve-kernel-5.4.44-2-pve.
I see these stuck:
root 5938 0.0 0.0 15848 8724 ? D 08:40 0:00 /sbin/vgs...
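In case it helps, a quick way I use to see everything stuck like that (the D in the STAT column means uninterruptible sleep) is something along these lines; the exact ps columns are just my preference:

# list processes blocked in uninterruptible sleep (state D) with the kernel function they are waiting in
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'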
OK, we found that our Nagios monitoring script that checks LVM usage was the cause: it runs but never gets a result back and hence just stalls. Disabling it made the problem go away.
Strange that this only happened on the latest kernel, as the script has been running on our servers for years.
But at least I found a fix and will monitor...
Hi
I updated Proxmox a few days ago and ever since have been getting weird errors and a high load of 300+.
I noticed this today:
root 26559 0.0 0.0 15832 8720 ? D 08:48 0:00 lvs --noheadings --separator : -o lv_attr,lv_name,data_percent,metadata_percent
root 26635 0.0 0.0...
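One thing worth checking (my own habit, nothing official): processes stuck in uninterruptible sleep each add 1 to the load average, so a load of 300+ usually means hundreds of them have piled up:

# current load average
uptime
# number of processes stuck in uninterruptible sleep (state D)
ps -eo stat | grep -c '^D'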
Same problem here. Just ran an update on all nodes last night, and the first servers to go down seem to be the Windows KVM ones. First they lost their NFS connections, not sure why. Then the VMs which use LVM started freezing. I stopped the VMs but then could not start them, as I got:
TASK ERROR: timeout...
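For anyone hitting the same timeout: if the VM is simply left locked by the failed task (which may or may not be the case here), what has worked for me is unlocking and force-stopping it before starting it again:

qm unlock <VMID>
qm stop <VMID>
qm start <VMID>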
Seems fine - I have used Crucial SSDs before, so they can't be the cause.
Could it be your network throughput? I assume the dedicated network is 1Gbps or even 10Gbps?
I checked our config and we have this in the /etc/pve/datacenter.cfg file:
cat /etc/pve/datacenter.cfg
keyboard: en-us
# use...
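If the change being discussed is the backup/restore bandwidth cap, it goes into this same file as a bwlimit line with values in KiB/s; the figures below are only an example working out to roughly 25 MB/s, not our actual config:

bwlimit: migration=25600,restore=25600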
Strange.
Are you using SATA disks in the RAID 1, and what is the model and make?
Secondly, in iotop, after making the change suggested above, do you see it limiting bandwidth to 25 MB/s?
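A simple way to watch that (just how I would check it) is to run iotop showing only the processes that are actually doing I/O:

iotop -o -P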
I have been noticing errors in containers' /var/log/messages that should not be appearing there.
For example, in container 230 this error shows up in /var/log/messages:
Apr 5 14:17:10 wit kernel: [2434869.384450] audit: type=1400 audit(1586089030.548:16540)...
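If you want to see the same denials from the host side (my usual check, nothing container-specific), the node's kernel log carries the full audit lines:

dmesg -T | grep -i apparmor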
That indicates you did not install the repo correctly, as per:
Run the below to edit the sources file:
nano /etc/apt/sources.list
Then add this line at the bottom of the file:
deb http://hwraid.le-vert.net/debian buster main
Exit and save the file, then run the following in the console or over SSH:
wget -O -...
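The full command got cut off above; from memory the pattern is roughly the following, but please double-check the key URL and package name against the hwraid.le-vert.net instructions:

# import the repository signing key (URL from memory, verify it first)
wget -O - https://hwraid.le-vert.net/debian/hwraid.le-vert.net.gpg.key | apt-key add -
# refresh the package lists and install the tool for your controller (megacli is just an example)
apt-get update
apt-get install megacli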
We have successfully managed to create cPanel templates with everything required for an easy production setup, and it works very well.
Unfortunately, however, we have to keep editing the /etc/pve/lxc/CT.conf files constantly to add:
lxc.apparmor.profile: unconfined
is there a...
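Until there is a cleaner way, a small loop like this (just a sketch; adjust it to your own container list before running it) appends the line to every container config that does not already have it:

# add the unconfined AppArmor profile to every CT config that lacks it
for ct in $(pct list | awk 'NR>1 {print $1}'); do
  conf=/etc/pve/lxc/$ct.conf
  grep -q '^lxc.apparmor.profile' "$conf" || echo 'lxc.apparmor.profile: unconfined' >> "$conf"
done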
https://www.greennet.org.uk/support/exim-and-greylisting
deliver_queue_load_max = 20
should help
But I do think that even though cPanel uses sysinfo, it is only for internal display in WHM. I don't think it is used for Exim and the other services, as they use /proc/loadavg - correct me if I am wrong...
We had the same issue a few years back, and I remember we fixed it in the Advanced Exim Configuration Editor by changing Exim's load trigger to a high value in the 100s. Can't remember exactly where; I will post it if I remember, but maybe that will help you a bit.
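If it helps, the settings were along these lines (the numbers are from memory; the directives themselves are standard Exim options and can go into the Advanced Exim Configuration Editor in WHM):

# raise the load levels at which Exim stops immediate delivery and queue runs
queue_only_load = 100
deliver_queue_load_max = 100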
I get that, but having downtime is not good for a web hosting provider like us, and this caused the load to go from low to an extremely high value, after which the VM died.
Thanks, though, for the advice on looking at the manpage/documentation to change some values.
Yip, seems I am right. Tested it twice first. It froze the VM for too long and I had to stop and start it to get it back up.
Had to disable it by doing:
systemctl stop qemu-guest-agent
systemctl disable qemu-guest-agent
and re-ran the backup, and it never froze once. So all is good and stable again now. We cannot...
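An alternative to disabling the service inside the guest (just another option, not what we did) is to turn the agent off on the Proxmox side for that VM, so the backup never issues the fsfreeze call in the first place:

qm set <VMID> --agent 0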
When backing up a server, we found that the server freezes just as this is invoked, and the load goes from an average of 0.8 to around 700+:
Nov 30 12:18:01 cp-6 qemu-ga: info: guest-ping called
Nov 30 12:18:01 cp-6 qemu-ga: info: guest-fsfreeze called
Nov 30 12:18:01 cp-6 qemu-ga: info: executing fsfreeze hook...
It would be nice to find abusers if you are offering VPS services to customers.
But until it is implemented, or considered as an option to implement, note that you can use something like:
Virtualizor
ModulesGarden Proxmox Addon (if using WHMCS)
We are using Virtualizor currently and are not sure how they...
I think I may have spotted the error. Well, in our case, if we migrate or move VMs with VirtIO SCSI disks attached, we get errors like the below:
create full clone of drive scsi1 (nfs-server:114/vm-114-disk-1.raw)
WARNING: You have not turned on protection against thin pools running out of space...
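For the thin pool warning itself, the settings it refers to live in /etc/lvm/lvm.conf under the activation section; something along these lines (the values are just a commonly used combination, pick your own) turns on automatic extension before the pool fills up:

activation {
    # extend a thin pool by 20% once it reaches 70% usage
    thin_pool_autoextend_threshold = 70
    thin_pool_autoextend_percent = 20
}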