I just posted this into a new kernel freeze thread... we had sporadic kernel freezes for a long time (happening every few days to every few weeks) on many servers going back to Proxmox 6 and then 7, using several different kernel versions. We tried many things, but the one thing that ended all...
It may be totally unrelated, but we had sporadic kernel freezes for a long time (happening every few days to every few weeks) on many servers since Proxmox 6 and then 7, using several different kernel versions. We tried many things, but the one thing that ended all freezes was disabling X2APIC...
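A minimal sketch of how to check and disable this, assuming a GRUB-booted host and a kernel that still supports the nox2apic parameter (disabling x2APIC in the BIOS/UEFI works as well):
# Check whether the kernel is currently running with x2APIC enabled:
dmesg | grep -i x2apic
# One way to disable it: add "nox2apic" to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub, then apply and reboot:
update-grub
reboot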
Those are old mitigation tips from when we did not know the real cause of the problem, but they do not hurt; the ZFS settings in particular are very useful.
The three most important steps to solve this almost perfectly are:
1. Using the VirtIO SCSI single disk controller for your VM disks (see the command sketch after this list)
2. Enabling...
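A minimal sketch of step 1 on an existing VM, with VM ID 100 as a placeholder:
# Switch the VM to the VirtIO SCSI single controller:
qm set 100 --scsihw virtio-scsi-single
# Verify the change:
qm config 100 | grep scsihw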
This is not entirely true. Since newer processors are fully compatible with older CPU models, you can easily and safely migrate guests that use an older CPU model set in KVM. For example, if you have Ivy Bridge, Haswell and Skylake servers, you should set your KVM guests to the lowest common denominator...
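For illustration, a sketch of setting the lowest common denominator from that example (VM ID 100 is a placeholder):
# Expose only Ivy Bridge features to the guest so it can migrate to any of the newer hosts:
qm set 100 --cpu IvyBridge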
So if I have a NUMA server with 2 sockets, and I want a VM to run on (and use the memory of) a single socket (NUMA node) only, then should I disable the NUMA option, and the hypervisor will make sure it stays on one NUMA node?
The wiki says the exact opposite...
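To make the question concrete, a small sketch of what I am looking at (VM ID 100 is a placeholder, numactl comes from the numactl package):
# Show the host's NUMA nodes and the memory attached to each:
numactl --hardware
# Check whether the NUMA option is currently enabled for the VM:
qm config 100 | grep -i numa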
Upon upgrading (reinstalling one-by-one) our cluster to PVE 7.0, I ran into the following problem: restoring an LXC container shows an error.
recovering backed-up configuration from 'NFS:backup/vzdump-lxc-321-2021_11_12-04_38_58.tar.lzo'
restoring...
I have recently installed four NVMe SSDs in a Proxmox 6 server as a RAIDZ array, only to discover that according to the web interface two of the drives show huge wearout after only a few weeks of use:
Since these are among the highest endurance consumer SSDs with 1665 TBW warranty for a...
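The wearout shown in the GUI comes from the drives' SMART data, so it can be cross-checked directly; a sketch assuming /dev/nvme0 as a placeholder device (smartmontools and nvme-cli provide the tools):
# Full SMART/health output for the drive:
smartctl -a /dev/nvme0
# Or just the NVMe health log; "percentage_used" is the wear indicator:
nvme smart-log /dev/nvme0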
This looks related to the CPU freeze issue that many of us have been experiencing when backing up or restoring VMs, and that has been unresolved for many years:
https://forum.proxmox.com/threads/kvm-guests-freeze-hung-tasks-during-backup-restore-migrate.34362
Here is my newly reopened bugreport...
Good point, @mmenaz! Unfortunately, none of the new features solved this problem completely, and his observation about this being a load issue is superficial in light of the facts.
I have reopened the bugreport and commented with our latest experience. I encourage everyone to do the same.
Read my previous posts... this is a years-old bug in the kernel scheduler or KVM (or both), and nobody cares about solving it. The Proxmox devs killed my bugreport, and the KVM devs do not even acknowledge it.
What happens is that during heavy IO load on the host (disk or network), KVM...
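When it happens, the typical symptom is hung-task warnings in the kernel log of the host and/or the guests; a quick, purely illustrative way to check:
# Look for hung-task warnings around the time of the freeze:
dmesg -T | grep -i "blocked for more than"
journalctl -k | grep -i hung_task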
This looks exactly like the problem that I (and many others) reported for years in this thread and others. The problem is most likely a Linux kernel / KVM issue.
What seems to happen is that when the local disks are busy due to a restore operation, probably the dirty page IO of the host system is...
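One commonly suggested mitigation (not a fix) is to limit how much dirty page cache the host can accumulate before it is forced to flush; the values below are only an example and should be tuned per host:
# Apply immediately:
sysctl -w vm.dirty_background_ratio=2
sysctl -w vm.dirty_ratio=5
# Make it persistent across reboots:
cat <<'EOF' > /etc/sysctl.d/90-dirty.conf
vm.dirty_background_ratio = 2
vm.dirty_ratio = 5
EOF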
Do you boot from ZFS on your server that reboots without any warning?
Also, what is your software+hardware configuration? (processor, memory, disks, filesystem, swap, etc.)
I have tried to install Ubuntu 18.04 LTS Server in a KVM machine on a recently updated Proxmox 5.4 host, but after entering the IPv4 address manually and hitting "Save", the installer instantly restarts. I tried different hardware configurations (E1000 vs. VirtIO-net, disconnecting the adapter...
When creating a regular (RBD) Ceph pool, there are options in both the GUI and in pveceph to determine the size (replication count) and the min. size (minimum number of online replicas needed to serve IO) of the pool. However, when creating a CephFS pool, neither the GUI nor pveceph provides an option to create one with a...
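As a workaround, the pools that CephFS uses can be adjusted after creation with the ceph CLI; the pool names below are the Proxmox defaults and may differ on other setups:
# Set replication on the CephFS data and metadata pools:
ceph osd pool set cephfs_data size 3
ceph osd pool set cephfs_data min_size 2
ceph osd pool set cephfs_metadata size 3
ceph osd pool set cephfs_metadata min_size 2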
Let's say my username is "pveuser@pve". If I query ACCESS/USERS, I get all the user data I'm allowed to see, including my own, but ACCESS/USERS/PVEUSER@PVE gives a 403 Forbidden error.
The problem is that I can't GET (or POST) ACCESS/USERS/PVEUSER@PVE to read (or write) my own data, unless I have the...
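The behaviour is easy to reproduce from the shell with pvesh (using my username from above):
# Listing users works with the permissions I already have:
pvesh get /access/users
# Reading my own entry directly fails with 403 Forbidden:
pvesh get /access/users/pveuser@pve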
Hi Morbious, can you please post some more info?
- host HW configuration (CPU and memory type and amount, how many NVMe disks)
- host SW configuration (kernel versions on both machines, storage type and filesystem used), how many VMs
- VM configs of the VMs you get errors on (cpu number, memory...
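A quick sketch of the commands that would collect most of this (replace <vmid> with an affected VM's ID):
pveversion -v              # kernel and package versions
lscpu && free -h           # CPU model and memory
lsblk -o NAME,SIZE,MODEL   # disks
qm config <vmid>           # configuration of an affected VM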
Unfortunately the Proxmox developers @tom @fabian @dcsapak never acknowledged this as a problem or bug on its own; rather, they try to blame hardware or software configuration, and they even killed the bugreport I filed that started to accumulate many user reports...
It looks like the problem got solved after I restarted all the Proxmox services:
service corosync restart
/etc/init.d/pve-cluster restart
/etc/init.d/pvedaemon restart
/etc/init.d/pvestatd restart
/etc/init.d/pveproxy restart
After another apt-get update, apt-get dist-upgrade works again...