I have found this thread with a similar problem:
https://forum.proxmox.com/threads/cant-update-from-4-4-to-5-1-due-to-libpve-common-perl.43458/page-2
However, the command @fabian suggested does not show any held packages:
# apt-mark showhold
#
Anybody seen this?
I am upgrading our cluster, node by node from PVE 4.4 to 5 following the wiki:
https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0
Several nodes upgraded perfectly; however, on one node I get the following errors:
# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree...
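For what it's worth, besides apt-mark, holds can also show up through dpkg, and the candidate version of the package from the linked thread can be checked directly. These are generic apt/dpkg commands, not taken from the original thread:
# dpkg --get-selections | grep hold
# apt-cache policy libpve-common-perl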
Yes, this problem is still massively affecting many Proxmox users, and unfortunately no one has any idea what could be causing it. In fact, the Proxmox developers haven't even acknowledged that this bug exists (my bug report is still in NEW status), so I wouldn't hold my breath that this gets...
The issue described in this thread (KVM processes and kernel freezing while the host is doing heavy disk or network IO) is somewhat mitigated if vm.dirty_ratio is set to a minimum, such as 2 or 1. Any value above that causes the error to appear very frequently.
Limiting the ZFS ARC size has...
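A minimal sketch of the vm.dirty_ratio tweak mentioned above, using the value 2 from the post; the file name under /etc/sysctl.d/ is arbitrary and just an example:
# sysctl -w vm.dirty_ratio=2
# echo "vm.dirty_ratio = 2" >> /etc/sysctl.d/99-dirty.conf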
Are you on ZFS? Is your system low on free memory? If your answer to both questions is yes, then your spontaneous reboot can be prevented by the following two steps:
1. ZFS ARC size
We aggressively limit the ZFS ARC size, as it has led to several spontaneous reboots in the past when left...
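For reference, limiting the ARC is typically done via a ZFS module option; a sketch assuming an 8 GiB cap (the value is a placeholder, pick one that fits your RAM, and the initramfs update matters on ZFS-root systems):
# echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
# update-initramfs -u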
This is very interesting news, would love to test. Can you point me to the particular patch about this issue? Is it in ZFS or in PVE?
So I suppose this is in 5.x pvetest? Also, is there going to be a systemwide, general maximum bandwidth setting for these operations, or can it only be set via the GUI...
Are there any news on limiting the bandwidth of restore and migrate operations?
Due to the KVM CPU freeze bug in the kernel I reported (and posted about several times), heavy disk writes not only slow down other VMs' disk IO, but their network IO as well due to the guest CPU freezing, rendering...
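For anyone finding this later: newer PVE releases did add a datacenter-wide bandwidth limit option; a sketch of what that looks like in /etc/pve/datacenter.cfg (the values are placeholders, in KiB/s):
bwlimit: restore=102400,migration=102400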
According to the articles below, ZFS on Linux 0.7.7 has a disappearing-file bug, and it is not recommended for use in a production environment:
https://www.servethehome.com/zfs-on-linux-0-7-7-disappearing-file-bug/
https://news.ycombinator.com/item?id=16797932
My test Proxmox box that's...
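To check which ZFS on Linux version a node is actually running (generic commands, not taken from the post above):
# modinfo zfs | grep -w version
# dpkg -l | grep zfsutils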
This keeps happening every few days on a single-CPU Sandy Bridge box running 3 Windows VMs on Proxmox 4.4. Can someone help me understand what's happening?
Mar 28 19:42:55 proxmox6 kernel: [133407.284601] general protection fault: 0000 [#1] SMP
Mar 28 19:42:55 proxmox6 kernel: [133407.284628]...
I was looking for advice on a technical level... how would you do that if you had to connect 5-6 nodes with dual-port 10GbE interfaces without a switch?
No, it cannot. If you actually read the text you linked in your post carefully (instead of parroting misinformation), you would know it was only possible with an outdated Debian kernel. So in the case of currently supported Proxmox VE kernels, no side channel attack can read the host...
We have installed the patch on a few servers last night (dual socket Westmere Xeons, single and dual socket Sandy Bridge and Ivy Bridge Xeons), all servers booted without any problems. There are no obvious performance regressions, all LXC containers and KVM guests operate within the same CPU...
The problem was always less serious with backups, especially if you applied the tweaks I posted above... some VMs were more susceptible (Debian 7), some were not at all (Ubuntu 14/15/16.04, Debian 9). But the real problem was always with restores and migrations: try to restore (or migrate) some...
My thoughts exactly!
We have indeed been experiencing this issue since PVE 3.x, when our VM disks were stored on ext4+LVM. We are currently running Proxmox 4 with ZFS local storage, but some others have experienced it over NFS, so the issue is most likely unrelated to the storage backend, rather...
There is absolutely no problem with swap on ZFS if you modify these settings. The most important one is disabling ARC caching of the swap volume, but the other tweaks are important as well (and endorsed by the ZFS on Linux community):
https://github.com/zfsonlinux/zfs/wiki/FAQ
Execute these commands...
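The command list is cut off above; going by the linked ZFS on Linux FAQ, the swap-zvol settings meant are along these lines. This is a sketch, not the author's exact list, and rpool/swap is just the Proxmox default dataset name, so adjust it to your pool:
# zfs set primarycache=metadata rpool/swap
# zfs set secondarycache=none rpool/swap
# zfs set sync=always rpool/swap
# zfs set logbias=throughput rpool/swap
# zfs set compression=zle rpool/swap
# zfs set com.sun:auto-snapshot=false rpool/swap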
What is your reason for virtualization if you only run one VM? Easy backup? Portability to another host? Because IO performance will be MUCH lower compared to a bare-metal server.
- Of your 10-core CPU I would give 8 cores to the VM and leave 2 for the host, for stable performance even if...
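As a concrete example of that core split (the VM ID 100 is just a placeholder):
# qm set 100 --cores 8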