Search results

  1. PVE 4.4 to 5.x upgrade problem

    I found this thread with a similar problem: https://forum.proxmox.com/threads/cant-update-from-4-4-to-5-1-due-to-libpve-common-perl.43458/page-2 However, the command @fabian suggested does not show any held packages: # apt-mark showhold # Has anybody seen this?
  2. PVE 4.4 to 5.x upgrade problem

    I am upgrading our cluster, node by node from PVE 4.4 to 5 following the wiki: https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0 Several nodes upgraded perfectly, however on one node I get the following errors: # apt-get dist-upgrade Reading package lists... Done Building dependency tree...
  3. KVM guests freeze (hung tasks) during backup/restore/migrate

    Yes, this problem is still massively affecting many Proxmox users; unfortunately, no one has any idea what could be causing it. The Proxmox developers haven't even acknowledged so far that this bug exists (my bug report is still in NEW status), so I wouldn't hold my breath that this gets...
  4. Frequent CPU stalls in KVM guests during high IO on host

    The issue described in this thread (KVM processes and kernel freezing while host is doing heavy disk or network IO) is somewhat mitigated if the vm.dirty_ratio is set to the minimum, like 2 or 1. Any value above that causes the error to appear very frequently. Limiting the ZFS ARC size has...
  5. Proxmox: Server reboots randomly

    Are you on ZFS? Is your system low on free memory? If your answer to both questions is yes, then your spontaneous reboot can be prevented by the following two steps: 1. ZFS ARC size We aggressively limit the ZFS ARC size, as it has led to several spontaneous reboots in the past when left...
  6. Is it possible to throttle backup and restore disk io?

    This is very interesting news, I would love to test it. Can you point me to the particular patch about this issue? Is it in ZFS or in PVE? So I suppose this is in 5.x pvetest? Also, is there going to be a system-wide, general maximum bandwidth setting for these operations, or can it only be set via the GUI...
  7. Is it possible to throttle backup and restore disk io?

    Is there any news on limiting the bandwidth of restore and migrate operations? Due to the KVM CPU freeze bug in the kernel that I reported (and posted about several times), heavy disk writes not only slow down other VMs' disk IO but their network IO as well, due to the guest CPU freezing, rendering...
  8. ZFS 0.7.7 may cause data loss, Proxmox 5.1 just updated to it!

    Sorry @tom , you are right of course, missed the discussion. Please delete the thread if possible.
  9. ZFS 0.7.7 may cause data loss, Proxmox 5.1 just updated to it!

    According to the articles below, ZFS on Linux 0.7.7 has a disappearing file bug, and it is not recommended to be installed in a production environment: https://www.servethehome.com/zfs-on-linux-0-7-7-disappearing-file-bug/ https://news.ycombinator.com/item?id=16797932 My test Proxmox box that's...
  10. general protection fault: 0000

    This keeps happening every few days on a single-CPU Sandy Bridge box running 3 Windows VMs on Proxmox 4.4. Can someone help me understand what's happening? Mar 28 19:42:55 proxmox6 kernel: [133407.284601] general protection fault: 0000 [#1] SMP Mar 28 19:42:55 proxmox6 kernel: [133407.284628]...
  11. 3-Node Cluster Setup with non-switched interfaces

    What about a routed (OSPF) ring topology? You theorized about connecting your servers in a ring fashion...
  12. 3-Node Cluster Setup with non-switched interfaces

    I was looking for advice on a technical level... how would you do that, if you had to connect 5-6 nodes with dual port 10gbe interfaces without a switch.
  13. 3-Node Cluster Setup with non-switched interfaces

    So how would you connect 5-6 or more nodes without a switch? Full mesh network will not work in that case...
  14. Meltdown and Spectre Linux Kernel fixes

    No, it can not. If you actually read the text carefully that you linked in your post (instead of parroting misinformation), then you would know it was only possible with an outdated Debian kernel. So in the case of currently supported Proxmox VE kernels, no side channel attack can read the host...
  15. Meltdown and Spectre Linux Kernel fixes

    We installed the patch on a few servers last night (dual-socket Westmere Xeons, single- and dual-socket Sandy Bridge and Ivy Bridge Xeons), and all servers booted without any problems. There are no obvious performance regressions; all LXC containers and KVM guests operate within the same CPU...
  16. VM blocked due to hung_task_timeout_secs

    The problem was always less serious with backups, especially if you applied the tweaks I posted above... some VMs were more susceptible (Debian 7), some were not at all (Ubuntu 14/15/16.04, Debian 9). But the real problem was always with restores and migrations: try to restore (or migrate) some...
  17. ceph : [client] rbd cache = true override qemu cache=none|writeback

    Is this still true? rbd_cache only active if cache=none is set for the disk of the KVM guest?
  18. > 3000 mSec Ping and packet drops with VirtIO under load

    My thoughts exactly! We are indeed experiencing this issue since PVE 3.x, when our VM disks were stored on ext4+LVM. We are currently running Proxmox 4 with ZFS local storage, but some others have experienced it over NFS, so the issue is most likely unrelated to the storage backend, rather...
  19. proxmox 5 - change from ZFS RPOOL/SWAP to standard linux swap partition

    There is absolutely no problem with swap on ZFS if you modify these settings. Most important is disabling ARC caching of the swap volume, but the other tweaks are important as well (and endorsed by the ZFS on Linux community): https://github.com/zfsonlinux/zfs/wiki/FAQ Execute these commands...
  20. Recommended Settings for a single VM

    What is your reason for virtualization if you only run one VM? Easy backup? Portability to another host? Because IO performance will be MUCH lower compared to a bare metal server. - Of your 10-core CPU I would give 8 cores to the VM, and leave 2 for the host for stable performance even if...
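Result 4 above recommends dropping vm.dirty_ratio to 1 or 2 to mitigate guest CPU stalls during heavy host IO. A minimal sketch of applying that with sysctl; the vm.dirty_background_ratio companion value and the config file name are my assumptions, not taken from the post:

```shell
# Apply at runtime (lost on reboot):
sysctl -w vm.dirty_ratio=2
sysctl -w vm.dirty_background_ratio=1   # assumed companion value, not from the post

# Persist across reboots via a sysctl.d drop-in (file name is arbitrary):
cat > /etc/sysctl.d/90-dirty-ratio.conf <<'EOF'
vm.dirty_ratio = 2
vm.dirty_background_ratio = 1
EOF
sysctl --system
```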
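Results 4 and 5 both mention aggressively limiting the ZFS ARC size. On Proxmox this is usually done through the zfs_arc_max module parameter; the 4 GiB cap below is only an illustrative assumption, sized per host in practice:

```shell
# Cap the ARC at 4 GiB (example value) via a persistent module option:
echo "options zfs zfs_arc_max=$((4 * 1024 * 1024 * 1024))" > /etc/modprobe.d/zfs.conf

# On a root-on-ZFS install, refresh the initramfs so the option applies at boot:
update-initramfs -u

# The limit can also be changed at runtime, without a reboot:
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
```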
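The actual commands in result 19 are truncated, but the ZFS on Linux FAQ it links recommends swap-zvol properties along these lines. The pool/dataset name rpool/swap and the 4G size are assumptions for illustration:

```shell
# Create a swap zvol with the FAQ's recommended properties; most importantly,
# primarycache=metadata keeps the ARC from caching swap data:
zfs create -V 4G -b "$(getconf PAGESIZE)" \
  -o compression=zle \
  -o logbias=throughput \
  -o sync=always \
  -o primarycache=metadata \
  -o secondarycache=none \
  -o com.sun:auto-snapshot=false \
  rpool/swap

# Format and enable it:
mkswap -f /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap
```

Adding the zvol to /etc/fstab (and removing any old swap entry) makes this persistent across reboots.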
