Search results

  1. [SOLVED] High IO wait during backups after upgrading to Proxmox 7

    We recently completed the upgrade to Proxmox 7. The issue exists on two different kernels, pveversion: pve-manager/7.2-7/d0dd0e85 (running kernel: 5.15.39-1-pve) and pve-manager/7.2-7/d0dd0e85 (running kernel: 5.15.35-2-pve). Since the upgrade, IO wait has increased dramatically during vzdump...
  2. Network issues with 5.4.114-1 kernel and openvswitch LAG

    We use openvswitch and tagged VLANs. When rebooting into kernel 5.4.114-1 we started having network issues: SSH connections would break and live migrations failed. Eventually networking stopped entirely. Rebooted with kernel 5.4.106-1 and everything works fine again. Intel 10G network card. Not...
  3. dropped over-mtu packet: 1501 > 1500

    After recently upgrading to the latest version we started seeing these errors in the kernel log on a few nodes. We are using openvswitch; the only thing I found using Google that might explain the problem is this: https://lkml.org/lkml/2020/8/10/522 Before the update we were running kernel...
  4. micron p420m SSD IO stall on Proxmox 6

    I have a p420m SSD that uses the mtip32xx driver in the kernel. This drive worked perfectly fine in Proxmox 5.x; after upgrading to 6.x, write IO to the disk stalls frequently and can only be recovered with a reboot. We first experienced the problem within hours of upgrading to 6.x. The...
  5. 5.x to 6.x Hybrid upgrade possible?

    During the upgrade process I have some nodes that I would like to reinstall rather than do a dist-upgrade to 6.x. Is it possible to do this: upgrade Corosync to the new version in the 5.x cluster, upgrade some but not all 5.x nodes to 6.x using dist-upgrade, delete a 5.x node from the cluster, do a fresh...
  6. BUG: soft lockup

    Hello again everyone, it's been too long since my last post here. I have had one server randomly locking up for over a month now, and now a 2nd server is also having this problem. Unfortunately I've not captured all of the kernel messages that would help diagnose this, but I have a couple of screenshots from...
  7. Increase performance with sched_autogroup_enabled=0

    Changing sched_autogroup_enabled from 1 to 0 makes a HUGE difference in performance on busy Proxmox hosts. It also helps to modify sched_migration_cost_ns. I've tested this on Proxmox 4.x and 5.x: echo 5000000 > /proc/sys/kernel/sched_migration_cost_ns echo 0 >... [see the sysctl sketch after this list]
  8. General Protection Fault with ZFS

    This is already reported upstream by someone else; I added my info there too. https://github.com/zfsonlinux/zfs/issues/6781 I set up DRBD on top of a ZVOL. When making heavy sequential writes on the primary, the secondary node throws a General Protection Fault error from zfs. The IO was from a...
  9. [SOLVED] Watchdog fence for physical nodes

    In Proxmox 3.x I set up fencing using APC PDUs. I did not have any HA VMs set up, but if one of the Proxmox nodes locked up or crashed the node would be fenced. Is it possible to replicate this behavior in 4.x and 5.x? I'm fine with the watchdog as the method of fencing; I just don't see a way to make...
  10. Live migration with local storage failing

    Is online live migration with local storage supposed to work, or are there still some known bugs to work out? Command to migrate: qm migrate 102 vm1 -online -with-local-disks -migration_type insecure. This results in an error at the end of migration: drive-virtio0: transferred: 34361704448 bytes...
  11. Bring back DRBD8 kernel module

    I have 38 DRBD volumes totaling around 50TB of usable storage across sixteen production nodes. We have held off upgrading to Proxmox 4.x in hopes that DRBD9 and its storage plugin would become stable, and after a year I'm still waiting. I need to upgrade to 4.x, but the non-production-ready DRBD9 makes...
  12. Storage model enhancement

    I was reading a thread where @wbumiller mentioned that using O_DIRECT with mdraid can result in inconsistent arrays. https://forum.proxmox.com/threads/proxmox-4-4-virtio_scsi-regression.31471/page-2#post-159574 On DRBD we have the same issue, where some cache types can result in out-of-sync...
  13. KVM crash during vzdump of virtio-scsi using CEPH

    When this problem happens the KVM process dies. I never had this problem until I changed from virtio to virtio-scsi-single; it also happened with virtio-scsi. vm.conf: args: -D /var/log/pve/105.log boot: cd bootdisk: ide0 cores: 4 ide0: ceph_rbd:vm-105-disk-1,cache=writeback,size=512M ide2...
  14. Proxmox is missing a 'Reboot' function

    It would be convenient if I could select "reboot" in the Proxmox interface/API and Proxmox would issue a shutdown and then a start of the guest. When QEMU updates come out it's necessary to shut down the VM and start it back up so it can run under the updated code. (I suppose one could live migrate...
  15. drbdmanage needs updating

    Proxmox is still providing the 3-month-old drbdmanage 0.91 when 0.94 was released just a couple of weeks ago. With the large number of bugs in drbdmanage, updating more frequently would be helpful for the few of us trying to use it.
  16. DRBD9 live migration problems

    I've set up a 3-node DRBD cluster with server names vm1, vm2 and vm3. I created DRBD storage with replication set to 2: drbd: drbd2 redundancy 2 content images,rootdir. I created a DRBD disk for VM 110. The disk is created and is using servers VM1 and VM2... [the storage definition is shown reformatted after this list]
  17. Best Practice for NUMA?

    I have a few dual-socket servers and want to know how best to configure VMs. Should VMs always have NUMA enabled and have CPU sockets set to the number of physical sockets? Some VMs only need a single socket and a single core; should these have NUMA enabled too? Some VMs only need two cores; is... [see the qm sketch after this list]
  18. When will 3.x become EOL?

    Debian wheezy will be supported under LTS from Feb 2016 to May 2018. https://wiki.debian.org/LTS Will Proxmox 3.x remain supported until May 2018? I myself am not ready to jump into DRBD 9. Others are not ready to leave OpenVZ. We need to know how much time we have so we can prepare for the...
  19. CEPH read performance

    7 mechanical disks in each node using xfs, 3 nodes, so 21 OSDs total. I've started moving journals to SSD, which only helps write performance. The CEPH nodes are still running Proxmox 3.x; I have client nodes running 4.x and 3.x, and both have the same issue. Using 10G IPoIB, separate public/private...
  20. Can we use lvm cache in Proxmox 4.x?

    Has anyone tried setting up lvm cache? It's a fairly new cache tier based on dm-cache. Theoretically we should be able to add an SSD cache to any logical volume that Proxmox has created for VM disks. It supports writethrough and writeback cache modes. With writethrough no data is lost if the cache... [see the lvmcache sketch after this list]
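
The scheduler tuning described in result 7 can be applied as sketched below. The two values are the ones quoted in the post; persisting them via /etc/sysctl.d/ is my assumption, not something stated in the snippet.

    # Values quoted in the thread, applied at runtime
    echo 0 > /proc/sys/kernel/sched_autogroup_enabled
    echo 5000000 > /proc/sys/kernel/sched_migration_cost_ns

    # Assumed way to keep the settings across reboots (not from the post)
    echo 'kernel.sched_autogroup_enabled = 0' > /etc/sysctl.d/99-sched-tuning.conf
    echo 'kernel.sched_migration_cost_ns = 5000000' >> /etc/sysctl.d/99-sched-tuning.conf
    sysctl --system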
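
The DRBD storage definition quoted in result 16 looks like a flattened storage stanza; reformatted, it would presumably read as follows (that it lives in /etc/pve/storage.cfg is my assumption):

    drbd: drbd2
        redundancy 2
        content images,rootdir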
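
For the NUMA question in result 17, enabling NUMA on an existing VM can be sketched with qm as below. VMID 110, two sockets, and four cores are hypothetical values chosen for illustration; whether this layout is actually best practice is exactly what the thread asks.

    # Hypothetical VM 110 on a dual-socket host: enable NUMA and match the physical socket count
    qm set 110 --numa 1 --sockets 2 --cores 4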
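
For the lvm cache question in result 20, here is a minimal sketch of attaching an SSD cache to an existing logical volume. The volume group name pve, the LV name vm-100-disk-1, the device /dev/sdb, and the sizes are all assumptions for illustration, not values from the thread.

    # Add the SSD to the existing volume group (device name is an assumption)
    pvcreate /dev/sdb
    vgextend pve /dev/sdb

    # Create cache data and metadata LVs on the SSD, then combine them into a cache pool
    lvcreate -L 90G -n cache0 pve /dev/sdb
    lvcreate -L 1G -n cache0meta pve /dev/sdb
    lvconvert --type cache-pool --poolmetadata pve/cache0meta pve/cache0

    # Attach the cache pool to the VM disk LV; writeback is the other supported mode
    lvconvert --type cache --cachepool pve/cache0 --cachemode writethrough pve/vm-100-disk-1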
