We recently completed the upgrade to Proxmox 7.
The issue exists on two different kernels
pveversion:
pve-manager/7.2-7/d0dd0e85 (running kernel: 5.15.39-1-pve)
and
pve-manager/7.2-7/d0dd0e85 (running kernel: 5.15.35-2-pve)
Since the upgrade, IO wait has increased dramatically during vzdump...
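As a stopgap while we investigate, I assume the bwlimit option in /etc/vzdump.conf would at least cap the backup I/O (the value below is just a placeholder, in KiB/s):
# /etc/vzdump.conf
bwlimit: 51200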
We use openvswitch and tagged vlans.
When rebooting into kernel 5.4.114-1 we started having network issues.
SSH connections would break and live migrations would fail.
Eventually networking stopped entirely.
Rebooted with kernel 5.4.106-1 and everything works fine again.
Intel 10G network card
Not...
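For context, the relevant part of our /etc/network/interfaces looks roughly like this (interface names, VLAN tag, and addresses below are placeholders, not our real values):
auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1 vlan100

allow-vmbr0 eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

allow-vmbr0 vlan100
iface vlan100 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=100
    address 192.168.100.10
    netmask 255.255.255.0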
After recently upgrading to the latest version we started seeing these kernel errors on a few nodes.
We are using openvswitch; the only thing I found via Google that might explain the problem is this:
https://lkml.org/lkml/2020/8/10/522
Before the update we were running kernel...
I have a p420m SSD that uses the mtip32xx driver in the kernel.
This drive worked perfectly fine in Proxmox 5.x; after upgrading to 6.x, write IO to the disk stalls frequently and can only be recovered with a reboot.
We first experienced the problem within hours of upgrading to 6.x
The...
During the upgrade process I have some nodes that I would like to reinstall rather than do a dist-upgrade to 6.x.
Is it possible to do the following?
Upgrade Corosync to new version in 5.x cluster
Upgrade some but not all 5.x to 6.x using dist-upgrade
Delete a 5.x node from the cluster (see the command sketch after this list)
Do fresh...
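For the node-removal step I assume the standard cluster command still applies (the node name below is only an example):
pvecm delnode node5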
Hello again everyone, been too long since my last post here.
I have had one server randomly locking up for over a month now, and now a second server is having the same problem.
Unfortunately I've not captured all of the kernel messages that would help diagnose this, but I have a couple of screenshots from...
Changing sched_autogroup_enabled from 1 to 0 makes a HUGE difference in performance on busy Proxmox hosts
It also helps to modify sched_migration_cost_ns.
I've tested this on Proxmox 4.x and 5.x:
echo 5000000 > /proc/sys/kernel/sched_migration_cost_ns
echo 0 >...
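To make these survive a reboot, I assume the usual sysctl drop-in works (the file name below is just an example; apply with sysctl --system):
# /etc/sysctl.d/99-sched-tuning.conf
kernel.sched_migration_cost_ns = 5000000
kernel.sched_autogroup_enabled = 0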
This is already reported upstream by someone else; I added my info there too.
https://github.com/zfsonlinux/zfs/issues/6781
I setup DRBD on top of a ZVOL.
When making heavy sequential writes on the primary, the secondary node throws a General Protection Fault error from zfs.
The IO was from a...
In Proxmox 3.x I set up fencing using APC PDUs.
I did not have any HA VMs set up, but if one of the Proxmox nodes locked up or crashed, the node would be fenced.
Is it possible to replicate this behavior in 4.x and 5.x?
I'm fine with the watchdog as the method of fencing, I just don't see a way to make...
Is online live migration with local storage supposed to work, or are there still some known bugs to work out?
Command to migrate:
qm migrate 102 vm1 -online -with-local-disks -migration_type insecure
Results in error at end of migration:
drive-virtio0: transferred: 34361704448 bytes...
I have 38 DRBD volumes totaling around 50TB of usable storage across sixteen production nodes.
We have held off upgrading to Proxmox 4.x in hopes that DRBD9 and its storage plugin would become stable, and after a year I'm still waiting.
I need to upgrade to 4.x, but the non-production-ready DRBD9 makes...
I was reading a thread where @wbumiller mentioned that using O_DIRECT with mdraid can result in inconsistent arrays. https://forum.proxmox.com/threads/proxmox-4-4-virtio_scsi-regression.31471/page-2#post-159574
On DRBD we have the same issue where some cache types can result in out of sync...
When this problem happens the KVM process dies.
I never had this problem until I changed from virtio to virtio-scsi-single; it also happened with virtio-scsi.
vm.conf:
args: -D /var/log/pve/105.log
boot: cd
bootdisk: ide0
cores: 4
ide0: ceph_rbd:vm-105-disk-1,cache=writeback,size=512M
ide2...
It would be convenient if I could select "reboot" in Proxmox interface/API and Proxmox would issue a shutdown and then a start of the guest.
When QEMU updates come out it's necessary to shut down the VM and start it back up so it can run under the updated code. (I suppose one could live migrate...
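A rough sketch of what I do manually from the CLI today (the VMID below is just an example):
qm shutdown 105 && qm start 105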
Proxmox is still providing the three-month-old drbdmanage 0.91 when 0.94 was released just a couple of weeks ago.
With the large number of bugs in drbdmanage, updating more frequently would be helpful for the few of us trying to use it.
I've set up a 3-node DRBD cluster with server names vm1, vm2, and vm3.
I created DRBD storage with replication set to 2:
drbd: drbd2
redundancy 2
content images,rootdir
I created a DRBD disk for VM 110; the disk is created and is using servers vm1 and vm2...
I have a few dual socket servers and want to know how best to configure VMs.
Should VMs always have NUMA enabled and have CPU sockets set to the number of physical sockets?
Some VMs only need a single socket and a single core; should these have NUMA enabled too?
Some VMs only need two cores, is...
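For a VM that should span both sockets, I assume something like this is the right shape (VMID, socket, and core counts below are placeholders):
qm set 110 -numa 1 -sockets 2 -cores 8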
Debian wheezy will be supported under LTS from Feb 2016 to May 2018.
https://wiki.debian.org/LTS
Will Proxmox 3.x remain supported until May 2018?
Myself, I'm not ready to jump into DRBD 9.
Others are not ready to leave OpenVZ.
We need to know how much time we have so we can prepare for the...
7 mechanical disks in each node using XFS
3 nodes, so 21 OSDs total
I've started moving journals to SSD, which is only helping write performance.
Ceph nodes still running Proxmox 3.x
I have client nodes running 4.x and 3.x, both have the same issue.
Using 10G IPoIB, separate public/private...
Has anyone tried setting up LVM cache? It's a fairly new cache tier based on dm-cache.
Theoretically we should be able to add an SSD cache to any logical volume that Proxmox has created for VM disks.
It supports writethrough and writeback cache modes.
With writethrough no data is lost if the cache...
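A minimal sketch of what I have in mind, assuming the stock LVM tooling and that the VM volume lives in the pve VG (the SSD device, sizes, and LV names below are placeholders):
# create cache data and metadata LVs on the SSD
lvcreate -L 100G -n cache0 pve /dev/sdb
lvcreate -L 1G -n cache0_meta pve /dev/sdb
# combine them into a cache pool and attach it to the VM's logical volume
lvconvert --type cache-pool --poolmetadata pve/cache0_meta pve/cache0
lvconvert --type cache --cachepool pve/cache0 --cachemode writethrough pve/vm-100-disk-1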