Yes, I had the same problem with ovftool export. For me, using sshfs + eatmydata improved the speed a lot. You just need to enable SSH on the ESXi host and then:
$ ssh root@<proxmoxnode>
# sshfs root@<esxinode>:/vmfs/volumes /mnt/tmp
# cd /mnt/tmp/<datastore>/<vmname>/
# eatmydata ovftool...
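For completeness, the full export then runs against the .vmx in that directory. Roughly like this (the names and the output path are placeholders, and the exact flags depend on your ovftool version):
# eatmydata ovftool <vmname>.vmx /root/export/<vmname>.ova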
I've disabled the firewall at the interface level: this reduced the lag. That's a bit weird, because the firewall was already disabled at the datacenter and node level.
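In case someone wants to reproduce it: the per-interface firewall flag sits on the netX line of the VM config (MAC address and bridge here are just placeholders):
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,firewall=0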
We've upgraded PVE from 7 to 8 and also the kernel from 6.2 to 6.5.11-6-pve: since these changes, the CPU usage of the kvm process is much higher than before. Before the upgrade the idle process usage was ~2% and never above ~50%; now after the upgrade the idle CPU usage is at ~20% and it goes...
This thread is about lockdown and it works with kernel 6.5:
# cat /proc/version /sys/kernel/security/lsm
Linux version 6.5.11-6-pve (build@proxmox) (gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC PMX 6.5.11-6 (2023-11-29T08:32Z)...
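If lockdown is compiled in, the current mode can also be read (and one-way escalated) via securityfs; with no lockdown= boot parameter the expected output looks like this:
# cat /sys/kernel/security/lockdown
[none] integrity confidentiality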
I wanted to calculate the fingerprint of the encryption key in Python; this is what works for me:
import base64
import hashlib
import hmac
import json
import sys
def get_fingerprint(encryptionkey):
    b = base64.b64decode(encryptionkey)
    id_key = hashlib.pbkdf2_hmac('sha256', b...
I'm pretty sure the problem depends on the gateway/switch. As the gateway is configured by my provider, I don't know what's configured there.
https://pve.proxmox.com/wiki/Multicast_notes#Disabling_IGMP_Snooping_.28not_recommended.29 :
"Snooping should be enabled on either the router / switch or on the...
dhcp-v6
No.
Firewall is disabled.
No need to test, as the firewall is disabled. Disabling multicast_snooping clearly fixed the problem: before, the connection was lost every 3-5h.
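For reference, this is roughly how it gets disabled (vmbr0 is a placeholder for your bridge; the persistent variant in /etc/network/interfaces is what the wiki page linked above describes):
# echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping
and, to survive reboots, under the bridge stanza in /etc/network/interfaces:
post-up ( echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping )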
It seems the Linux kernel shipped with PVE doesn't have lockdown support:
$ cat /sys/kernel/security/lsm
capability,yama,apparmor
(output of pve-kernel-6.2.9-1-pve)
Is there a reason why it's disabled at compile time?
I couldn't find any info about it; the default Ubuntu kernel seems to...
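A quick way to check the compile-time side is the kernel config; on a kernel with lockdown built in I'd expect something like:
$ grep LOCKDOWN /boot/config-$(uname -r)
CONFIG_SECURITY_LOCKDOWN_LSM=y
CONFIG_SECURITY_LOCKDOWN_LSM_EARLY=y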
Sadly no. I downgraded all nodes to "Linux version 5.15.30-2-pve": still the same hang after migrating the VMs between the hosts.
I guess the next step would be to downgrade pve-qemu-kvm?!
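If someone wants to try that too, something like this should work (the version string is a placeholder; the older package must still be available in the configured repos or in the local apt cache):
# apt list -a pve-qemu-kvm
# apt install pve-qemu-kvm=<version>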
Possibly the same problem:
https://bugzilla.proxmox.com/show_bug.cgi?id=4073 or https://forum.proxmox.com/threads/vm-hang-after-live-migration-since-upgrade-to-pve-7-2-with-bug-soft-lockup.109754/
I don't understand what you mean. Our physical host runs flawlessly; only the VM hangs for us. Sounds like a different problem.
If so, please create a new thread.
Since the upgrade to PVE 7.2, Linux VMs hang after live migration, see the screenshot. The VM is pingable, but SSH login doesn't work any more.
An example VM config:
agent: 1,fstrim_cloned_disks=1
balloon: 16000
boot: order=scsi0
cores: 14
machine: q35
memory: 12000
name: serverhang
net0...