Search results

  1. [SOLVED] VMs freeze with 100% CPU

    Hi, for some time now VMs have been freezing at 100% CPU. Unfortunately, this has been happening more and more lately. Shutdown/console no longer works for these VMs - only powerdown and restart. I've searched the forum before but only found threads about this involving a special Intel CPU...
  2. Live migration to host with 5.15 kernel (pve7.2) can kill all VMs on this node

    Hi, the IO issue I had in https://forum.proxmox.com/threads/io-trouble-wit-zfs-mirror-on-pve7-2-5-15-39-1-pve-bug-soft-lockup-inside-vms.113373/ isn't fixed with the vm-disk parameter aio=thread… Today I live-migrated a VM to a node with two disks (25G + 75G) and after that, many (all?) VMs on...
  3. Question about Async IO

    Hi, due to the issue in https://forum.proxmox.com/threads/io-trouble-wit-zfs-mirror-on-pve7-2-5-15-39-1-pve-bug-soft-lockup-inside-vms.113373/ (two new AMD hosts are not really usable!) I would like to switch the Async IO default to native (hoping that helps). But the man page of datacenter.cfg doesn't...
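    Since datacenter.cfg documents no cluster-wide Async IO default, the usual route is to set it per disk. A minimal sketch, assuming a hypothetical VM 123 with a scsi0 disk on a storage named local-zfs:

    ```shell
    # Switch an existing disk of VM 123 to native async IO
    # (the volume name local-zfs:vm-123-disk-0 is an assumed example)
    qm set 123 --scsi0 local-zfs:vm-123-disk-0,aio=native

    # Check that the option landed in the VM config
    qm config 123 | grep scsi0
    ```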
  4. [SOLVED] IO-Trouble with zfs-mirror on pve7.2 (5.15.39-1-pve) - BUG: soft lockup inside VMs

    Hi, last week I moved VMs to a freshly installed new cluster node, but it doesn't run well. Most of the VMs have massive trouble doing IO. After migrating all VM disks to ceph (local-zfs before), the VMs work. The big question is: where is the issue? Host kernel? The system is a Dell R6515 with...
  5. [SOLVED] Ceph Update Nautilus -> Octopus on pve6.1 in preparation for the pve7 upgrade

    Hi, we would like to update our pve-cluster to pve7 and want to reduce the reboots to one per node. Ceph is already on 14.2.22. The howto says: "We assume that all nodes are on the latest Proxmox VE 6.3 (or higher)", which isn't the case on most nodes in our cluster. Is it still possible to upgrade ceph to...
  6. Intel quad port X710 10GbE SFP+ not visible after upgrade today (but working now)

    Hi, I had a strange effect today. On a new system (Supermicro AMD server) with an Intel quad port X710, the NIC didn't appear after today's updates (but the updates have nothing to do with kernel/firmware?!) Command line: apt dist-upgrade Install: libyaml-libyaml-perl:amd64...
  7. Experiences with online storage migration of huge volumes?

    Hi all, I want to migrate some big VM volumes from an internal raid to an FC raid (LVM) online. And big means really big - up to 10TB. The destination raid should be much faster than the source storage, especially for the biggest volume, which is on a slow iscsi raid (LVM) now. Has anybody...
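    Online volume migration is normally driven per disk with qm. A minimal sketch, assuming a hypothetical VM 100 whose big volume is attached as virtio0 and a target LVM storage named fc-lvm:

    ```shell
    # Move the disk to the FC-backed LVM storage while the VM keeps running;
    # --delete removes the source volume after a successful move
    qm move_disk 100 virtio0 fc-lvm --delete
    ```

    For multi-TB volumes the move can take hours; the VM stays online, but it is worth doing one disk at a time.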
  8. [SOLVED] Console access (Authentication failed) during cluster upgrade

    Hi, I have two clusters where I upgrade the nodes step by step. If I try to open a console for a VM on another cluster node (not the one I'm logged in to), I get an Authentication failed. The console works fine on nodes where pve5 is running. Logged in -> open console on pve5 -> pve5 =...
  9. [SOLVED] Howto add a VM to a pool with the API?

    Hi, I simply want to add a newly created VM to a pool with pvesh (like "pvesh add pools/Dev -members 123"). OK, pvesh doesn't know add - but my attempts with set were not successful. What is the right syntax? Udo
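    Pool membership is an update (PUT) on the pool object in the API, so with pvesh it is `set` with a `-vms` parameter rather than `add`. A minimal sketch, reusing the pool name Dev and VM ID 123 from the post:

    ```shell
    # Add VM 123 to pool Dev; -vms accepts a comma-separated list of VM IDs
    pvesh set /pools/Dev -vms 123
    ```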
  10. Trouble with bnx2 after upgrade to pve 6 due to a config issue inside the VM

    Hi, yesterday I updated a two-node cluster from 5.4 to 6.0-5 (pve-no-subscription). There is only one VM on this cluster, which was running before the upgrade (live-migrated to the updated second node and later live-migrated back). Due to a configuration error, the second VM NIC was tagged in the...
  11. Performance test (zfs) between pve5.4 + pve6.0

    Hi, yesterday I did a short performance test between pve5.4 and pve6.0, mainly to see if the zfs performance is better, because we have some trouble with mysql VMs on zfs (ssd zfs raid1). Test hardware: Dell R610 with 16GB RAM + HT on - 16 x Intel(R) Xeon(R) CPU X5560 @ 2.80GHz (2 Sockets) 2 *...
  12. max cluster nodes with pve6?

    Hi, with pve 6 a new corosync version is used. Are there any changes to the number of cluster nodes allowed in one cluster? If I remember right, the limit is currently 32 nodes, but fewer are recommended (how many?). Udo
  13. Why is the kernel still named 4.15.18-11-pve?

    Hi, yesterday I updated a cluster and today there is a new kernel available (enterprise repo). The changelog shows important bugfixes, but the name is still the same! Today before the update: pveversion -v proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve) pve-manager: 5.3-9 (running version...
  14. Bug with disconnected VM interface

    Hi, I found an ugly bug today. I had a cloned VM with the same network settings. Both VMs are running, but one with a disconnected network interface. I wanted to change the MAC address and then the IP inside the VM. After editing the MAC address, the settings are red for a short time, then black and...
  15. Access zfs snapshots inside an lxc container

    Hi, I've tested zfs snapshots with samba and shadow_copy2 to access old file versions created by zfs snapshots. This works well on the host, but I don't want to run samba on the host directly. But if I snapshot a container mount point, I can see the snapshots inside the container...
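    For reference, exposing ZFS snapshots as "Previous Versions" over samba goes through the shadow_copy2 VFS module pointed at the hidden .zfs/snapshot directory. A minimal smb.conf sketch, assuming snapshots carry zfs-auto-snapshot-style timestamp names (share name, path, and format string are assumptions):

    ```
    [share]
        path = /tank/data
        vfs objects = shadow_copy2
        shadow:snapdir = .zfs/snapshot
        shadow:format = zfs-auto-snap_daily-%Y-%m-%d-%H%M
        shadow:sort = desc
    ```

    Inside a container, this only works if the .zfs/snapshot directory of the mount point is actually visible to the container.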
  16. [SOLVED] Ceph upgrade to 12.2.10 hangs

    Hi, I just tried the upgrade on the first node and the process hangs, without activity. root@pve01:~# apt dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following packages will be upgraded: ceph...
  17. creation of ceph-mon ignores public_network

    Hi, I want to file a bug on https://bugzilla.proxmox.com but can't find the right section for ceph (pve-common/pve-manager?). If you create a new monitor now, the mon is named like mon.pve01 instead of mon.0, but the public_network is ignored, so the mon section in ceph.conf uses the IP...
  18. drop_caches doesn't finish during pveperf

    Hi, I have a server where drop_caches doesn't finish during a pveperf run. ps aux | grep echo root 835293 99.9 0.0 4292 708 pts/3 R+ Jul20 1140:50 sh -c echo 3 > /proc/sys/vm/drop_caches There is a VM with IO running, but the load is quite low and the underlying...
  19. What about ceph luminous 12.2.7?

    Hi, I'm lucky that I don't use EC pools with ceph 12.2.5, but perhaps others do?! Do you have an ETA for the important bug-fix release 12.2.7? Udo
  20. pve-kernel-4.15.18-1-pve + IXGBE driver

    Hi, last week I installed two new nodes in a cluster with the kernel 4.15.17-3-pve. The ixgbe driver is in use. This evening I saw the update to 4.15.18-1 with the note "drop out-of-tree IXGBE driver". Must I worry about the nodes which are still running the 4.15.17-3 kernel? The hosts are in...
