Hi
After upgrading Proxmox 6 to 7 (it seems the upgrade did not fully complete), I can start LXC containers, but VMs fail to start with this output:
generating cloud-init ISO
qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_write_zeroes
TASK ERROR: command 'set -o pipefail &&...
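The undefined `rbd_aio_write_zeroes` symbol suggests `qemu-img` was updated to the PVE 7 build while the Ceph `librbd1` library stayed at the older version (or vice versa). A minimal check, assuming standard package names from a stock Proxmox VE install:

```shell
# Verify that pve-qemu-kvm and librbd1 are version-aligned (package names
# assumed from a standard Proxmox VE 7 install); a partial 6->7 upgrade can
# leave qemu-img linked against a Nautilus-era librbd1 that lacks
# rbd_aio_write_zeroes.
if command -v dpkg >/dev/null 2>&1; then
    dpkg -l pve-qemu-kvm librbd1 2>/dev/null | grep '^ii' \
        || echo "one or both packages are not installed on this host"
fi
# Completing the interrupted upgrade usually pulls both to matching versions:
# apt update && apt dist-upgrade
```

If `ldd "$(command -v qemu-img)" | grep librbd` points at an old library, finishing the dist-upgrade (or reinstalling both packages) should realign them.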
We have a cluster of proxmox servers (pve-manager/7.3-3)
Occasionally one of the servers freezes and restarts with this message in syslog:
kernel: i40e 0000:60:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
The message repeats many times before the server reboots.
How can I solve...
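The ENOSPC here is the X710's hardware filter table running out of space (too many MAC/VLAN filters on the PF, e.g. from many bridged VMs), after which the i40e driver forces the port into promiscuous mode. A quick way to gauge how often the host hits it, sketched against a sample log (the timestamps and the stand-in for `/var/log/syslog` are illustrative):

```shell
# Count how often the i40e filter-overflow message appears; a sample log
# stands in for /var/log/syslog here.
cat > /tmp/sample_syslog <<'EOF'
Jan 10 12:01:02 pve1 kernel: i40e 0000:60:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Jan 10 12:01:05 pve1 kernel: i40e 0000:60:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Jan 10 12:01:09 pve1 kernel: i40e 0000:60:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
EOF
grep -c 'I40E_AQ_RC_ENOSPC' /tmp/sample_syslog    # -> 3
```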
Hi
We have an issue of extremely high load
after disconnecting an NFS storage which still had a few cloud-init config drives attached to it.
Even after removing the cloud-init drives the load remains; moving the drives to another active NFS storage didn't reduce the load.
Also...
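Load that stays high after a lost NFS storage usually comes from processes stuck in uninterruptible sleep (state D) on the dead mount; they count toward the load average without using any CPU. A diagnostic sketch (the unmount path is only an example):

```shell
# List tasks in uninterruptible sleep; after a dead NFS mount these are
# typically kworkers or qemu processes blocked on NFS I/O.
ps -eo pid,stat,wchan:20,comm | awk 'NR == 1 || $2 ~ /^D/'
# A lazy, forced unmount of the dead export often clears them
# (example path -- substitute the real storage mount point):
# umount -f -l /mnt/pve/old-nfs
```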
While booting a live CD or an OS from a qcow2 image,
we get a kernel panic with the following error:
Fatal trap 12: page fault while in kernel mode
Attached below is a screen capture of the console,
and also a video capture:
https://www.screencast.com/t/lVfGR9lhpT
Is there any way to bypass the error?
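"Fatal trap 12: page fault while in kernel mode" is a FreeBSD guest panic, and these are often sensitive to the emulated CPU and machine type. A hypothetical fragment of the VM config (`/etc/pve/qemu-server/<vmid>.conf`) worth experimenting with — not a confirmed fix for this particular panic:

```
# illustrative settings only -- try passing the host CPU through and a
# different machine type; FreeBSD guests are sometimes unstable on the
# default kvm64 CPU model
cpu: host
machine: q35
```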
Thanks
Hi
We have a big problem with our proxmox cluster
We are seeing a lot of errors in syslog.
Below is an example from the logs; this time it started with the deletion of a qcow2 snapshot, though we get it randomly on all updated Proxmox hosts.
Each updated host gets a...
Hi
We are using Proxmox with storage that connects over fabric using the Mellanox driver.
We also use NVMesh as our storage management software, so each node sees the volume as local,
and we are using it as a LOCAL ZFS file system. The problem is that Proxmox doesn't see it as shared...
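Whether PVE treats a storage as shared is just a flag in `/etc/pve/storage.cfg`, not something it detects on its own. A hypothetical entry, assuming the NVMesh-backed pool is mounted at the same path with identical content on every node (a `zfspool` storage has no `shared` flag, but a directory storage on top of the pool does):

```
# /etc/pve/storage.cfg -- illustrative entry; the name and path are examples.
# Only set 'shared 1' if every node really sees the same block device,
# otherwise migrations can corrupt data.
dir: nvmesh-zfs
        path /nvmesh-pool/images
        content images,rootdir
        shared 1
```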
Hi
I have an issue with a specific VM that freezes while a snapshot is running and does not come back online; the file system just freezes without recovering.
this is the log from the task itself:
guest-fsfreeze-freeze problems - VM xxx qmp command 'guest-fsfreeze-freeze' failed -...
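When `guest-fsfreeze-freeze` hangs, the guest agent has frozen the filesystems but never thawed them. Assuming qemu-guest-agent is running inside the VM, the freeze state can be inspected and released from the host (VMID 100 is a placeholder):

```shell
# These commands only make sense on a PVE host; the guard keeps the
# sketch harmless elsewhere.
if command -v qm >/dev/null 2>&1; then
    qm agent 100 fsfreeze-status   # "thawed" on a healthy guest
    qm agent 100 fsfreeze-thaw     # release a freeze left behind by a snapshot
fi
```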
Hi
We have a setup of ZFS over iSCSI using LIO on Ubuntu 18, and we have an issue with high I/O load once we move disks bigger than 100 GB.
When the move starts the load is low until about half of the transfer is done, and then it gets extremely high.
Our setup is very high end ...
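One pattern that matches "low load until halfway through, then a spike" is the ZFS write throttle on the LIO target: writes land in RAM until dirty data hits its cap, then everything stalls behind the flush. A quick look at the relevant module parameters on the Ubuntu target (standard ZFS-on-Linux sysfs paths; the loop simply prints nothing on a host without ZFS loaded):

```shell
# Inspect the ZFS write-throttle tunables; tuning them is workload-specific,
# so this only reads the current values.
for p in zfs_dirty_data_max zfs_dirty_data_max_percent zfs_txg_timeout; do
    f="/sys/module/zfs/parameters/$p"
    [ -r "$f" ] && printf '%s = %s\n' "$p" "$(cat "$f")"
done
true  # keep exit status clean when ZFS is absent
```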