There seems to be an issue with the current ZFS driver.
Receiving on encrypted datasets seems to trigger a null pointer dereference and a lockup that requires a hard reset of the node.
My Proxmox servers are affected.
It seems there's a data...
I'm having issues moving CT root fs from local storage (zfs) to ceph rbd.
The problem only occurs on existing CTs, created some time ago. At that time I was using Proxmox 4.x.
The error does not occur on new CTs created with Proxmox 6.2, using the Ubuntu 20.x template.
EDIT: It seems the issue is due...
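For anyone landing on this thread: the move itself is normally done with `pct move_volume` on PVE 6.x. The VMID and storage name below are made up, and the sketch only prints the command so it is safe to run anywhere:

```shell
VMID=101            # hypothetical container ID
TARGET=ceph-rbd     # hypothetical Ceph RBD storage name
# On the PVE host you would run, with the CT stopped:
#   pct shutdown $VMID
#   pct move_volume $VMID rootfs $TARGET
# Here we only print the final command instead of executing it:
cmd="pct move_volume $VMID rootfs $TARGET"
echo "$cmd"
```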
I once used an Intel Modular Server (MFSYS) in one of my Proxmox deployments (back on 1.5!) and that was a nice little blade setup. That system is now 8 years old (running the latest Proxmox 5) but needs replacement.
Is there a similar blade setup available today? I fail to find a...
I'm experiencing a problem caused by disk quota exhaustion on a container.
Simply put, when the quota is reached inside a container, further writes are blocked; however, the writing process is not terminated but hangs indefinitely.
When this happens, the load average on the physical...
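A quick way to confirm this theory is to look for tasks stuck in uninterruptible sleep, since each D-state task adds one to the load average without using any CPU. The filter below runs against canned sample output so it is safe to execute anywhere; the PID and process names are made up:

```shell
# Sample lines standing in for real `ps -eo state,pid,comm` output:
sample='S 1 init
D 4242 rsyslogd
R 99 ps'
# Keep only D-state (uninterruptible sleep) tasks and print their PIDs:
hung=$(printf '%s\n' "$sample" | awk '$1 == "D" {print $2}')
echo "$hung"
# On a live host you would run:  ps -eo state,pid,comm | awk '$1 == "D"'
```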
I have a few proxmox nodes running in a cluster, with the cluster on a separate network interface.
If a cluster node loses its connection on this interface temporarily (e.g. when a cable is disconnected and reconnected), the node never sees the other nodes again until it is rebooted.
I tried issuing...
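For what it's worth, on PVE 4.x and later the node can often be rejoined without a reboot by restarting corosync and the cluster filesystem. This is what has worked for similar reports, not an official fix; the sketch prints the command instead of executing it so it is safe to run anywhere:

```shell
# On the affected node, as root, once the link is back up:
cmd="systemctl restart corosync pve-cluster"
echo "$cmd"
```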
I am having stability issues with LXC containers after migrating from OpenVZ.
What happens is that when all the memory in a container is used up, the OOM killer kicks in and kills processes (see attached example).
If I try to restart a killed process, it will usually fail. A reboot of the container is...
There's an issue with the latest Proxmox 4.1. It will not reboot the servers (my servers, at least), but just stays in the "shutdown" state:
Server has reached shutdown state.
proxmox-ve: 4.1-39 (running kernel: 4.2.8-1-pve)
With OpenVZ containers, we had a two-step migration process: in step 1 an initial rsync ran against the still-running container; the container was then stopped, rsynced again, and started on the new node.
This would shorten migration downtime by an order of magnitude on big containers...
I have recently upgraded a cluster from 3.4 to 4.1.
There's a security issue with LXC that I would like to bring to your attention.
Running dmesg inside a CT shows the host kernel's messages. In some cases this reveals process information from other containers.
I would not expect this to be...
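One host-side mitigation I'm aware of is the `kernel.dmesg_restrict` sysctl, which limits reading the kernel ring buffer to processes holding CAP_SYSLOG, so unprivileged containers lose access to dmesg. Whether that is sufficient for your threat model is another question:

```
# /etc/sysctl.d/10-dmesg.conf -- require CAP_SYSLOG for dmesg(1)
kernel.dmesg_restrict = 1
```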
Which way of installing a Sheepdog cluster is correct:
https://pve.proxmox.com/wiki/Sheepdog_cluster_install (manual) or
And which is the recommended way? I would prefer the latest source version from GitHub, and it's still not very...
There's a bug (bad feature) when migrating LXC containers hosted on ZFS: it loses the snapshots.
Longer explanation: I routinely snapshot all LXC containers for backup and replication. This is "a good thing" and has saved my azz a few times over the years.
I discovered that proxmox will migrate...
I think I have a problem with ZFS performance, which is far below what I see reported on the forum, especially considering the hardware I'm using. Unfortunately I cannot see the issue myself, so I hope that someone will be smarter than me.
The problem is the IOPS I can get from a ZFS pool with 6...
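Before digging deeper, it may be worth measuring sync-write IOPS directly. fio with a 4k randwrite job is the usual tool, but as a dependency-free sanity check dd can flush every block; the file name is arbitrary, and you would run this in a directory on the pool in question:

```shell
# 100 x 4 KiB synchronous writes; dd prints the elapsed time, from which
# IOPS = 100 / seconds. oflag=dsync forces each block to stable storage.
dd if=/dev/zero of=iops-probe.bin bs=4k count=100 oflag=dsync 2>&1 | tail -n 1
size=$(stat -c %s iops-probe.bin)   # sanity check: 100 * 4096 bytes written
rm -f iops-probe.bin
echo "$size"
```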
I have a question regarding the new LXC containers in 4.0.
I would like to be able to run separate processes inside an LXC container, each in its own cgroup, so that a single runaway process cannot take down the container (or the server, for that matter).
On bare metal this is done using cgroups. On...
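In case it helps: in a systemd-based container, per-process cgroup placement can be done with `systemd-run` (the `--scope`, `MemoryMax`, and `TasksMax` bits are real systemd features; the values and the `mydaemon` name are just examples, and cgroup delegation must be allowed). As a dependency-free illustration that a limit can be confined to one process, the runnable part below uses a subshell rlimit, which is not a cgroup but shows the same isolation idea:

```shell
# Real approach (needs systemd inside the CT and cgroup delegation):
#   systemd-run --scope -p MemoryMax=512M -p TasksMax=100 mydaemon
# Runnable illustration: the lowered limit applies only inside the subshell.
inner=$( (ulimit -n 64; ulimit -n) )   # limit set and read in a child process
outer=$(ulimit -n)                     # parent shell keeps its original limit
echo "inner=$inner outer=$outer"
```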
I'm in a situation where I need some of the ACPI tables of the physical server to be exposed to the guest (for licensing purposes). I know about the SLIC patch for seabios, but it does not apply here.
I have a media for windows 2012 server (HP) which refuses to boot on non-hp hardware...
I've been trying to get Proxmox 3.4 to boot from a native ZFS root, to no avail. It will boot from an ext3 partition, but not from ZFS. I don't see the GRUB prompt at all, just a cursor, and then the server resets itself and starts the boot procedure over and over.
This is an HP380 Gen9 using an HP 440ar...
I am experiencing occasional errors during live migration of OpenVZ containers.
Sometimes the container will fail to un-suspend after it has been copied to the new node.
The errors I see in the kernel logs are:
CPT ERR: ffff88072f840000,5102 :rst_file: failed to fix up file content: -22
I'm having issues with disk corruption for glusterfs hosted kvm images.
Simply put, if a VM is running and I reboot one of the replicas holding its disk image, the running VM starts throwing disk errors and eventually dies. After a reboot, I see disk corruption.
I'm using cache=writethrough...
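For reference, the cache mode lives on the disk line of the VM config. This fragment is purely illustrative (VMID, storage, and volume names are made up); `cache=none` (direct I/O) may also be worth testing while a replica is down:

```
# /etc/pve/qemu-server/100.conf (example values)
virtio0: glusterstore:vm-100-disk-1,cache=writethrough
```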