Search results

  1. ZFS issue receiving on encrypted dataset

    There seems to be an issue with the current ZFS driver: receiving on encrypted datasets seems to trigger a null pointer dereference and a lockup that requires a hard reset of the node. https://github.com/openzfs/zfs/issues/11679 My Proxmox servers are affected. It seems there's a data...
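For context, the reported failure mode involves an ordinary send/receive into an encrypted target. A minimal sketch of the scenario (pool and dataset names are made up for illustration, not taken from the thread):

```shell
# Create an encrypted target dataset, then receive a stream into it.
# On affected ZFS versions this receive reportedly triggers a kernel
# null pointer dereference and locks up the node.
zfs create -o encryption=on -o keyformat=passphrase tank/encrypted
zfs snapshot tank/src@migrate
zfs send tank/src@migrate | zfs receive tank/encrypted/dst
```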
  2. Cannot move subvol-based CT(s) from zfs or local storage to ceph rbd or nfs

    I'm having issues moving a CT root fs from local storage (ZFS) to Ceph RBD. The problem only occurs on existing CTs created some time ago; at that time I was using Proxmox 4.x. The error does not occur on new CTs created with Proxmox 6.2 using an Ubuntu 20.x template. EDIT: It seems the issue is due...
  3. MFsys25 was nice. What next?

    Hi all, I once used an Intel Modular Server (MFSYS) in one of my deployments of Proxmox (1.5!) and that was a nice little blade setup. That system is now 8 years old (running the latest Proxmox 5) but needs replacement. Is there a similar blade setup to be bought today? I fail to find a...
  4. Server DOS (huge load) on container disk full

    Hello, I'm experiencing a problem caused by disk quota exhaustion in a container. Simply put, when the quota is reached inside a container, further writes are blocked; however, the writing process is not terminated but hangs indefinitely. When this happens, the load average on the physical...
  5. Cluster fails to restart/recover after network disconnection

    Hi, I have a few Proxmox nodes running in a cluster, with the cluster traffic on a separate network interface. If a cluster node loses its connection on this interface temporarily (e.g. when a cable is disconnected and reconnected), the node will never see the other nodes again until reboot. I tried issuing...
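For readers hitting the same state: a commonly attempted recovery (an assumption on my part, not a confirmed fix for this thread) is to restart the cluster stack on the affected node instead of rebooting it:

```shell
# Restart the corosync membership layer and the Proxmox cluster
# filesystem; service names are those used on PVE 4.x and later.
systemctl restart corosync
systemctl restart pve-cluster

# Then check whether the node sees its peers again
pvecm status
```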
  6. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    I am having stability issues with LXC containers after migration from OpenVZ. What happens is that when all memory in a container is used up, the OOM killer kicks in and kills processes (see attached example). If I try to restart a killed process, it will usually fail. A reboot of the container is...
  7. Proxmox 4.1 will not reboot server, kernel 4.2.8-1

    Hi, there's an issue with the latest Proxmox 4.1: it will not reboot the servers (mine, at least), but just stays in the "shutdown" state. To reproduce: #reboot .... ... Server has reached shutdown state. pveversion -v proxmox-ve: 4.1-39 (running kernel: 4.2.8-1-pve) pve-manager...
  8. Reducing migration time for LXC container

    With OpenVZ containers, we had a two-step migration process: in step 1 there was an initial rsync with the container still running. Then the container was stopped, rsynced again, and started on the new node. This shortened migration downtime by an order of magnitude on big containers...
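The two-step scheme described above can be sketched roughly as follows (container ID, paths, and hostname are placeholders; this is an illustration, not the actual OpenVZ implementation):

```shell
CTID=101
TARGET=root@newnode

# Step 1: bulk rsync while the container keeps running;
# the vast majority of data is copied here, with no downtime.
rsync -a --numeric-ids /var/lib/lxc/$CTID/ $TARGET:/var/lib/lxc/$CTID/

# Step 2: stop the container, rsync only the files that changed
# since step 1 (fast), then start it on the new node.
pct stop $CTID
rsync -a --numeric-ids --delete /var/lib/lxc/$CTID/ $TARGET:/var/lib/lxc/$CTID/
ssh $TARGET pct start $CTID
```

Downtime is bounded by the second rsync pass, which only has to copy the delta.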
  9. SECURITY: LXC can read server dmesg

    I have recently upgraded a cluster from 3.4 to 4.1. There's a security issue with LXC that I would like to bring to your attention: running dmesg inside a CT shows the host server's kernel log. In some cases this reveals process info from other containers. I would not expect this to be...
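On the host side, a partial mitigation that exists in mainline kernels (independent of any Proxmox-level fix) is the `kernel.dmesg_restrict` sysctl, which limits `dmesg` access to processes with `CAP_SYSLOG`:

```shell
# Deny dmesg to unprivileged users; note that root inside a
# privileged container may still pass this capability check.
sysctl -w kernel.dmesg_restrict=1

# Persist the setting across reboots
echo 'kernel.dmesg_restrict = 1' > /etc/sysctl.d/10-dmesg-restrict.conf
```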
  10. [SHEEPDOG]: Correct install

    Which way of installing a Sheepdog cluster is correct: https://pve.proxmox.com/wiki/Sheepdog_cluster_install (manual) or https://pve.proxmox.com/wiki/Storage:_Sheepdog (the Proxmox way)? And which is the recommended way? I would prefer the latest source version from GitHub and it's still not very...
  11. Kernel 4.x and Atom N2800 possible?

    Will it hang on reboot? I know KVM is not possible without virtualization support, but will there be problems with LXC containers or, even worse, a hang during boot? The N2800 is fine with 3.4 and kernel 2.6.x
  12. [SOLVED] Migration of LXC on ZFS loses underlying ZFS snapshots

    There's a bug (or a bad feature) when migrating LXC containers hosted on ZFS: it loses the snapshots. Longer explanation: I routinely snapshot all LXC containers for backup and replication. This is "a good thing" and has saved my ass a few times over the years. I discovered that proxmox will migrate...
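The routine described above boils down to standard `zfs snapshot`/`zfs send` usage; a sketch with made-up pool, dataset, and host names (the subvol naming follows the usual PVE convention):

```shell
# Snapshot the container's dataset
zfs snapshot rpool/data/subvol-101-disk-0@daily-2016-01-01

# List snapshots to verify what exists, e.g. before and after a migration
zfs list -t snapshot -r rpool/data/subvol-101-disk-0

# Replicate a snapshot to another node for backup
zfs send rpool/data/subvol-101-disk-0@daily-2016-01-01 | \
    ssh backupnode zfs receive backup/subvol-101-disk-0
```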
  13. Optimizing ZFS backend performance

    Hello, I think I have a problem with ZFS performance: it is much below what I see advertised on the forum, and below what I'd expect considering the hardware I'm using. Unfortunately I cannot see the issue, so I hope that someone will be smarter than me. The problem is the IOPS I can get from a ZFS pool with 6...
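For measuring the pool's IOPS in a way others can compare against, a common approach (my suggestion, not from the thread) is a direct-I/O random-write run with fio:

```shell
# 4k random writes against a file on the pool; adjust the path
# to a directory on your dataset. Note that older ZFS versions
# ignore O_DIRECT, so treat the numbers as indicative only.
mkdir -p /rpool/data/fiotest
fio --name=randwrite --directory=/rpool/data/fiotest \
    --rw=randwrite --bs=4k --size=1G --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based
```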
  14. CGroups (or equivalent) inside a LXC container

    I have a question regarding the new LXC containers in 4.0. I would like to run separate processes inside an LXC container, each in its own cgroup, so that a single process cannot take down the container (or the server, for that matter). On bare metal this is done using cgroups. On...
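What the poster describes looks like this with the plain cgroup v1 interface on bare metal (paths assume a mounted v1 memory controller; inside an LXC container this additionally requires a writable, nested cgroup tree, which is exactly the open question):

```shell
# Create a dedicated memory cgroup and cap it at 512 MiB
mkdir /sys/fs/cgroup/memory/myservice
echo $((512 * 1024 * 1024)) > /sys/fs/cgroup/memory/myservice/memory.limit_in_bytes

# Move the current shell into the cgroup, then exec the workload;
# if it leaks, only this cgroup hits its limit, not the whole container.
# /usr/local/bin/myservice is a placeholder for the real binary.
echo $$ > /sys/fs/cgroup/memory/myservice/cgroup.procs
exec /usr/local/bin/myservice
```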
  15. Qemu exposing ACPI tables to guest

    I'm in a situation where I need some of the ACPI tables of the physical server to be exposed to the guest (for licensing purposes). I know about the SLIC patch for SeaBIOS, but it does not apply here. I have media for Windows Server 2012 (HP) which refuses to boot on non-HP hardware...
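For what it's worth, stock QEMU has a generic `-acpitable` option for injecting a table from a file; whether Proxmox's wrapper passes it through cleanly is an assumption here, but `qm set --args` exists for exactly this kind of raw-argument passthrough:

```shell
# Dump the SLIC table from the physical host
cat /sys/firmware/acpi/tables/SLIC > /root/slic.bin

# Ask the VM's QEMU process to expose it to the guest
# (VM ID 100 is a placeholder)
qm set 100 --args '-acpitable file=/root/slic.bin'
```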
  16. Proxmox 3.4 won't boot on ZFS and 440AR in HBA mode (HP GL380 Gen9 server)

    Hi, I've been trying to get Proxmox 3.4 to boot from a native ZFS root, to no avail. It will boot from an ext3 partition, but not from ZFS. I don't see the GRUB prompt at all, just a cursor, and then the server resets itself and starts the boot procedure over and over. This is an HP 380 Gen9 using an HP 440ar...
  17. CT live migration errors

    I am experiencing occasional errors during live migration of OpenVZ containers. Sometimes the container will fail to un-suspend after it has been copied to the new node. The errors I see in the kernel logs are: CPT ERR: ffff88072f840000,5102 :rst_file: failed to fix up file content: -22 CPT...
  18. cgroups for sharing resources between VM/CT and ceph/gluster

    Perhaps it would be a good idea to start using control groups by default for controlling resources allocated to ceph/gluster and kvm/ct virtual machines. What do you think?
  19. KVM disk corruption on glusterfs

    Hi, I'm having issues with disk corruption on GlusterFS-hosted KVM images. Simply put, if a VM is running and I reboot one of the replicas holding the VM's disk image, the running VM starts throwing disk errors and eventually dies. After a reboot, I see disk corruption. I'm using cache=writethrough...
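For reference, the cache mode is set per disk in the VM's config; a sketch of switching it (VM ID, bus, and storage name are placeholders):

```shell
# Equivalent to cache=writethrough in the -drive argument QEMU receives
qm set 100 --virtio0 glusterstore:vm-100-disk-1,cache=writethrough
```

Note that the cache mode alone may not explain corruption across a replica reboot; Gluster-side self-heal behavior matters too.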
  20. KVM Disk corruption on glusterfs + libgfapi

    Sorry about the duplicate. The thread is here: https://forum.proxmox.com/threads/18252-KVM-disk-corruption-on-glusterfs
