Search results

  1. Can't restore or create LXC container on new PVE7 node

    Upon upgrading (reinstalling one-by-one) our cluster to PVE 7.0, I ran into the following problem: restoring an LXC container fails with an error: recovering backed-up configuration from 'NFS:backup/vzdump-lxc-321-2021_11_12-04_38_58.tar.lzo' restoring... (restore sketch after the list)
  2. Incorrect NVMe SSD wearout displayed by Proxmox 6

    I have recently installed four NVMe SSDs in a Proxmox 6 server as a RAIDZ array, only to discover that according to the web interface two of the drives exhibit huge wearout after only a few weeks of use. Since these are among the highest endurance consumer SSDs with 1665 TBW warranty for a... (SMART cross-check sketch after the list)
  3. Problems installing Ubuntu 18.04 LTS in KVM

    I have tried to install Ubuntu 18.04 LTS Server in a KVM machine on a recently updated Proxmox 5.4 host, but after entering the IPv4 address manually, and hitting "Save" the installer instantly restarts. Tried different hardware configurations (E1000 vs. VirtIO-net, disconnecting the adapter...
  4. CephFS: How to create with different size?

    When creating a regular (RBD) Ceph pool, there are options in both the GUI and in pveceph to determine the size (replication count) and the min. size (online replicas for read) of the pool. However, when creating a CephFS pool, neither the GUI nor pveceph provides an option to create one with a... (workaround sketch after the list)
  5. Proxmox API questions about user data

    Let's say my username is "pveuser@pve". If I query ACCESS/USERS, I get all the user data I'm allowed to see, among it my own, but ACCESS/USERS/PVEUSER@PVE gives a 403 Forbidden error. The problem is I can't GET (or POST) ACCESS/USERS/PVEUSER@PVE to read (or write) my own data unless I have the... (pvesh sketch after the list)
  6. PVE 4.4 to 5.x upgrade problem

    I am upgrading our cluster, node by node from PVE 4.4 to 5 following the wiki: https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0 Several nodes upgraded perfectly, however on one node I get the following errors: # apt-get dist-upgrade Reading package lists... Done Building dependency tree...
  7. ZFS 0.7.7 may cause data loss, Proxmox 5.1 just updated to it!

    According to the articles below, ZFS on Linux 0.7.7 has a disappearing-file bug, and it is not recommended for production environments: https://www.servethehome.com/zfs-on-linux-0-7-7-disappearing-file-bug/ https://news.ycombinator.com/item?id=16797932 My test Proxmox box that's... (version check sketch after the list)
  8. general protection fault: 0000

    This keeps happening every few days on a single-CPU Sandy Bridge box running 3 Windows VMs on Proxmox 4.4. Can someone help me understand what's happening? Mar 28 19:42:55 proxmox6 kernel: [133407.284601] general protection fault: 0000 [#1] SMP Mar 28 19:42:55 proxmox6 kernel: [133407.284628]...
  9. KVM guests freeze (hung tasks) during backup/restore/migrate

    This issue has been with us since we upgraded our cluster to Proxmox 4.x, and converted our guests from OpenVZ to KVM. We have single and dual socket Westmere, Sandy Bridge and Ivy Bridge nodes, using ZFS RAID10 HDD or ZFS RAIDZ SSD arrays, and every one of them is affected. Description When...
  10. Ceph: Erasure coded pools planned?

    Ceph has provided erasure coded pools for several years now (introduced in 2013), and according to many sources the technology is quite stable. (Erasure coded pools provide much more effective storage utilization for the same number of drives that can fail in a pool, quite similarly to RAID5... (creation sketch after the list)
  11. vzdump error: guest-fsfreeze-freeze failed

    When backing up some KVM guests from ZFS to NFS, vzdump gives the following error: As you can see, it takes exactly one hour until vzdump attempts this freeze and fails many times; after that the backup completes in normal time. It only happens to a few VMs, most of them are not affected. Any...
  12. Anyone using 10gbe mesh/ring network for Ceph?

    So there is a howto on the wiki that details the setup of a 10 Gbit/s Ethernet network without using a network switch: http://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server If I understand correctly, you would need a two-port 10 GbE NIC (or two NICs) in each of your nodes, you connect... (addressing sketch after the list)
  13. After unplanned reboot locked VMs don't start

    Occasionally we experience unplanned, spontaneous reboots on our Proxmox nodes installed on ZFS. The problem we are having is related to vzdump backups: if a reboot happens during an active vzdump backup that locks a VM, after the reboot the locked guest will not start and needs to be manually... (unlock sketch after the list)
  14. KVM disk has disappeared from Ceph pool

    I have a small 5-node Ceph (Hammer) test cluster. Every node runs Proxmox, a Ceph MON, and 1 or 2 OSDs. There are two pools defined, one with 2 copies (pool2) and one with 3 copies of data (pool3). Ceph has a dedicated 1 Gbps network. There are a few RAW disks stored on pool2 at the moment... (rbd sketch after the list)
  15. Ceph (or CephFS) for vzdump backup storage?

    We have a small Ceph Hammer cluster (only a few monitors and fewer than 10 OSDs), but it still proves very useful for low-IO guest storage. Our Ceph cluster runs on our Proxmox nodes but has its own separate gigabit LAN, and performance is adequate for our needs. We would like to use it as backup... (mount sketch after the list)
  16. Multiple subnets on same interface?

    At the moment I have two Ethernet ports in each cluster node, both of them connected to a bridge: eth0 > vmbr0 is 10.10.10.x and eth1 > vmbr1 is 192.168.0.x. I would like to create another bridge (vmbr2) connected to eth1 with the 172.16.0.x subnet. Is this possible somehow? (one approach sketched after the list)
  17. Frequent CPU stalls in KVM guests during high IO on host

    Since we upgraded our cluster to PVE 4.3 from 3.4, all our OpenVZ containers have been converted to KVM virtual machines. In many of these guests we get frequent console alerts about CPU stalls, usually when the cluster node is under high IO load (for example when backing up or restoring VMs to...
  18. PVE 4.3 two node cluster does not start after reboot

    After updating a two node cluster to 4.3, I rebooted the nodes one by one (not at the same time). After the reboot none of the VMs were running, and trying to start them on any node gave a cluster error: root@proxmox2:~# qm start 111 cluster not ready - no quorum? Checking the cluster showed... (quorum sketch after the list)
  19. When will KVM live / suspend migration on ZFS work?

    Upon upgrading our cluster to PVE 4, I just realized that live migration of KVM guests on ZFS local storage (zvol) still does not work. Since vzdump live backups do work (presumably using ZFS snapshots), I wonder why it's not implemented for migration, and when is it expected? Is it on the...
  20. SUGGESTION: Store VM NAME in vzdump backup filename

    I have an idea for an enhancement to vzdump: when creating a backup job, it would be great to have an option to store the guest's NAME in the backup filename (in addition to the VM ID). So with the option disabled the filenames would look unchanged: vzdump-qemu-240-2016_09_02-01_27_32.log... (hook script sketch after the list)
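
For thread 1, rerunning the restore from a node shell usually reveals the full error that the GUI truncates. A minimal sketch, assuming the backup volume ID quoted in the snippet; the target storage is a placeholder:

    # Re-run the container restore on the CLI to capture the complete error.
    # CTID 321 matches the thread; --storage is a placeholder, adjust as needed.
    pct restore 321 NFS:backup/vzdump-lxc-321-2021_11_12-04_38_58.tar.lzo --storage local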
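
For thread 2, the wearout figure the GUI shows can be cross-checked against the drive's own SMART counters. A sketch assuming the device node /dev/nvme0 and that smartmontools is installed:

    # "Percentage Used" is the NVMe wear indicator the GUI value should track.
    smartctl -a /dev/nvme0 | grep -i 'percentage used'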
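
For thread 4, even where the GUI and pveceph expose no size option for CephFS, the pools backing CephFS are ordinary Ceph pools and can be adjusted afterwards. A sketch assuming the default pool names cephfs_data and cephfs_metadata, which may differ on your cluster:

    # Set the replica count (size) and minimum online replicas (min_size)
    # on the pools backing CephFS.
    ceph osd pool set cephfs_data size 3
    ceph osd pool set cephfs_data min_size 2
    ceph osd pool set cephfs_metadata size 3
    ceph osd pool get cephfs_data size    # verify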
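
For thread 5, pvesh exercises the same REST paths from a node shell, which helps separate permission problems from client-side mistakes. A sketch using the user ID from the snippet:

    # Listing may succeed while reading a single user returns 403,
    # matching the behaviour described in the thread.
    pvesh get /access/users
    pvesh get /access/users/pveuser@pve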
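
For thread 7, one command shows which ZFS module is actually loaded, so a box can be checked before (and after) a reboot. The sysfs path is standard for ZFS on Linux:

    # Version of the ZFS module currently loaded into the kernel
    cat /sys/module/zfs/version
    # Version shipped with the installed module packages
    modinfo zfs | grep -i ^version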
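
For thread 10, an erasure coded pool can be created with plain Ceph tooling regardless of GUI support; the profile name, k/m values, and PG counts below are illustrative:

    # Define an erasure code profile, then create a pool that uses it.
    ceph osd erasure-code-profile set myprofile k=3 m=2
    ceph osd pool create ecpool 128 128 erasure myprofile

Using such a pool for RBD images has version-dependent caveats (a cache tier on older releases, allow_ec_overwrites on newer ones).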
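
For thread 12, the switchless setup boils down to one point-to-point link per pair of nodes. A sketch for a single node with made-up interface names and addresses (each /30 carries exactly one link); the wiki page linked in the thread covers the full, persistent configuration:

    # Node A: direct links to node B and node C over the two 10G ports
    ip addr add 10.15.15.1/30 dev eth2    # peer: node B at 10.15.15.2
    ip addr add 10.15.15.5/30 dev eth3    # peer: node C at 10.15.15.6
    ip link set eth2 up
    ip link set eth3 up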
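
For thread 13, clearing the stale backup lock is a one-liner per guest; VMID 111 is illustrative:

    # Remove the lock left behind by the interrupted vzdump, then start.
    qm unlock 111
    qm start 111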
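
For thread 14, whether the image is really gone can be checked against the pool directly, bypassing the Proxmox storage layer. The pool name is from the snippet; the image name is hypothetical:

    # List RBD images in pool2, then inspect one of them
    rbd -p pool2 ls
    rbd -p pool2 info vm-100-disk-1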
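
For thread 15, one way to point vzdump at CephFS is to mount it on every node and register the mount point as a directory storage. The monitor address, secret file, and storage name are placeholders:

    # Kernel CephFS mount (adjust monitor address and secret file)
    mount -t ceph 10.10.10.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # Register the mount as a directory storage restricted to backups
    pvesm add dir cephfs-backup --path /mnt/cephfs --content backup

Newer PVE releases ship a native cephfs storage type, which makes this workaround unnecessary.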
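
For thread 16, eth1 cannot be enslaved to a second bridge, but a tagged VLAN sub-interface on it can. A sketch with a made-up VLAN tag (the switch must carry it); for a permanent setup the equivalent stanzas belong in /etc/network/interfaces:

    # Create a VLAN sub-interface on eth1 and bridge it as vmbr2
    ip link add link eth1 name eth1.10 type vlan id 10
    ip link add vmbr2 type bridge
    ip link set eth1.10 master vmbr2
    ip addr add 172.16.0.2/24 dev vmbr2
    ip link set eth1.10 up
    ip link set vmbr2 up

If the guests merely need a second subnet rather than a separate bridge, adding another address to vmbr1 is the simpler route.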
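
For thread 18, a two node cluster loses quorum whenever either vote is missing. For a maintenance window the expected vote count can be lowered by hand; note that this deliberately disables the usual split-brain protection:

    pvecm status       # confirm the quorum state first
    pvecm expected 1   # let a single node reach quorum alone
    qm start 111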
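
For thread 20, until such an option exists, a vzdump hook script can graft the guest name onto the finished archive. The sketch below assumes the HOSTNAME and TARFILE environment variables used by the sample vzdump-hook-script.pl shipped with PVE; both may differ between versions, so verify before relying on it:

    #!/bin/sh
    # /usr/local/bin/vzdump-name-hook.sh
    # Enable with a 'script: /usr/local/bin/vzdump-name-hook.sh' line
    # in /etc/vzdump.conf. Renames the archive after a successful backup.
    if [ "$1" = "backup-end" ] && [ -n "$TARFILE" ] && [ -n "$HOSTNAME" ]; then
        dir=$(dirname "$TARFILE")
        base=$(basename "$TARFILE")
        stem=${base%%.*}    # e.g. vzdump-qemu-240-2016_09_02-01_27_32
        ext=${base#*.}      # e.g. vma.lzo
        mv "$TARFILE" "$dir/$stem-$HOSTNAME.$ext"
    fi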
