Upon upgrading (reinstalling nodes one by one) our cluster to PVE 7.0, I ran into the following problem: restoring an LXC container shows an error.
recovering backed-up configuration from 'NFS:backup/vzdump-lxc-321-2021_11_12-04_38_58.tar.lzo'
restoring...
I have recently installed four NVMe SSDs in a Proxmox 6 server as a RAIDZ array, only to discover that, according to the web interface, two of the drives show significant wearout after only a few weeks of use:
Since these are among the highest endurance consumer SSDs with 1665 TBW warranty for a...
I have tried to install Ubuntu 18.04 LTS Server in a KVM machine on a recently updated Proxmox 5.4 host, but after entering the IPv4 address manually and hitting "Save", the installer instantly restarts. I tried different hardware configurations (E1000 vs. VirtIO-net, disconnecting the adapter...
When creating a regular (RBD) Ceph pool, there are options in both the GUI and in pveceph to set the size (replication count) and the min. size (online replicas for read) of the pool. However, when creating a CephFS pool, neither the GUI nor pveceph provides an option to create one with a...
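One workaround that is sometimes suggested is to adjust the replication settings on the CephFS pools directly with the ceph CLI after they have been created. A sketch, assuming the default pool names (`cephfs_data` / `cephfs_metadata`, which may differ on your cluster):

```
# Set replication on the CephFS pools after creation.
# Pool names below are the usual defaults, not confirmed for this setup.
ceph osd pool set cephfs_data size 3
ceph osd pool set cephfs_data min_size 2
ceph osd pool set cephfs_metadata size 3
ceph osd pool set cephfs_metadata min_size 2
```

`ceph osd pool set` takes effect immediately; Ceph will backfill to reach the new replica count.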
Let's say my username is "pveuser@pve". If I query ACCESS/USERS, I get all the user data I'm allowed to see, among it my own, but ACCESS/USERS/PVEUSER@PVE gives a 403 Forbidden error.
Problem is I can't GET (or POST) ACCESS/USERS/PVEUSER@PVE to read (or write) my own data, unless I have the...
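For reference, a minimal sketch of how the request path for that endpoint can be built. The `/api2/json/access/users/{userid}` path is the standard Proxmox VE REST layout; whether percent-encoding the `@` in the userid makes any difference to the 403 here is untested, this just shows one way to form the URL:

```python
from urllib.parse import quote

def user_path(userid: str) -> str:
    # Hypothetical helper: percent-encode the userid (including '@')
    # before appending it to the REST path.
    return "/api2/json/access/users/" + quote(userid, safe="")

print(user_path("pveuser@pve"))
```

The same query can also be made from a node's shell with `pvesh get /access/users/pveuser@pve`, which uses root privileges and so sidesteps the permission question entirely.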
I am upgrading our cluster, node by node from PVE 4.4 to 5 following the wiki:
https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0
Several nodes upgraded perfectly, however on one node I get the following errors:
# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree...
According to the articles below, ZFS on Linux 0.7.7 has a disappearing-file bug and is not recommended for production environments:
https://www.servethehome.com/zfs-on-linux-0-7-7-disappearing-file-bug/
https://news.ycombinator.com/item?id=16797932
My test Proxmox box that's...
This keeps happening every few days on a single-CPU Sandy Bridge box running 3 Windows VMs on Proxmox 4.4. Can someone help me understand what's happening?
Mar 28 19:42:55 proxmox6 kernel: [133407.284601] general protection fault: 0000 [#1] SMP
Mar 28 19:42:55 proxmox6 kernel: [133407.284628]...
This issue has been with us since we upgraded our cluster to Proxmox 4.x, and converted our guests from OpenVZ to KVM. We have single and dual socket Westmere, Sandy Bridge and Ivy Bridge nodes, using ZFS RAID10 HDD or ZFS RAIDZ SSD arrays, and every one of them is affected.
Description
When...
Ceph has provided erasure coded pools for several years now (they were introduced in 2013), and according to many sources the technology is quite stable. (Erasure coded pools provide much more effective storage utilization for the same number of drives that can fail in a pool, much like RAID5...
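The storage-efficiency point can be illustrated with a quick calculation. The k=4, m=2 profile below is just an example, not something from the post; both configurations tolerate the loss of two OSDs, but the erasure-coded one leaves twice as much usable capacity:

```python
def usable_fraction_replicated(size: int) -> float:
    # Replicated pool: one usable copy out of `size` stored copies.
    return 1.0 / size

def usable_fraction_ec(k: int, m: int) -> float:
    # Erasure-coded pool: k data chunks out of k + m total chunks;
    # any m chunks can be lost without losing data.
    return k / (k + m)

print(usable_fraction_replicated(3))  # 3-copy replication, survives 2 failures
print(usable_fraction_ec(4, 2))       # k=4, m=2 EC profile, survives 2 failures
```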
When backing up some KVM guests from ZFS to NFS, vzdump gives the following error:
As you can see, vzdump attempts this freeze and fails repeatedly for exactly one hour; after that the backup completes in the normal time.
It only happens to a few VMs, most of them are not affected. Any...
So there is a howto on the wiki that details the setup of a 10 Gbit/s Ethernet network without using a network switch:
http://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
If I understand correctly, you need a two-port 10 GbE NIC (or two NICs) in each of your nodes, and you connect...
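The routed variant described on that wiki page boils down to giving both mesh ports the same address and adding a host route per peer. A sketch for one node of a three-node mesh (all addresses and interface names here are hypothetical, not taken from the wiki page):

```
# /etc/network/interfaces fragment for node1 in a 3-node full mesh.
# eth2 is cabled directly to node2, eth3 directly to node3.
auto eth2
iface eth2 inet static
    address 10.15.15.50
    netmask 255.255.255.0
    up ip route add 10.15.15.51/32 dev eth2
    down ip route del 10.15.15.51/32

auto eth3
iface eth3 inet static
    address 10.15.15.50
    netmask 255.255.255.0
    up ip route add 10.15.15.52/32 dev eth3
    down ip route del 10.15.15.52/32
```

Each peer is reached over its dedicated cable via the /32 route, so no switch is involved.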
Occasionally we experience unplanned, spontaneous reboots on our Proxmox nodes installed on ZFS. The problem we are having is related to vzdump backups: if a reboot happens during an active vzdump backup that locks a VM, after reboot the locked guest will not start, and needs to be manually...
I have a small 5 node Ceph (hammer) test cluster. Every node runs Proxmox, a Ceph MON and 1 or 2 OSDs. There are two pools defined, one with 2 copies (pool2), and one with 3 copies of data (pool3). Ceph has a dedicated 1Gbps network. There are a few RAW disks stored on pool2 at the moment...
We have a small Ceph Hammer cluster (only a few monitors and fewer than 10 OSDs), but it still proves very useful for low-IO guest storage. Our Ceph cluster runs on our Proxmox nodes but has its own separate gigabit LAN, and performance is adequate for our needs.
We would like to use it as backup...
At the moment I have two ethernet ports in each cluster node, both of them connected to a bridge.
eth0 > vmbr0 is 10.10.10.x and eth1 > vmbr1 is 192.168.0.x.
I would like to create another bridge (vmbr2) connected to eth1 with the 172.16.0.x subnet, is this possible somehow?
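A physical port can normally be enslaved to only one Linux bridge, so one common approach is to split eth1 with VLANs and attach a bridge to each VLAN interface. A sketch, where the VLAN IDs and addresses are made up for illustration and a VLAN-capable switch is assumed:

```
# /etc/network/interfaces sketch: two bridges sharing eth1 via VLANs.
auto vmbr1
iface vmbr1 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    bridge_ports eth1.100
    bridge_stp off
    bridge_fd 0

auto vmbr2
iface vmbr2 inet static
    address 172.16.0.10
    netmask 255.255.255.0
    bridge_ports eth1.200
    bridge_stp off
    bridge_fd 0
```

Alternatively, vmbr2 can be created with no bridge port at all and traffic routed to it on the host, if VLANs are not an option.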
Since we upgraded our cluster to PVE 4.3 from 3.4, all our OpenVZ containers have been converted to KVM virtual machines. In many of these guests we get frequent console alerts about CPU stalls, usually when the cluster node is under high IO load (for example when backing up or restoring VMs to...
After updating a two-node cluster to 4.3, I rebooted the nodes one by one (not at the same time). After the reboot none of the VMs were running, and trying to start them on any node gave a cluster error:
root@proxmox2:~# qm start 111
cluster not ready - no quorum?
Checking the cluster showed...
Upon upgrading our cluster to PVE 4, I realized that live migration of KVM guests on local ZFS storage (zvols) still does not work. Since vzdump live backups do work (presumably using ZFS snapshots), I wonder why this isn't implemented for migration, and when it can be expected. Is it on the...
I have an idea for an enhancement of vzdump: when creating a backup job, it would be great to have an option to store the guest's NAME in the backup filename (in addition to the VM ID).
So with the option disabled the filenames would look unchanged:
vzdump-qemu-240-2016_09_02-01_27_32.log...
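The proposed scheme could work by inserting the guest name between the VMID and the timestamp. A sketch of the idea (the function and the `webserver` guest name are hypothetical, just to show the resulting filenames):

```python
def backup_filename(vmtype, vmid, stamp, name=None):
    # Hypothetical naming scheme for the proposed vzdump option:
    # with `name` set, insert the guest name after the VMID.
    parts = ["vzdump-{}-{}".format(vmtype, vmid)]
    if name:  # only when the option is enabled
        parts.append(name)
    parts.append(stamp)
    return "-".join(parts) + ".log"

print(backup_filename("qemu", 240, "2016_09_02-01_27_32"))
print(backup_filename("qemu", 240, "2016_09_02-01_27_32", name="webserver"))
```

Keeping the name segment in a fixed position would let existing tooling that parses the VMID keep working unchanged.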